Sample records for neuron networks method

  1. Optimization Methods for Spiking Neurons and Networks

    PubMed Central

    Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph

    2011-01-01

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265
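
    As a rough illustration of the GA-based configuration described above, the sketch below evolves the parameters of a single adaptive integrate-and-fire neuron toward a target firing rate. The neuron model, parameter ranges, fitness function, and GA settings are illustrative assumptions, not the ones used in the paper or on the VLSI chip.

      # Hedged sketch: a toy genetic algorithm that tunes adaptive integrate-and-fire
      # parameters toward a target firing rate. All numbers are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      def firing_rate(params, t_max=500.0, dt=0.1, i_ext=30.0):
          """Spike-triggered-adaptation IF neuron; returns firing rate in Hz."""
          tau_m, tau_w, b, v_th = params
          v, w, spikes = 0.0, 0.0, 0
          for _ in range(int(t_max / dt)):
              v += dt * (-v + i_ext - w) / tau_m
              w += dt * (-w) / tau_w
              if v >= v_th:          # spike, reset, and increment adaptation current
                  v, w, spikes = 0.0, w + b, spikes + 1
          return 1000.0 * spikes / t_max

      def fitness(params, target=20.0):
          return -abs(firing_rate(params) - target)

      lo = np.array([5.0, 20.0, 0.0, 10.0])     # bounds: tau_m, tau_w, b, v_th
      hi = np.array([50.0, 400.0, 20.0, 25.0])
      pop = rng.uniform(lo, hi, size=(30, 4))

      for _ in range(25):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)[::-1][:8]]          # truncation selection
          children = []
          while len(children) < len(pop) - len(parents):
              pa, pb = parents[rng.integers(len(parents), size=2)]
              child = np.where(rng.random(4) < 0.5, pa, pb)    # uniform crossover
              child += rng.normal(0.0, 0.05, 4) * (hi - lo)    # Gaussian mutation
              children.append(np.clip(child, lo, hi))
          pop = np.vstack([parents, children])

      best = max(pop, key=fitness)
      print("best parameters:", best, "-> rate", firing_rate(best), "Hz")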

  2. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as
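
    Ingredient (i), the integrating-factor treatment of the stiff voltage equation, can be sketched as follows. Assuming the conductances are held constant over one time step (the paper's actual scheme, with spike-spike corrections and event clustering, is considerably more involved), the update is exact and remains stable even when the total conductance times the step size is large.

      # Minimal sketch of an exponential integrating-factor step for a conductance-
      # based IF voltage equation dv/dt = -g_tot(t) (v - v_eff(t)); values assumed.
      import numpy as np

      def exp_step(v, g_leak, g_exc, g_inh, dt, e_leak=-65.0, e_exc=0.0, e_inh=-80.0):
          g_tot = g_leak + g_exc + g_inh
          v_eff = (g_leak * e_leak + g_exc * e_exc + g_inh * e_inh) / g_tot
          return v_eff + (v - v_eff) * np.exp(-g_tot * dt)   # exact if g's are frozen over dt

      rng = np.random.default_rng(0)
      v_exp, v_euler, dt = -65.0, -65.0, 1.0                 # ms; deliberately coarse step
      for _ in range(300):
          g_e = 1.5 + rng.random()                           # crude high-conductance fluctuation
          v_exp = exp_step(v_exp, 0.05, g_e, 0.5, dt)
          g_tot = 0.05 + g_e + 0.5
          v_eff = (0.05 * -65.0 + g_e * 0.0 + 0.5 * -80.0) / g_tot
          v_euler += dt * (-g_tot) * (v_euler - v_eff)       # forward Euler: unstable when g_tot*dt > 2
      print("integrating factor:", round(v_exp, 2), " forward Euler:", v_euler)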

  3. Revealing degree distribution of bursting neuron networks.

    PubMed

    Shen, Yu; Hou, Zhonghuai; Xin, Houwen

    2010-03-01

    We present a method to infer the degree distribution of a bursting neuron network from its dynamics. Burst synchronization (BS) of coupled Morris-Lecar neurons has been studied under the weak coupling condition. In the BS state, all the neurons start and end bursting almost simultaneously, while the spikes inside the burst are incoherent among the neurons. Interestingly, we find that the spike amplitude of a given neuron shows an excellent linear relationship with its degree, which makes it possible to estimate the degree distribution of the network by simple statistics of the spike amplitudes. We demonstrate the validity of this scheme on scale-free as well as small-world networks. The underlying mechanism of such a method is also briefly discussed.
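
    The amplitude-to-degree idea lends itself to a very short sketch: assuming the reported linear relationship between a neuron's spike amplitude in the burst-synchronized state and its degree, a linear fit on a few calibration neurons of known degree converts measured amplitudes into degree estimates, and a histogram of those estimates approximates the degree distribution. The data below are synthetic stand-ins, not the Morris-Lecar simulations of the paper.

      # Hedged sketch of the amplitude-to-degree estimate on synthetic data.
      import numpy as np

      rng = np.random.default_rng(1)

      true_degrees = rng.integers(2, 30, size=500)                       # hypothetical degrees
      amplitudes = 10.0 + 0.8 * true_degrees + rng.normal(0, 0.5, 500)   # assumed linear relation

      calib = rng.choice(500, size=10, replace=False)                    # neurons with known degree
      slope, intercept = np.polyfit(amplitudes[calib], true_degrees[calib], 1)

      est_degrees = np.round(slope * amplitudes + intercept).astype(int)
      hist, edges = np.histogram(est_degrees, bins=np.arange(0, 35))
      print("estimated degree distribution:", hist)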

  4. Cortical network modeling: analytical methods for firing rates and some properties of networks of LIF neurons.

    PubMed

    Tuckwell, Henry C

    2006-01-01

    frequencies. The theory may also be applied to sparsely connected networks, whose firing behaviour was found to change abruptly as the probability of a connection passed through a critical value. The analytical method was also found to be useful for a feed-forward excitatory network and a network of excitatory and inhibitory neurons.

  5. Numerical methods for solving moment equations in kinetic theory of neuronal network dynamics

    NASA Astrophysics Data System (ADS)

    Rangan, Aaditya V.; Cai, David; Tao, Louis

    2007-02-01

    Recently developed kinetic theory and related closures for neuronal network dynamics have been demonstrated to be a powerful theoretical framework for investigating coarse-grained dynamical properties of neuronal networks. The moment equations arising from the kinetic theory are a system of (1 + 1)-dimensional nonlinear partial differential equations (PDE) on a bounded domain with nonlinear boundary conditions. The PDEs themselves are self-consistently specified by parameters which are functions of the boundary values of the solution. The moment equations can be stiff in space and time. Numerical methods are presented here for efficiently and accurately solving these moment equations. The essential ingredients in our numerical methods include: (i) the system is discretized in time with an implicit Euler method within a spectral deferred correction framework, therefore, the PDEs of the kinetic theory are reduced to a sequence, in time, of boundary value problems (BVPs) with nonlinear boundary conditions; (ii) a set of auxiliary parameters is introduced to recast the original BVP with nonlinear boundary conditions as BVPs with linear boundary conditions - with additional algebraic constraints on the auxiliary parameters; (iii) a careful combination of two Newton's iterates for the nonlinear BVP with linear boundary condition, interlaced with a Newton's iterate for solving the associated algebraic constraints is constructed to achieve quadratic convergence for obtaining the solutions with self-consistent parameters. It is shown that a simple fixed-point iteration can only achieve a linear convergence for the self-consistent parameters. The practicability and efficiency of our numerical methods for solving the moment equations of the kinetic theory are illustrated with numerical examples. It is further demonstrated that the moment equations derived from the kinetic theory of neuronal network dynamics can very well capture the coarse-grained dynamical properties of

  6. Neuronal network models of epileptogenesis

    PubMed Central

    Abdullahi, Aminu T.; Adamu, Lawan H.

    2017-01-01

    Epilepsy is a chronic neurological condition, following some trigger, transforming a normal brain to one that produces recurrent unprovoked seizures. In the search for the mechanisms that best explain the epileptogenic process, there is a growing body of evidence suggesting that the epilepsies are network level disorders. In this review, we briefly describe the concept of neuronal networks and highlight 2 methods used to analyse such networks. The first method, graph theory, is used to describe general characteristics of a network to facilitate comparison between normal and abnormal networks. The second, dynamic causal modelling, is useful in the analysis of the pathways of seizure spread. We concluded that the end results of the epileptogenic process are best understood as abnormalities of neuronal circuitry and not simply as molecular or cellular abnormalities. The network approach promises to generate new understanding and more targeted treatment of epilepsy. PMID:28416779

  7. Inferring Single Neuron Properties in Conductance Based Balanced Networks

    PubMed Central

    Pool, Román Rossi; Mato, Germán

    2011-01-01

    Balanced states in large networks are a usual hypothesis for explaining the variability of neural activity in cortical systems. In this regime the statistics of the inputs is characterized by static and dynamic fluctuations. The dynamic fluctuations have a Gaussian distribution. Such statistics allow the use of reverse correlation methods, by recording synaptic inputs and the spike trains of ongoing spontaneous activity without any additional input. By using this method, properties of the single neuron dynamics that are masked by the balanced state can be quantified. To show the feasibility of this approach we apply it to large networks of conductance based neurons. The networks are classified as Type I or Type II according to the bifurcations which neurons of the different populations undergo near the firing onset. We also analyze mixed networks, in which each population has a mixture of different neuronal types. We determine under which conditions the intrinsic noise generated by the network can be used to apply reverse correlation methods. We find that under realistic conditions we can ascertain with low error the types of neurons present in the network. We also find that data from neurons with similar firing rates can be combined to perform covariance analysis. We compare the results of these methods (which do not require any external input) to the standard procedure (which requires the injection of Gaussian noise into a single neuron). We find a good agreement between the two procedures. PMID:22016730

  8. Dynamical estimation of neuron and network properties III: network analysis using neuron spike times.

    PubMed

    Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2014-06-01

    Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.

  9. Network reconfiguration and neuronal plasticity in rhythm-generating networks.

    PubMed

    Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino

    2011-12-01

    Neuronal networks are highly plastic and reconfigure in a state-dependent manner. The plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only to maintain network stability but also to adapt networks, in both the short and long term, to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.

  10. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple, stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785

  11. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
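
    The brute-force exploration described here amounts to gridding a parameter space, simulating every combination, storing a summary of the behavior, and then querying the resulting database for functional outputs. The sketch below does this for a two-parameter leaky integrate-and-fire neuron with an arbitrary "functional" criterion (firing rate in a target band); the model and criterion are placeholders, not those of any database discussed in the chapter.

      # Hedged sketch of a brute-force model database: simulate a LIF neuron on a
      # 2-D parameter grid, store its firing rate, and query for "functional" cells.
      import numpy as np

      def lif_rate(tau_m, drive, v_th=1.0, dt=0.1, t_max=1000.0):
          v, spikes = 0.0, 0
          for _ in range(int(t_max / dt)):
              v += dt * (drive - v) / tau_m
              if v >= v_th:
                  v, spikes = 0.0, spikes + 1
          return 1000.0 * spikes / t_max          # Hz

      taus = np.linspace(5.0, 40.0, 15)           # membrane time constants (ms)
      drives = np.linspace(0.8, 2.0, 15)          # dimensionless input drives
      database = np.array([[lif_rate(t, d) for d in drives] for t in taus])

      # query: which parameter combinations give a "functional" rate of 10-30 Hz?
      functional = np.argwhere((database >= 10.0) & (database <= 30.0))
      for i, j in functional[:5]:
          print(f"tau_m={taus[i]:.1f} ms, drive={drives[j]:.2f} -> {database[i, j]:.1f} Hz")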

  12. Recording axonal conduction to evaluate the integration of pluripotent cell-derived neurons into a neuronal network.

    PubMed

    Shimba, Kenta; Sakai, Koji; Takayama, Yuzo; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-10-01

    Stem cell transplantation is a promising therapy to treat neurodegenerative disorders, and a number of in vitro models have been developed for studying interactions between grafted neurons and the host neuronal network to promote drug discovery. However, methods capable of evaluating the process by which stem cells integrate into the host neuronal network are lacking. In this study, we applied an axonal conduction-based analysis to a co-culture study of primary and differentiated neurons. Mouse cortical neurons and neuronal cells differentiated from P19 embryonal carcinoma cells, a model for early neural differentiation of pluripotent stem cells, were co-cultured in a microfabricated device. The somata of these cells were separated by the co-culture device, but their axons were able to elongate through microtunnels and then form synaptic contacts. Propagating action potentials were recorded from these axons by microelectrodes embedded at the bottom of the microtunnels and sorted into clusters representing individual axons. While the number of axons of cortical neurons increased until 14 days in vitro and then decreased, those of P19 neurons increased throughout the culture period. Network burst analysis showed that P19 neurons participated in approximately 80% of the bursting activity after 14 days in vitro. Interestingly, the axonal conduction delay of P19 neurons was significantly greater than that of cortical neurons, suggesting that there are some physiological differences in their axons. These results suggest that our method is suitable for evaluating the process by which stem cell-derived neurons integrate into a host neuronal network.

  13. Simulating synchronization in neuronal networks

    NASA Astrophysics Data System (ADS)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
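
    A minimal version of the simulation described above can be put together with networkx and a pulse-coupled leaky integrate-and-fire model; the parameter values and the crude synchrony index below are illustrative assumptions rather than the article's measures.

      # Hedged sketch: Watts-Strogatz connectivity driving a pulse-coupled LIF
      # network, comparing a rough synchrony index at two rewiring probabilities.
      import numpy as np
      import networkx as nx

      def simulate(p_rewire, n=100, k=6, t_max=2000, dt=0.5, seed=0):
          rng = np.random.default_rng(seed)
          adj = nx.to_numpy_array(nx.watts_strogatz_graph(n, k, p_rewire, seed=seed))
          v = rng.uniform(0.0, 1.0, n)          # dimensionless LIF voltages
          tau, v_th, drive, w_syn = 20.0, 1.0, 1.2, 0.08
          pop_rate = []
          for _ in range(int(t_max / dt)):
              v += dt * (-v + drive) / tau
              spiking = v >= v_th
              v[spiking] = 0.0
              v += w_syn * (adj @ spiking)      # instantaneous excitatory pulses
              pop_rate.append(spiking.sum())
          r = np.array(pop_rate, dtype=float)
          return r.var() / max(r.mean(), 1e-9)  # Fano-like index of population spike counts

      print("lattice-like (p=0.0):", simulate(0.0))
      print("small-world  (p=0.1):", simulate(0.1))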

  14. An artificial network model for estimating the network structure underlying partially observed neuronal signals.

    PubMed

    Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun

    2014-01-01

    Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  15. Uncovering Neuronal Networks Defined by Consistent Between-Neuron Spike Timing from Neuronal Spike Recordings

    PubMed Central

    2018-01-01

    It is widely assumed that distributed neuronal networks are fundamental to the functioning of the brain. Consistent spike timing between neurons is thought to be one of the key principles for the formation of these networks. This can involve synchronous spiking or spiking with time delays, forming spike sequences when the order of spiking is consistent. Finding networks defined by their sequence of time-shifted spikes, denoted here as spike timing networks, is a tremendous challenge. As neurons can participate in multiple spike sequences at multiple between-spike time delays, the possible complexity of networks is prohibitively large. We present a novel approach that is capable of (1) extracting spike timing networks regardless of their sequence complexity, and (2) describing their spiking sequences with high temporal precision. We achieve this by decomposing frequency-transformed neuronal spiking into separate networks, characterizing each network’s spike sequence by a time delay per neuron, forming a spike sequence timeline. These networks provide a detailed template for an investigation of the experimental relevance of their spike sequences. Using simulated spike timing networks, we show network extraction is robust to spiking noise, spike timing jitter, and partial occurrences of the involved spike sequences. Using rat multineuron recordings, we demonstrate the approach is capable of revealing real spike timing networks with sub-millisecond temporal precision. By uncovering spike timing networks, the prevalence, structure, and function of complex spike sequences can be investigated in greater detail, allowing us to gain a better understanding of their role in neuronal functioning. PMID:29789811

  16. Network synchronization in hippocampal neurons.

    PubMed

    Penn, Yaron; Segal, Menahem; Moses, Elisha

    2016-03-22

    Oscillatory activity is widespread in dynamic neuronal networks. The main paradigm for the origin of periodicity consists of specialized pacemaking elements that synchronize and drive the rest of the network; however, other models exist. Here, we studied the spontaneous emergence of synchronized periodic bursting in a network of cultured dissociated neurons from rat hippocampus and cortex. Surprisingly, about 60% of all active neurons were self-sustained oscillators when disconnected, each with its own natural frequency. The individual neuron's tendency to oscillate and the corresponding oscillation frequency are controlled by its excitability. The single neuron intrinsic oscillations were blocked by riluzole, and are thus dependent on persistent sodium leak currents. Upon a gradual retrieval of connectivity, the synchrony evolves: Loose synchrony appears already at weak connectivity, with the oscillators converging to one common oscillation frequency, yet shifted in phase across the population. Further strengthening of the connectivity causes a reduction in the mean phase shifts until zero-lag is achieved, manifested by synchronous periodic network bursts. Interestingly, the frequency of network bursting matches the average of the intrinsic frequencies. Overall, the network behaves like other universal systems, where order emerges spontaneously by entrainment of independent rhythmic units. Although simplified with respect to circuitry in the brain, our results attribute a basic functional role for intrinsic single neuron excitability mechanisms in driving the network's activity and dynamics, contributing to our understanding of developing neural circuits.

  17. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics.

    PubMed

    Ly, Cheng

    2013-10-01

    The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.

  18. An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons

    PubMed Central

    Li, Jing; Katori, Yuichi; Kohno, Takashi

    2012-01-01

    This paper presents a digital silicon neuronal network which simulates the nervous system of living organisms and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure together with shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to handle data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases as the similarity between the input and the stored pattern increases. Synchronization of neurons is observed when a stored pattern is successfully retrieved. PMID:23269911

  19. Cultured neuronal networks as environmental biosensors.

    PubMed

    O'Shaughnessy, Thomas J; Gray, Samuel A; Pancrazio, Joseph J

    2004-01-01

    Contamination of water by toxins, either intentionally or unintentionally, is a growing concern for both military and civilian agencies and thus there is a need for systems capable of monitoring a wide range of natural and industrial toxicants. The EILATox-Oregon Workshop held in September 2002 provided an opportunity to test the capabilities of a prototype neuronal network-based biosensor with unknown contaminants in water samples. The biosensor is a portable device capable of recording the action potential activity from a network of mammalian neurons grown on glass microelectrode arrays. Changes in the action potential firing rate across the network are monitored to determine exposure to toxicants. A series of three neuronal networks derived from mice was used to test seven unknown samples. Two of these unknowns later were revealed to be blanks, to which the neuronal networks did not respond. Of the five remaining unknowns, a significant change in network activity was detected for four of the compounds at concentrations below a lethal level for humans: mercuric chloride, sodium arsenite, phosdrin and chlordimeform. These compounds--two heavy metals, an organophosphate and an insecticide--demonstrate the breadth of detection possible with neuronal networks. The results generated at the workshop show the promise of the neuronal network biosensor as an environmental detector but there is still considerable effort needed to produce a device suitable for routine environmental threat monitoring.

  20. Network-induced chaos in integrate-and-fire neuronal ensembles.

    PubMed

    Zhou, Douglas; Rangan, Aaditya V; Sun, Yi; Cai, David

    2009-09-01

    It has been shown that a single standard linear integrate-and-fire (IF) neuron under a general time-dependent stimulus cannot possess chaotic dynamics despite the firing-reset discontinuity. Here we address the issue of whether conductance-based, pulsed-coupled network interactions can induce chaos in an IF neuronal ensemble. Using numerical methods, we demonstrate that all-to-all, homogeneously pulse-coupled IF neuronal networks can indeed give rise to chaotic dynamics under an external periodic current drive. We also provide a precise characterization of the largest Lyapunov exponent for these high dimensional nonsmooth dynamical systems. In addition, we present a stable and accurate numerical algorithm for evaluating the largest Lyapunov exponent, which can overcome difficulties encountered by traditional methods for these nonsmooth dynamical systems with degeneracy induced by, e.g., refractoriness of neurons.

  1. Shaping Neuronal Network Activity by Presynaptic Mechanisms

    PubMed Central

    Ashery, Uri

    2015-01-01

    Neuronal microcircuits generate oscillatory activity, which has been linked to basic functions such as sleep, learning and sensorimotor gating. Although synaptic release processes are well known for their ability to shape the interaction between neurons in microcircuits, most computational models do not simulate the synaptic transmission process directly and hence cannot explain how changes in synaptic parameters alter neuronal network activity. In this paper, we present a novel neuronal network model that incorporates presynaptic release mechanisms, such as vesicle pool dynamics and calcium-dependent release probability, to model the spontaneous activity of neuronal networks. The model, which is based on modified leaky integrate-and-fire neurons, generates spontaneous network activity patterns, which are similar to experimental data and robust under changes in the model's primary gain parameters such as excitatory postsynaptic potential and connectivity ratio. Furthermore, it reliably recreates experimental findings and provides mechanistic explanations for data obtained from microelectrode array recordings, such as network burst termination and the effects of pharmacological and genetic manipulations. The model demonstrates how elevated asynchronous release, but not spontaneous release, synchronizes neuronal network activity and reveals that asynchronous release enhances utilization of the recycling vesicle pool to induce the network effect. The model further predicts a positive correlation between vesicle priming at the single-neuron level and burst frequency at the network level; this prediction is supported by experimental findings. Thus, the model is utilized to reveal how synaptic release processes at the neuronal level govern activity patterns and synchronization at the network level. PMID:26372048

  2. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
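
    For reference, the posterior over hidden world states that the spiking network is said to approximate by sampling can be computed exactly for a small hidden Markov model with the standard forward-filtering recursion. The matrices below are arbitrary examples, and none of this reflects the paper's neural implementation.

      # Hedged sketch: exact forward filtering for a two-state hidden Markov model,
      # i.e. the textbook computation the spiking network is said to approximate.
      import numpy as np

      T = np.array([[0.95, 0.05],      # state-transition matrix p(x_t | x_{t-1})
                    [0.10, 0.90]])
      E = np.array([[0.8, 0.2],        # emission matrix p(obs | x)
                    [0.3, 0.7]])

      def forward_filter(observations, prior=np.array([0.5, 0.5])):
          belief = prior.copy()
          history = []
          for obs in observations:
              belief = T.T @ belief            # predict: one step of the Markov chain
              belief *= E[:, obs]              # update: weight by observation likelihood
              belief /= belief.sum()           # normalize to a posterior
              history.append(belief.copy())
          return np.array(history)

      print(forward_filter([0, 0, 1, 1, 1]))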

  3. Rhythmogenic neuronal networks, emergent leaders, and k-cores.

    PubMed

    Schwab, David J; Bruinsma, Robijn F; Feldman, Jack L; Levine, Alex J

    2010-11-01

    Neuronal network behavior results from a combination of the dynamics of individual neurons and the connectivity of the network that links them together. We study a simplified model, based on the proposal of Feldman and Del Negro (FDN) [Nat. Rev. Neurosci. 7, 232 (2006)], of the preBötzinger Complex, a small neuronal network that participates in the control of the mammalian breathing rhythm through periodic firing bursts. The dynamics of this randomly connected network of identical excitatory neurons differ from those of a uniformly connected one. Specifically, network connectivity determines the identity of emergent leader neurons that trigger the firing bursts. When neuronal desensitization is controlled by the number of input signals to the neurons (as proposed by FDN), the network's collective desensitization--required for successful burst termination--is mediated by k-core clusters of neurons.
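
    The k-core construction invoked above is a standard graph-theoretic object (the maximal subgraph in which every node has at least k neighbors) and is available directly in networkx; the random graph below is only a stand-in, not the FDN preBötzinger Complex model.

      # Hedged illustration of extracting k-cores from a sparse random graph.
      import networkx as nx

      G = nx.gnp_random_graph(300, 0.02, seed=0)        # sparse Erdos-Renyi stand-in
      core_numbers = nx.core_number(G)                  # largest k for which each node is in the k-core
      k_max = max(core_numbers.values())
      deep_core = nx.k_core(G, k=k_max)                 # the most tightly interconnected cluster
      print("maximum core index:", k_max)
      print("nodes in the deepest k-core:", sorted(deep_core.nodes())[:10], "...")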

  4. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    A neuronal avalanche is a form of spontaneous neuronal activity that obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.

  5. Collective Dynamics for Heterogeneous Networks of Theta Neurons

    NASA Astrophysics Data System (ADS)

    Luke, Tanushree

    Collective behavior in neural networks has often been used as an indicator of communication between different brain areas. These collective synchronization and desynchronization patterns are also considered an important feature in understanding normal and abnormal brain function. To understand the emergence of these collective patterns, I create an analytic model that identifies all such macroscopic steady-states attainable by a network of Type-I neurons. This network, whose basic unit is the model "theta" neuron, contains a mixture of excitable and spiking neurons coupled via a smooth pulse-like synapse. Applying the Ott-Antonsen reduction method in the thermodynamic limit, I obtain a low-dimensional evolution equation that describes the asymptotic dynamics of the macroscopic mean field of the network. This model can be used as the basis in understanding more complicated neuronal networks when additional dynamical features are included. From this reduced dynamical equation for the mean field, I show that the network exhibits three collective attracting steady-states. The first two are equilibrium states that both reflect partial synchronization in the network, whereas the third is a limit cycle in which the degree of network synchronization oscillates in time. In addition to a comprehensive identification of all possible attracting macro-states, this analytic model permits a complete bifurcation analysis of the collective behavior of the network with respect to three key network features: the degree of excitability of the neurons, the heterogeneity of the population, and the overall coupling strength. The network typically tends towards the two macroscopic equilibrium states when the neuron's intrinsic dynamics and the network interactions reinforce each other. In contrast, the limit cycle state, bifurcations, and multistability tend to occur when there is competition between these network features. I also outline here an extension of the above model where the
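
    For reference, the "theta" neuron mentioned above is usually written in the following standard form (given here as plain LaTeX; the dissertation's network-level Ott-Antonsen mean-field equation is not reproduced):

      % Standard theta-neuron form (not the dissertation's full network equations).
      % A spike is registered each time \theta_i crosses \pi; the neuron is excitable
      % for \eta_i < 0 and spikes periodically for \eta_i > 0.
      \dot{\theta}_i = (1 - \cos\theta_i) + (1 + \cos\theta_i)\,\bigl(\eta_i + I_{\mathrm{syn},i}(t)\bigr),
      \qquad
      V_i = \tan(\theta_i / 2) \quad \text{(equivalence with the quadratic integrate-and-fire neuron).}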

  6. Nanostructured superhydrophobic substrates trigger the development of 3D neuronal networks.

    PubMed

    Limongi, Tania; Cesca, Fabrizia; Gentile, Francesco; Marotta, Roberto; Ruffilli, Roberta; Barberis, Andrea; Dal Maschio, Marco; Petrini, Enrica Maria; Santoriello, Stefania; Benfenati, Fabio; Di Fabrizio, Enzo

    2013-02-11

    The generation of 3D networks of primary neurons is a big challenge in neuroscience. Here, a novel method is presented for a 3D neuronal culture on superhydrophobic (SH) substrates. How nano-patterned SH devices stimulate neurons to build 3D networks is investigated. Scanning electron microscopy and confocal imaging show that soon after plating neurites adhere to the nanopatterned pillar sidewalls and they are subsequently pulled between pillars in a suspended position. These neurons display an enhanced survival rate compared to standard cultures and develop mature networks with physiological excitability. These findings underline the importance of using nanostructured SH surfaces for directing 3D neuronal growth, as well as for the design of biomaterials for neuronal regeneration. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Multiplex Networks of Cortical and Hippocampal Neurons Revealed at Different Timescales

    PubMed Central

    Timme, Nicholas; Ito, Shinya; Myroshnychenko, Maxym; Yeh, Fang-Chin; Hiolski, Emma; Hottowy, Pawel; Beggs, John M.

    2014-01-01

    Recent studies have emphasized the importance of multiplex networks – interdependent networks with shared nodes and different types of connections – in systems primarily outside of neuroscience. Though the multiplex properties of networks are frequently not considered, most networks are actually multiplex networks and the multiplex specific features of networks can greatly affect network behavior (e.g. fault tolerance). Thus, the study of networks of neurons could potentially be greatly enhanced using a multiplex perspective. Given the wide range of temporally dependent rhythms and phenomena present in neural systems, we chose to examine multiplex networks of individual neurons with time scale dependent connections. To study these networks, we used transfer entropy – an information theoretic quantity that can be used to measure linear and nonlinear interactions – to systematically measure the connectivity between individual neurons at different time scales in cortical and hippocampal slice cultures. We recorded the spiking activity of almost 12,000 neurons across 60 tissue samples using a 512-electrode array with 60 micrometer inter-electrode spacing and 50 microsecond temporal resolution. To the best of our knowledge, this preparation and recording method represents a superior combination of number of recorded neurons and temporal and spatial recording resolutions to any currently available in vivo system. We found that highly connected neurons (“hubs”) were localized to certain time scales, which, we hypothesize, increases the fault tolerance of the network. Conversely, a large proportion of non-hub neurons were not localized to certain time scales. In addition, we found that long and short time scale connectivity was uncorrelated. Finally, we found that long time scale networks were significantly less modular and more disassortative than short time scale networks in both tissue types. As far as we are aware, this analysis represents the first
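
    A bare-bones, histogram-based transfer entropy estimate on binned spike trains (history length one) conveys the quantity being measured at a given time scale; the estimators actually applied to these recordings are more sophisticated, and the synthetic trains below are only for illustration.

      # Hedged sketch: plug-in transfer entropy (bits) between binary spike trains,
      # binned at a chosen time scale, with history length one.
      import numpy as np

      def transfer_entropy(x, y, bin_width):
          n = (len(x) // bin_width) * bin_width
          xb = x[:n].reshape(-1, bin_width).sum(1) > 0   # binarize at this time scale
          yb = y[:n].reshape(-1, bin_width).sum(1) > 0
          y_next, y_prev, x_prev = yb[1:], yb[:-1], xb[:-1]
          te = 0.0
          for a in (0, 1):
              for b in (0, 1):
                  for c in (0, 1):
                      p_abc = np.mean((y_next == a) & (y_prev == b) & (x_prev == c))
                      if p_abc == 0:
                          continue
                      p_bc = np.mean((y_prev == b) & (x_prev == c))
                      p_ab = np.mean((y_next == a) & (y_prev == b))
                      p_b = np.mean(y_prev == b)
                      te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
          return te

      # synthetic example: y echoes x with a short lag, so TE(x -> y) > TE(y -> x)
      rng = np.random.default_rng(2)
      x = (rng.random(20000) < 0.05).astype(int)
      y = np.roll(x, 3) | (rng.random(20000) < 0.02).astype(int)
      print(transfer_entropy(x, y, bin_width=5), transfer_entropy(y, x, bin_width=5))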

  8. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks

    PubMed Central

    Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370
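
    A deliberately crude rendering of the energy-pool idea is given below: spikes draw on a shared, slowly regenerating resource, and firing is rationed whenever the pool cannot pay for all threshold crossings. This is a toy stand-in for the SNEP model, with made-up constants, and it is not claimed to reproduce the oscillations analyzed in the paper.

      # Toy energy-limited integrate-and-fire population (assumed constants; not the
      # paper's SNEP model). Spiking consumes a shared pool that regenerates slowly.
      import numpy as np

      rng = np.random.default_rng(3)
      n, dt, steps = 200, 0.5, 2000                # neurons, ms per step, steps
      v = rng.uniform(0.0, 1.0, n)
      tau, drive, v_th = 20.0, 1.3, 1.0
      energy, e_max, e_cost, e_regen = 20.0, 20.0, 0.05, 0.2
      spike_counts = []
      for _ in range(steps):
          v += dt * (drive - v) / tau + 0.02 * rng.normal(size=n)
          ready = np.flatnonzero(v >= v_th)
          budget = int(energy / e_cost)            # spikes the pool can currently pay for
          fired = ready[:budget]                   # unpaid neurons wait above threshold
          v[fired] = 0.0
          energy = min(e_max, energy - e_cost * len(fired) + e_regen * dt)
          spike_counts.append(len(fired))
      print("population spike counts, first 40 steps:", spike_counts[:40])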

  9. Reducing Neuronal Networks to Discrete Dynamics

    PubMed Central

    Terman, David; Ahn, Sungwoo; Wang, Xueying; Just, Winfried

    2008-01-01

    We consider a general class of purely inhibitory and excitatory-inhibitory neuronal networks, with a general class of network architectures, and characterize the complex firing patterns that emerge. Our strategy for studying these networks is to first reduce them to a discrete model. In the discrete model, each neuron is represented as a finite number of states and there are rules for how a neuron transitions from one state to another. In this paper, we rigorously demonstrate that the continuous neuronal model can be reduced to the discrete model if the intrinsic and synaptic properties of the cells are chosen appropriately. In a companion paper [1], we analyze the discrete model. PMID:18443649

  10. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    PubMed

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not

  11. Orientation Selectivity in Inhibition-Dominated Networks of Spiking Neurons: Effect of Single Neuron Properties and Network Dynamics

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not

  12. Synaptic dynamics regulation in response to high frequency stimulation in neuronal networks

    NASA Astrophysics Data System (ADS)

    Su, Fei; Wang, Jiang; Li, Huiyan; Wei, Xile; Yu, Haitao; Deng, Bin

    2018-02-01

    High frequency stimulation (HFS) has a confirmed ability to modulate pathological neural activities; however, its detailed mechanism remains unclear. This study aims to explore the effects of HFS on neuronal network dynamics. First, two-neuron FitzHugh-Nagumo (FHN) networks with static coupling strength and small-world FHN networks with spike-timing-dependent plasticity (STDP)-modulated synaptic coupling strength are constructed. Then, the multi-scale method is used to transform the network models into equivalent averaged models, where the HFS intensity is modeled as the ratio between stimulation amplitude and frequency. Results show that in the static two-neuron networks, synaptic current is still projected to the postsynaptic neuron even if the presynaptic neuron is blocked by the HFS. In the small-world networks, the effects of the STDP adjusting-rate parameter on the inactivation ratio and synchrony degree increase with increasing HFS intensity. However, only when the HFS intensity becomes very large can the STDP time-window parameter affect the inactivation ratio and synchrony index. Both simulation and numerical analysis demonstrate that the effects of HFS on neuronal network dynamics are realized through the adjustment of synaptic variables and conductances.
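
    The building block of the networks described above, a pair of coupled FitzHugh-Nagumo neurons with a high-frequency stimulation term applied to one of them, can be written down in a few lines. The parameter values, coupling form, and stimulation amplitude and frequency below are illustrative guesses, and the multi-scale averaging step itself is not reproduced.

      # Hedged sketch: two diffusively coupled FitzHugh-Nagumo neurons, with a
      # high-frequency sinusoidal stimulation applied to neuron 0. Values assumed.
      import numpy as np

      a, b, eps = 0.7, 0.8, 0.08          # classic FHN constants
      g_c = 0.1                           # static coupling strength
      amp, freq = 5.0, 2.0                # HFS amplitude and frequency (intensity ~ amp/freq)
      dt, t_max = 0.01, 200.0

      v = np.array([-1.2, -1.0])
      w = np.array([-0.6, -0.6])
      trace = []
      for step in range(int(t_max / dt)):
          t = step * dt
          hfs = np.array([amp * np.sin(2.0 * np.pi * freq * t), 0.0])
          i_base = np.array([0.5, 0.5])                       # constant background drive
          coupling = g_c * (v[::-1] - v)                      # diffusive mutual coupling
          dv = v - v**3 / 3.0 - w + i_base + coupling + hfs
          dw = eps * (v + a - b * w)
          v, w = v + dt * dv, w + dt * dw
          trace.append(v.copy())
      print("final voltages:", trace[-1])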

  13. Irregular behavior in an excitatory-inhibitory neuronal network

    NASA Astrophysics Data System (ADS)

    Park, Choongseok; Terman, David

    2010-06-01

    Excitatory-inhibitory networks arise in many regions throughout the central nervous system and display complex spatiotemporal firing patterns. These neuronal activity patterns (of individual neurons and/or the whole network) are closely related to the functional status of the system and differ between normal and pathological states. For example, neurons within the basal ganglia, a group of subcortical nuclei that are responsible for the generation of movement, display a variety of dynamic behaviors such as correlated oscillatory activity and irregular, uncorrelated spiking. Neither the origins of these firing patterns nor the mechanisms that underlie the patterns are well understood. We consider a biophysical model of an excitatory-inhibitory network in the basal ganglia and explore how specific biophysical properties of the network contribute to the generation of irregular spiking. We use geometric dynamical systems and singular perturbation methods to systematically reduce the model to a simpler set of equations, which is suitable for analysis. The results specify the dependence on the strengths of synaptic connections and the intrinsic firing properties of the cells in the irregular regime when applied to the subthalamopallidal network of the basal ganglia.

  14. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    PubMed

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks

  15. Neurons from the adult human dentate nucleus: neural networks in the neuron classification.

    PubMed

    Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T

    2015-04-07

    Topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons was performed according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% and 14.1% of central and border neurons, respectively, were classified correctly, with a total of 32.8% of neurons misclassified. The most important result is the 62.2% of misclassified neurons in the border group, which is even greater than the number of correctly classified neurons (37.8%) in that group, showing a clear failure of the network to classify neurons correctly based on the computational parameters used in our study. On the virtual sample, 97.3% of border neurons were misclassified, which is much greater than the number of correctly classified neurons (2.7%) in that group and again confirms this failure. Statistical analysis shows that there is no statistically significant difference between central and border neurons for any measured parameter (p>0.05). In total, 96.74% of neurons were morphologically classified correctly by the neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, and (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05). Human dentate nucleus neurons can therefore be classified into four types according to their quantitative histomorphological properties. These types consist of two sets, small and large with respect to their perikarya, with subtypes differing in dendrite length, i.e., neurons with short vs. long dendrites. Besides confirming the classification of neurons into small and large ones, already reported in the literature, we found two new subtypes, i.e., neurons with small soma and long dendrites and with large soma and short

  16. Emergent Oscillations in Networks of Stochastic Spiking Neurons

    PubMed Central

    van Drongelen, Wim; Cowan, Jack D.

    2011-01-01

    Networks of neurons produce diverse patterns of oscillations, arising from the network's global properties, the propensity of individual neurons to oscillate, or a mixture of the two. Here we describe noisy limit cycles and quasi-cycles, two related mechanisms underlying emergent oscillations in neuronal networks whose individual components, stochastic spiking neurons, do not themselves oscillate. Both mechanisms are shown to produce gamma band oscillations at the population level while individual neurons fire at a rate much lower than the population frequency. Spike trains in a network undergoing noisy limit cycles display a preferred period which is not found in the case of quasi-cycles, due to the even faster decay of phase information in quasi-cycles. These oscillations persist in sparsely connected networks, and variation of the network's connectivity results in variation of the oscillation frequency. A network of such neurons behaves as a stochastic perturbation of the deterministic Wilson-Cowan equations, and the network undergoes noisy limit cycles or quasi-cycles depending on whether these have limit cycles or a weakly stable focus. These mechanisms provide a new perspective on the emergence of rhythmic firing in neural networks, showing the coexistence of population-level oscillations with very irregular individual spike trains in a simple and general framework. PMID:21573105
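
    The deterministic Wilson-Cowan rate equations that the abstract says such a network stochastically perturbs can be integrated directly. Below is a minimal Euler-Maruyama sketch with additive noise and guessed parameters, intended only to show the structure of the equations, not to reproduce the gamma-band quasi-cycles reported in the paper.

      # Hedged sketch: noisy Wilson-Cowan excitatory/inhibitory rate equations
      # integrated with Euler-Maruyama. All parameter values are assumptions.
      import numpy as np

      def f(x):
          return 1.0 / (1.0 + np.exp(-x))       # sigmoidal gain function

      w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0
      h_e, h_i = -3.7, -6.7                     # background drives
      tau_e, tau_i = 10.0, 10.0                 # ms
      dt, steps, sigma = 0.1, 5000, 0.05

      rng = np.random.default_rng(4)
      E, I = 0.1, 0.1
      trace = np.empty((steps, 2))
      for k in range(steps):
          dE = (-E + f(w_ee * E - w_ei * I + h_e)) / tau_e
          dI = (-I + f(w_ie * E - w_ii * I + h_i)) / tau_i
          E += dt * dE + sigma * np.sqrt(dt) * rng.normal()
          I += dt * dI + sigma * np.sqrt(dt) * rng.normal()
          trace[k] = E, I
      print("mean E, I activity:", trace[:, 0].mean(), trace[:, 1].mean())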

  17. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  18. PhotoMEA: an opto-electronic biosensor for monitoring in vitro neuronal network activity.

    PubMed

    Ghezzi, Diego; Pedrocchi, Alessandra; Menegon, Andrea; Mantero, Sara; Valtorta, Flavia; Ferrigno, Giancarlo

    2007-02-01

    PhotoMEA is a biosensor useful for the analysis of an in vitro neuronal network, fully based on optical methods. Its function is based on the stimulation of neurons with caged glutamate and the recording of neuronal activity by Voltage-Sensitive fluorescent Dyes (VSD). The main advantage is that it will be possible to stimulate even at the sub-single-neuron level and to record with high resolution the activity of the entire network in the culture. A large-scale view of neuronal intercommunications offers a unique opportunity for testing the ability of drugs to affect neuronal properties as well as alterations in the behaviour of the entire network. The concept and a prototype for validation are described here in detail.

  19. [Functional organization and structure of the serotonergic neuronal network of terrestrial snail].

    PubMed

    Nikitin, E S; Balaban, P M

    2011-01-01

    Extending our knowledge of how the brain works requires continual improvement of methods for recording neuronal activity and an increase in the number of neurons recorded simultaneously, in order to better understand the collective work of neuronal networks and assemblies. Conventional methods allow simultaneous intracellular recording of up to 2-5 neurons (membrane potentials, currents or monosynaptic connections), or observation of spiking in neuronal groups with subsequent discrimination of individual spikes, at the cost of losing the details of membrane potential dynamics. We recorded the activity of a compact group of serotonergic neurons (up to 56 simultaneously) in the ganglion of a terrestrial mollusk using optical recording of membrane potential, which allowed us to record individual action potentials in detail, together with their parameters, and to reveal the morphology of the recorded neurons. We demonstrated clear clustering within the group with respect to action potential dynamics and the phasic or tonic components of the neuronal responses to external electrophysiological and tactile stimuli. We also showed that the identified neuron Pd2 could induce activation of a significant number of neurons in the group, whereas neuron Pd4 did not induce any activation; moreover, its activation was delayed with respect to the activation of the reacting group of neurons. Our data strongly support the concept of possible delegation of the integrative function by the network to a single neuron.

  20. Mean-field equations for neuronal networks with arbitrary degree distributions.

    PubMed

    Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
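
    A hedged sketch of one standard way to set up a degree-based mean-field calculation of the kind described above (not the paper's derivation): each neuron's rate depends on its in-degree, and the effective presynaptic rate is weighted by out-degree, which is how a joint in-/out-degree distribution enters. The transfer function, coupling and drive are illustrative.

    ```python
    # Degree-based mean-field iteration; all parameter values are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)

    # Sample a correlated joint (k_in, k_out) degree distribution.
    N = 10000
    base = rng.poisson(50, size=N)
    k_in = base + rng.poisson(10, size=N)
    k_out = base + rng.poisson(10, size=N)        # correlated with k_in via `base`

    def phi(mu):
        """Illustrative threshold-linear transfer function (rate in a.u.)."""
        return np.maximum(mu, 0.0)

    J, mu_ext = 0.002, 0.5                        # coupling, external drive (a.u.)

    nu = np.full(N, 0.1)                          # initial rates
    for _ in range(200):
        # Rate "seen" postsynaptically: presynaptic cells weighted by out-degree.
        nu_pre = np.sum(k_out * nu) / np.sum(k_out)
        nu_new = phi(mu_ext + J * k_in * nu_pre)
        if np.max(np.abs(nu_new - nu)) < 1e-9:
            nu = nu_new
            break
        nu = nu_new

    print("population mean rate:", nu.mean())
    print("rate correlates with in-degree:", np.corrcoef(k_in, nu)[0, 1])
    ```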

  1. Mean-field equations for neuronal networks with arbitrary degree distributions

    NASA Astrophysics Data System (ADS)

    Nykamp, Duane Q.; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.

  2. Synaptic Plasticity and Spike Synchronisation in Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.

    2017-12-01

    Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or to brain injury. In this review, we present our results on the effects of synaptic plasticity on neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. When excitation is of the same order as inhibition, the evolved network exhibits the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology in which neurons are sparsely connected to other neurons, also a topology typical of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both kinds with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.
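
    For concreteness, the sketch below implements a generic pair-based spike-timing-dependent plasticity (STDP) rule of the kind commonly used in such network-evolution studies; it is not necessarily the exact rule of this review, and the amplitudes and time constants are illustrative.

    ```python
    # Pair-based STDP: weight change depends on post-minus-pre spike timing.
    import numpy as np

    A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
    tau_plus, tau_minus = 20.0, 20.0  # ms

    def stdp_dw(t_post, t_pre):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:                    # pre before post -> potentiation
            return A_plus * np.exp(-dt / tau_plus)
        elif dt < 0:                  # post before pre -> depression
            return -A_minus * np.exp(dt / tau_minus)
        return 0.0

    # Apply to all spike pairs of one synapse and keep the weight bounded.
    pre_spikes = [10.0, 45.0, 80.0]
    post_spikes = [12.0, 40.0, 95.0]
    w = 0.5
    for tp in pre_spikes:
        for tq in post_spikes:
            w += stdp_dw(tq, tp)
    w = float(np.clip(w, 0.0, 1.0))
    print("updated weight:", w)
    ```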

  3. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures.

    PubMed

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

    Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network, such as the GABAA switch. Third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
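
    A minimal sketch of the kind of spectral characterization discussed above, assuming a synthetic population fluorescence trace and an illustrative imaging rate; Welch's method is one common way to estimate the power spectrum, not necessarily the exact estimator used in the study.

    ```python
    # Power spectrum of a synthetic calcium fluorescence (dF/F) trace.
    import numpy as np
    from scipy.signal import welch

    fs = 20.0                                     # imaging rate (frames/s), illustrative
    t = np.arange(0, 600, 1.0 / fs)               # 10 minutes
    rng = np.random.default_rng(3)

    # Synthetic "network bursts" every ~10 s, convolved with a slow indicator decay.
    bursts = (np.sin(2 * np.pi * 0.1 * t) > 0.99).astype(float)
    kernel = np.exp(-np.arange(0, 5, 1.0 / fs) / 1.5)
    dff = np.convolve(bursts, kernel)[: t.size] + 0.05 * rng.normal(size=t.size)

    freqs, psd = welch(dff, fs=fs, nperseg=1024)
    print("peak frequency (Hz):", freqs[np.argmax(psd)])
    ```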

  4. Synchronization properties of heterogeneous neuronal networks with mixed excitability type

    NASA Astrophysics Data System (ADS)

    Leone, Michael J.; Schurter, Brandon N.; Letson, Benjamin; Booth, Victoria; Zochowski, Michal; Fink, Christian G.

    2015-03-01

    We study the synchronization of neuronal networks with dynamical heterogeneity, showing that network structures with the same propensity for synchronization (as quantified by master stability function analysis) may develop dramatically different synchronization properties when heterogeneity is introduced with respect to neuronal excitability type. Specifically, we investigate networks composed of neurons with different types of phase response curves (PRCs), which characterize how oscillating neurons respond to excitatory perturbations. Neurons exhibiting type 1 PRC respond exclusively with phase advances, while neurons exhibiting type 2 PRC respond with either phase delays or phase advances, depending on when the perturbation occurs. We find that Watts-Strogatz small world networks transition to synchronization gradually as the proportion of type 2 neurons increases, whereas scale-free networks may transition gradually or rapidly, depending upon local correlations between node degree and excitability type. Random placement of type 2 neurons results in gradual transition to synchronization, whereas placement of type 2 neurons as hubs leads to a much more rapid transition, showing that type 2 hub cells easily "hijack" neuronal networks to synchronization. These results underscore the fact that the degree of synchronization observed in neuronal networks is determined by a complex interplay between network structure and the dynamical properties of individual neurons, indicating that efforts to recover structural connectivity from dynamical correlations must in general take both factors into account.
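
    The two excitability types can be illustrated with canonical phase response curves (PRCs), sketched below: a type 1 PRC is non-negative (phase advances only), while a type 2 PRC is biphasic (delays or advances depending on the perturbation phase). The shapes and perturbation strength are standard idealizations, not the fitted curves of the study.

    ```python
    # Canonical type 1 vs. type 2 PRCs applied to a phase oscillator.
    import numpy as np

    eps = 0.05                                   # perturbation strength (illustrative)

    def prc_type1(phase):
        return 1.0 - np.cos(2 * np.pi * phase)   # >= 0 everywhere: advances only

    def prc_type2(phase):
        return -np.sin(2 * np.pi * phase)        # negative then positive: delay or advance

    def perturbed_phase(phase, prc):
        """New phase (mod 1) after an excitatory pulse arriving at `phase`."""
        return (phase + eps * prc(phase)) % 1.0

    for phi in (0.2, 0.8):
        print(f"pulse at phase {phi}:",
              "type1 ->", round(perturbed_phase(phi, prc_type1), 3),
              "type2 ->", round(perturbed_phase(phi, prc_type2), 3))
    ```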

  5. Spiking neuron network Helmholtz machine.

    PubMed

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
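
    To make the wake-sleep idea concrete, here is a hedged, rate-level toy of a one-hidden-layer binary Helmholtz machine trained with the local delta rule; it is not the spiking implementation described in the paper, and the data, sizes and learning rate are illustrative.

    ```python
    # Wake-sleep training of a tiny binary Helmholtz machine (toy sketch).
    import numpy as np

    rng = np.random.default_rng(4)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)

    n_vis, n_hid, lr = 8, 4, 0.05
    R = np.zeros((n_hid, n_vis))     # recognition weights (visible -> hidden)
    G = np.zeros((n_vis, n_hid))     # generative weights  (hidden -> visible)
    b_h = np.zeros(n_hid)            # generative prior over hidden units
    b_v = np.zeros(n_vis)            # generative visible biases

    prototypes = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                           [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)

    for step in range(5000):
        # Wake phase: recognize a (noisy) data vector, train the generative model.
        v = prototypes[rng.integers(2)].copy()
        i = rng.integers(n_vis)
        v[i] = 1 - v[i]                                       # a little flip noise
        h = sample(sigmoid(R @ v))
        p_v = sigmoid(G @ h + b_v)
        G += lr * np.outer(v - p_v, h)                        # local delta rule
        b_v += lr * (v - p_v)
        b_h += lr * (h - sigmoid(b_h))

        # Sleep phase: dream from the generative model, train recognition.
        h_dream = sample(sigmoid(b_h))
        v_dream = sample(sigmoid(G @ h_dream + b_v))
        p_h = sigmoid(R @ v_dream)
        R += lr * np.outer(h_dream - p_h, v_dream)            # local delta rule

    # A fantasy from the trained generative model should resemble the prototypes.
    print(sample(sigmoid(G @ sample(sigmoid(b_h)) + b_v)))
    ```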

  6. Spiking neuron network Helmholtz machine

    PubMed Central

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. PMID:25954191

  7. Spectral Entropy Based Neuronal Network Synchronization Analysis Based on Microelectrode Array Measurements

    PubMed Central

    Kapucu, Fikret E.; Välkki, Inkeri; Mikkonen, Jarno E.; Leone, Chiara; Lenk, Kerstin; Tanskanen, Jarno M. A.; Hyttinen, Jari A. K.

    2016-01-01

    Synchrony and asynchrony are essential aspects of the functioning of interconnected neuronal cells and networks. New information on neuronal synchronization can be expected to aid in understanding these systems. Synchronization provides insight in the functional connectivity and the spatial distribution of the information processing in the networks. Synchronization is generally studied with time domain analysis of neuronal events, or using direct frequency spectrum analysis, e.g., in specific frequency bands. However, these methods have their pitfalls. Thus, we have previously proposed a method to analyze temporal changes in the complexity of the frequency of signals originating from different network regions. The method is based on the correlation of time varying spectral entropies (SEs). SE assesses the regularity, or complexity, of a time series by quantifying the uniformity of the frequency spectrum distribution. It has been previously employed, e.g., in electroencephalogram analysis. Here, we revisit our correlated spectral entropy method (CorSE), providing evidence of its justification, usability, and benefits. Here, CorSE is assessed with simulations and in vitro microelectrode array (MEA) data. CorSE is first demonstrated with a specifically tailored toy simulation to illustrate how it can identify synchronized populations. To provide a form of validation, the method was tested with simulated data from integrate-and-fire model based computational neuronal networks. To demonstrate the analysis of real data, CorSE was applied on in vitro MEA data measured from rat cortical cell cultures, and the results were compared with three known event based synchronization measures. Finally, we show the usability by tracking the development of networks in dissociated mouse cortical cell cultures. The results show that temporal correlations in frequency spectrum distributions reflect the network relations of neuronal populations. In the simulated data, CorSE unraveled the
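
    A hedged sketch of the core computation described above: estimate a time-varying normalized spectral entropy per channel from a spectrogram, then correlate the entropy time series of channel pairs. Window length, sampling rate and the synthetic signals are illustrative, and the statistical testing of the original method is omitted.

    ```python
    # Correlated spectral entropy (simplified sketch).
    import numpy as np
    from scipy.signal import spectrogram

    def spectral_entropy_series(x, fs, nperseg=1024):
        """Normalized spectral entropy per sliding window."""
        f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
        p = Sxx / Sxx.sum(axis=0, keepdims=True)          # spectral distribution per window
        se = -(p * np.log2(p + 1e-12)).sum(axis=0)        # Shannon entropy of the spectrum
        return se / np.log2(Sxx.shape[0])                 # normalize to [0, 1]

    fs = 25000.0                                          # sampling rate (Hz), illustrative
    rng = np.random.default_rng(5)
    t = np.arange(0, 10, 1 / fs)
    common_burst = (np.sin(2 * np.pi * 0.5 * t) > 0.9).astype(float)
    ch1 = common_burst * np.sin(2 * np.pi * 1000 * t) + 0.1 * rng.normal(size=t.size)
    ch2 = common_burst * np.sin(2 * np.pi * 1300 * t) + 0.1 * rng.normal(size=t.size)
    ch3 = 0.1 * rng.normal(size=t.size)                   # unrelated channel

    se1, se2, se3 = (spectral_entropy_series(c, fs) for c in (ch1, ch2, ch3))
    print("CorSE(ch1, ch2):", np.corrcoef(se1, se2)[0, 1])   # expected high
    print("CorSE(ch1, ch3):", np.corrcoef(se1, se3)[0, 1])   # expected low
    ```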

  8. Vehicle dynamic analysis using neuronal network algorithms

    NASA Astrophysics Data System (ADS)

    Oloeriu, Florin; Mocian, Oana

    2014-06-01

    Theoretical developments in certain engineering areas, the emergence of new and more precise investigation tools, and their implementation on board everyday vehicles are the main factors shaping the theoretical and experimental study of vehicle dynamic behavior. The implementation of these new technologies in vehicle construction has led to increasingly complex systems. Some of the most important, such as the electronic control of engine, transmission, suspension, steering, braking and traction, have had a positive impact on the vehicle's dynamic behavior. The presence of CPUs on board vehicles allows data acquisition and storage and leads to a more accurate experimental and theoretical study of vehicle dynamics, using information offered directly by the built-in elements of the electronic control systems. The technical literature on vehicle dynamics is focused almost entirely on parametric analysis. This approach adopts two simplifying assumptions: that the functional parameters obey distribution laws known from classical statistics theory, and that the mathematical models are known in advance, with coefficients that are not time-dependent. Neither assumption is confirmed in real situations: the functional parameters do not follow known statistical distributions, and the mathematical models are not known in advance, contain families of parameters, and are mostly time-dependent. The purpose of the paper is to present a more accurate analysis methodology for studying a vehicle's dynamic behavior. A method for building non-parametric mathematical models of vehicle dynamic behavior, with time-dependent coefficients, relies on neuronal networks. Neuronal networks are mostly used in various types of system control, thus

  9. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging

    PubMed Central

    Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.

    2017-01-01

    Background Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s–1000 +neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. New method Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. Comparison with existing method(s) We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
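
    A hedged sketch of template-based calcium transient detection in the spirit described above (not FluoroSNNAP's actual code): slide a transient template with a fast rise and exponential decay along a dF/F trace and flag local peaks of the normalized cross-correlation above a threshold. All parameters and the synthetic trace are illustrative.

    ```python
    # Template-matching detection of calcium transients in a synthetic dF/F trace.
    import numpy as np

    fs = 20.0                                          # frames per second
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(6)

    # Synthetic dF/F trace with transients at known frames.
    true_onsets = [100, 400, 800]
    trace = 0.02 * rng.normal(size=t.size)
    decay = np.exp(-np.arange(0, 4, 1 / fs) / 1.0)     # 1 s decay constant
    for onset in true_onsets:
        trace[onset:onset + decay.size] += 0.3 * decay

    # Template: instantaneous rise, exponential decay (2 s long), z-scored.
    template = np.exp(-np.arange(0, 2, 1 / fs) / 1.0)
    template = (template - template.mean()) / template.std()

    n = template.size
    corr = np.zeros(trace.size - n)
    for i in range(corr.size):
        win = trace[i:i + n]
        corr[i] = np.dot(template, (win - win.mean()) / (win.std() + 1e-12)) / n

    threshold = 0.6                                    # illustrative
    detected = np.flatnonzero((corr[1:-1] > threshold)
                              & (corr[1:-1] >= corr[:-2])
                              & (corr[1:-1] >= corr[2:])) + 1
    print("detected onset frames:", detected, "true onsets:", true_onsets)
    ```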

  10. Bifurcations of large networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-08-01

    Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
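
    For reference, the sketch below simulates one member of the model class named above, a small network of Izhikevich neurons with adaptation and shared excitatory coupling. This is only a direct simulation of the full network with illustrative parameters, not the paper's mean-field reduction.

    ```python
    # Small all-to-all network of Izhikevich (regular-spiking) neurons.
    import numpy as np

    rng = np.random.default_rng(7)
    N, dt, T = 100, 0.5, 1000.0                   # neurons, step (ms), duration (ms)
    a, b, c, d = 0.02, 0.2, -65.0, 8.0            # regular-spiking parameters
    w = 0.5 / N                                   # recurrent weight (illustrative)

    v = np.full(N, -65.0)
    u = b * v
    syn = np.zeros(N)                             # shared synaptic drive
    spikes = []

    for step in range(int(T / dt)):
        I = 5.0 + 2.0 * rng.normal(size=N) + syn  # external + recurrent input
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        fired = v >= 30.0
        spikes.append(np.count_nonzero(fired))
        v[fired] = c
        u[fired] += d
        syn = syn * np.exp(-dt / 5.0) + w * np.count_nonzero(fired)

    rate = 1000.0 * np.sum(spikes) / (N * T)
    print("mean firing rate (Hz):", round(rate, 2))
    ```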

  11. Robust spatial memory maps in flickering neuronal networks: a topological model

    NASA Astrophysics Data System (ADS)

    Dabaghian, Yuri; Babichev, Andrey; Memoli, Facundo; Chowdhury, Samir; Rice University Collaboration; Ohio State University Collaboration

    It is widely accepted that the hippocampal place cells provide a substrate of the neuronal representation of the environment--the ``cognitive map''. However, the hippocampal network, like any other network in the brain, is transient: thousands of hippocampal neurons die every day, and the connections formed by these cells constantly change due to various forms of synaptic plasticity. What then explains the remarkable reliability of our spatial memories? We propose a computational approach to answering this question, based on two insights. First, we propose that the hippocampal cognitive map is fundamentally topological, and hence amenable to analysis by topological methods. We then apply several novel methods from homology theory to understand how dynamic connections between cells influence the speed and reliability of spatial learning. We simulate the rat's exploratory movements through different environments and study how topological invariants of these environments arise in a network of simulated neurons with ``flickering'' connectivity. We find that despite the transient connectivity the network of place cells produces a stable representation of the topology of the environment.
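
    A drastically simplified sketch of the "flickering connectivity" idea, assuming illustrative probabilities: place-cell cofiring is represented as a graph whose edges appear and disappear over time, and only the zeroth topological invariant (the number of connected components) is tracked, rather than the full homology used in the study.

    ```python
    # Flickering cofiring graph: connectivity changes, component count stays stable.
    import networkx as nx
    import random

    random.seed(8)
    n_cells = 60
    p_edge = 0.08          # chance two cells initially cofire closely enough to connect
    p_drop = 0.3           # per-step chance an existing connection flickers off
    p_add = 0.03           # per-step chance a missing connection flickers on

    G = nx.gnp_random_graph(n_cells, p_edge, seed=8)
    for step in range(50):
        for u, v in list(G.edges()):
            if random.random() < p_drop:
                G.remove_edge(u, v)
        for u in range(n_cells):
            for v in range(u + 1, n_cells):
                if not G.has_edge(u, v) and random.random() < p_add:
                    G.add_edge(u, v)
        if step % 10 == 0:
            print(f"step {step}: connected components = "
                  f"{nx.number_connected_components(G)}")
    ```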

  12. Detection of 5-hydroxytryptamine (5-HT) in vitro using a hippocampal neuronal network-based biosensor with extracellular potential analysis of neurons.

    PubMed

    Hu, Liang; Wang, Qin; Qin, Zhen; Su, Kaiqi; Huang, Liquan; Hu, Ning; Wang, Ping

    2015-04-15

    5-hydroxytryptamine (5-HT) is an important neurotransmitter in regulating emotions and related behaviors in mammals. Effective and convenient methods to detect and monitor 5-HT are in demand for the investigation of neuronal networks. In this study, hippocampal neuronal networks (HNNs) endogenously expressing 5-HT receptors were employed as sensing elements to build an in vitro neuronal network-based biosensor. The electrophysiological characteristics were analyzed at both the neuron and network levels. Firing rates and amplitudes were derived from the signal to determine the biosensor response characteristics. The experimental results demonstrate a dose-dependent inhibitory effect of 5-HT on hippocampal neuron activities, indicating the effectiveness of this hybrid biosensor in detecting 5-HT with a response range from 0.01 μmol/L to 10 μmol/L. In addition, cross-correlation analysis of HNN activities suggests that 5-HT can weaken HNN connectivity reversibly, providing more specificity of this biosensor in detecting 5-HT. Moreover, 5-HT-induced spatiotemporal firing pattern alterations could be monitored at the neuron and network levels simultaneously by this hybrid biosensor in a convenient and direct way. With these merits, this neuronal network-based biosensor promises to be a valuable platform for the study of neurotransmitters in vitro. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Computational properties of networks of synchronous groups of spiking neurons.

    PubMed

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.

  14. Population coding in sparsely connected networks of noisy neurons.

    PubMed

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.

  15. To Break or to Brake Neuronal Network Accelerated by Ammonium Ions?

    PubMed Central

    Dynnik, Vladimir V.; Kononov, Alexey V.; Sergeev, Alexander I.; Teplov, Iliya Y.; Tankanag, Arina V.; Zinchenko, Valery P.

    2015-01-01

    Purpose The aim of the present study was to investigate the effects of ammonium ions on in vitro neuronal network activity and to search for alternative methods of preventing acute ammonia neurotoxicity. Methods Rat hippocampal neuron and astrocyte co-cultures in vitro, fluorescence microscopy and perforated patch clamp were used to monitor the changes in intracellular Ca2+ and membrane potential produced by ammonium ions and various modulators in the cells implicated in neural networks. Results Low concentrations of NH4Cl (0.1-4 mM) produce short-lasting effects on network activity. Application of 5-8 mM NH4Cl: invariably transforms diverse network firing regimens into identical burst patterns, characterized by substantial neuronal membrane depolarization at the plateau phase of the potential and high-amplitude Ca2+ oscillations; raises the frequency and the period-averaged Ca2+ level of oscillations in all cells implicated in the network; results in the appearance of a group of «run out» cells with high intracellular Ca2+ and steadily diminishing oscillation amplitudes; and increases astrocyte Ca2+ signalling, characterized by the appearance of groups of cells with increased intracellular Ca2+ levels and/or chaotic Ca2+ oscillations. The accelerated network activity can be suppressed by blockade of NMDA or AMPA/kainate receptors or by overactivation of AMPA/kainate receptors. Ammonia still activates neuronal firing in the presence of the GABA(A) receptor antagonist bicuculline, indicating that a «disinhibition phenomenon» is not implicated in the mechanisms of network acceleration. Network activity may also be slowed down by glycine, agonists of metabotropic inhibitory receptors, betaine, L-carnitine, L-arginine, etc. Conclusions The results demonstrate that ammonium ions accelerate neuronal network firing via ionotropic glutamate receptors while preserving the activities of a group of inhibitory ionotropic and metabotropic receptors. This may mean that ammonia

  16. Aberrant within- and between-network connectivity of the mirror neuron system network and the mentalizing network in first episode psychosis.

    PubMed

    Choe, Eugenie; Lee, Tae Young; Kim, Minah; Hur, Ji-Won; Yoon, Youngwoo Bryan; Cho, Kang-Ik K; Kwon, Jun Soo

    2018-03-26

    It has been suggested that the mentalizing network and the mirror neuron system network support important social cognitive processes that are impaired in schizophrenia. However, the integrity and interaction of these two networks have not been sufficiently studied, and their effects on social cognition in schizophrenia remain unclear. Our study included 26 first-episode psychosis (FEP) patients and 26 healthy controls. We utilized resting-state functional connectivity to examine the a priori-defined mirror neuron system network and the mentalizing network and to assess the within- and between-network connectivities of the networks in FEP patients. We also assessed the correlation between resting-state functional connectivity measures and theory of mind performance. FEP patients showed altered within-network connectivity of the mirror neuron system network, and aberrant between-network connectivity between the mirror neuron system network and the mentalizing network. The within-network connectivity of the mirror neuron system network was noticeably correlated with theory of mind task performance in FEP patients. The integrity and interaction of the mirror neuron system network and the mentalizing network may be altered during the early stages of psychosis. Additionally, this study suggests that alterations in the integrity of the mirror neuron system network are highly related to deficient theory of mind in schizophrenia, and this problem would be present from the early stage of psychosis. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Results on a binding neuron model and their implications for modified hourglass model for neuronal network.

    PubMed

    Arunachalam, Viswanathan; Akhavan-Tabatabaei, Raha; Lopez, Cristina

    2013-01-01

    The classical models of single neuron like Hodgkin-Huxley point neuron or leaky integrate and fire neuron assume the influence of postsynaptic potentials to last till the neuron fires. Vidybida (2008) in a refreshing departure has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron like the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
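
    A minimal sketch of a binding neuron of the kind analyzed above: each input impulse is remembered only for a fixed time tau, and the neuron fires (and clears its memory) when the number of stored impulses reaches a threshold. Input statistics and parameter values are illustrative.

    ```python
    # Binding neuron driven by a Poisson input train; reports rate, ISI mean and CV.
    import numpy as np

    rng = np.random.default_rng(9)
    tau = 10.0          # ms, how long an input impulse is remembered
    threshold = 3       # stored impulses needed to fire
    rate = 0.2          # input impulses per ms (Poisson)
    T = 10000.0         # ms

    n_inputs = rng.poisson(rate * T)
    input_times = np.sort(rng.uniform(0, T, size=n_inputs))

    stored = []         # arrival times of impulses still remembered
    firing_times = []
    for t in input_times:
        stored = [s for s in stored if t - s < tau]   # forget impulses older than tau
        stored.append(t)
        if len(stored) >= threshold:
            firing_times.append(t)
            stored = []                               # reset memory after firing

    isis = np.diff(firing_times)
    print("output rate (Hz):", 1000.0 * len(firing_times) / T)
    print("mean ISI (ms):", isis.mean(), " CV:", isis.std() / isis.mean())
    ```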

  18. FPGA implementation of motifs-based neuronal network and synchronization analysis

    NASA Astrophysics Data System (ADS)

    Deng, Bin; Zhu, Zechen; Yang, Shuangming; Wei, Xile; Wang, Jiang; Yu, Haitao

    2016-06-01

    Motifs in complex networks play a crucial role in determining the brain functions. In this paper, 13 kinds of motifs are implemented with Field Programmable Gate Array (FPGA) to investigate the relationships between the networks properties and motifs properties. We use discretization method and pipelined architecture to construct various motifs with Hindmarsh-Rose (HR) neuron as the node model. We also build a small-world network based on these motifs and conduct the synchronization analysis of motifs as well as the constructed network. We find that the synchronization properties of motif determine that of motif-based small-world network, which demonstrates effectiveness of our proposed hardware simulation platform. By imitation of some vital nuclei in the brain to generate normal discharges, our proposed FPGA-based artificial neuronal networks have the potential to replace the injured nuclei to complete the brain function in the treatment of Parkinson's disease and epilepsy.
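
    As a software analogue of the fixed-step discretization used in such hardware implementations, the sketch below integrates a single Hindmarsh-Rose neuron, the node model named above, with forward Euler. The parameters are standard bursting values; the step size and spike criterion are illustrative.

    ```python
    # Forward-Euler integration of a single Hindmarsh-Rose neuron.
    import numpy as np

    a, b, c, d = 1.0, 3.0, 1.0, 5.0
    r, s, x_rest = 0.006, 4.0, -1.6
    I_ext = 3.0                      # external current (bursting regime)
    dt, T = 0.01, 2000.0

    x, y, z = -1.6, -10.0, 2.0       # membrane potential, fast and slow variables
    spikes = 0
    prev_x = x
    for step in range(int(T / dt)):
        dx = y - a * x**3 + b * x**2 - z + I_ext
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_rest) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if prev_x < 1.0 <= x:        # crude spike detection: upward crossing of x = 1
            spikes += 1
        prev_x = x

    print("spikes in", T, "time units:", spikes)
    ```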

  19. Reciprocal cholinergic and GABAergic modulation of the small ventrolateral pacemaker neurons of Drosophila's circadian clock neuron network.

    PubMed

    Lelito, Katherine R; Shafer, Orie T

    2012-04-01

    The relatively simple clock neuron network of Drosophila is a valuable model system for the neuronal basis of circadian timekeeping. Unfortunately, many key neuronal classes of this network are inaccessible to electrophysiological analysis. We have therefore adopted the use of genetically encoded sensors to address the physiology of the fly's circadian clock network. Using genetically encoded Ca(2+) and cAMP sensors, we have investigated the physiological responses of two specific classes of clock neuron, the large and small ventrolateral neurons (l- and s-LN(v)s), to two neurotransmitters implicated in their modulation: acetylcholine (ACh) and γ-aminobutyric acid (GABA). Live imaging of l-LN(v) cAMP and Ca(2+) dynamics in response to cholinergic agonist and GABA application were well aligned with published electrophysiological data, indicating that our sensors were capable of faithfully reporting acute physiological responses to these transmitters within single adult clock neuron soma. We extended these live imaging methods to s-LN(v)s, critical neuronal pacemakers whose physiological properties in the adult brain are largely unknown. Our s-LN(v) experiments revealed the predicted excitatory responses to bath-applied cholinergic agonists and the predicted inhibitory effects of GABA and established that the antagonism of ACh and GABA extends to their effects on cAMP signaling. These data support recently published but physiologically untested models of s-LN(v) modulation and lead to the prediction that cholinergic and GABAergic inputs to s-LN(v)s will have opposing effects on the phase and/or period of the molecular clock within these critical pacemaker neurons.

  20. Effect of Transcranial Magnetic Stimulation on Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Unsal, Ahmet; Hadimani, Ravi; Jiles, David

    2013-03-01

    The human brain contains around 100 billion nerve cells controlling our day to day activities. Consequently, brain disorders often result in impairments such as paralysis, loss of coordination and seizure. It has been said that 1 in 5 Americans suffer some diagnosable mental disorder. There is an urgent need to understand the disorders, prevent them and if possible, develop permanent cure for them. As a result, a significant amount of research activities is being directed towards brain research. Transcranial Magnetic Stimulation (TMS) is a promising tool for diagnosing and treating brain disorders. It is a non-invasive treatment method that produces a current flow in the brain which excites the neurons. Even though TMS has been verified to have advantageous effects on various brain related disorders, there have not been enough studies on the impact of TMS on cells. In this study, we are investigating the electrophysiological effects of TMS on one dimensional neuronal culture grown in a circular pathway. Electrical currents are produced on the neuronal networks depending on the directionality of the applied field. This aids in understanding how neuronal networks react under TMS treatment.

  1. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  2. Constructing Neuronal Network Models in Massively Parallel Environments

    PubMed Central

    Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808

  3. Statistical identification of stimulus-activated network nodes in multi-neuron voltage-sensitive dye optical recordings.

    PubMed

    Fathiazar, Elham; Anemuller, Jorn; Kretzberg, Jutta

    2016-08-01

    Voltage-Sensitive Dye (VSD) imaging is an optical imaging method that allows measuring the graded voltage changes of multiple neurons simultaneously. In neuroscience, this method is used to reveal networks of neurons involved in certain tasks. However, the recorded relative dye fluorescence changes are usually low and signals are superimposed by noise and artifacts. Therefore, establishing a reliable method to identify which cells are activated by specific stimulus conditions is the first step to identify functional networks. In this paper, we present a statistical method to identify stimulus-activated network nodes as cells, whose activities during sensory network stimulation differ significantly from the un-stimulated control condition. This method is demonstrated based on voltage-sensitive dye recordings from up to 100 neurons in a ganglion of the medicinal leech responding to tactile skin stimulation. Without relying on any prior physiological knowledge, the network nodes identified by our statistical analysis were found to match well with published cell types involved in tactile stimulus processing and to be consistent across stimulus conditions and preparations.
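
    A hedged sketch of the kind of per-cell statistical screening described above (not the authors' exact statistic): for each imaged neuron, compare response amplitudes across stimulated trials with un-stimulated control trials using a rank-sum test, and flag cells with corrected p-values below threshold as stimulus-activated network nodes. The data here are synthetic.

    ```python
    # Per-cell rank-sum test between stimulated and control trials (synthetic data).
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(10)
    n_cells, n_trials = 100, 20

    # Synthetic response amplitudes: 15 cells truly respond to the stimulus.
    control = rng.normal(0.0, 1.0, size=(n_cells, n_trials))
    stimulated = rng.normal(0.0, 1.0, size=(n_cells, n_trials))
    responders = rng.choice(n_cells, size=15, replace=False)
    stimulated[responders] += 1.5

    alpha = 0.05 / n_cells            # Bonferroni-corrected threshold
    activated = [cell for cell in range(n_cells)
                 if mannwhitneyu(stimulated[cell], control[cell],
                                 alternative="two-sided").pvalue < alpha]

    print("cells flagged as stimulus-activated:", sorted(activated))
    print("true responders:                    ", sorted(responders.tolist()))
    ```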

  4. Network feedback regulates motor output across a range of modulatory neuron activity

    PubMed Central

    Spencer, Robert M.

    2016-01-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5–35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. PMID:27030739

  5. Network feedback regulates motor output across a range of modulatory neuron activity.

    PubMed

    Spencer, Robert M; Blitz, Dawn M

    2016-06-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.

  6. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    PubMed

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.

  7. Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.

    PubMed

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime

    2016-01-01

    It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that is dependent on the characteristics of neurons and network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics to simulate a code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate the characteristics of neurons, such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem and the solutions are not theoretically guaranteed, the estimated parameters seem to be consistent with those of real neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for communication within a neural network, which may in turn form a basis of natural intelligence.

  8. Echo state networks with filter neurons and a delay&sum readout.

    PubMed

    Holzmann, Georg; Hauser, Helmut

    2010-03-01

    Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation. Here we suggest two enhancements of this network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series. Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks. It is shown in commonly used benchmark tasks and real-world examples, that this new structure is able to significantly outperform standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling. Copyright 2009 Elsevier Ltd. All rights reserved.
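
    The sketch below shows a plain echo state network with leaky-integrator neurons (the simplest first-order case of the IIR filter-neuron idea) and a ridge-regression readout; it omits the delay&sum readout of the paper, and the hyperparameters, task and signal are illustrative.

    ```python
    # Leaky ESN with ridge readout on one-step prediction of a sine-product signal.
    import numpy as np

    rng = np.random.default_rng(11)
    n_res, leak, rho, ridge = 200, 0.3, 0.9, 1e-6

    # Input signal and one-step-ahead target.
    t = np.arange(4000) * 0.02
    u = np.sin(t) * np.sin(0.31 * t)
    target = np.roll(u, -1)

    # Random reservoir rescaled to the desired spectral radius.
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, size=n_res)

    # Run the reservoir (first-order low-pass update of each neuron's state).
    X = np.zeros((u.size, n_res))
    x = np.zeros(n_res)
    for k in range(u.size):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u[k])
        X[k] = x

    # Ridge-regression readout on the post-washout states.
    washout = 200
    A, y = X[washout:-1], target[washout:-1]
    w_out = np.linalg.solve(A.T @ A + ridge * np.eye(n_res), A.T @ y)
    pred = A @ w_out
    print("prediction NRMSE:", np.sqrt(np.mean((pred - y) ** 2)) / np.std(y))
    ```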

  9. Network activity of mirror neurons depends on experience.

    PubMed

    Ushakov, Vadim L; Kartashov, Sergey I; Zavyalova, Victoria V; Bezverhiy, Denis D; Posichanyuk, Vladimir I; Terentev, Vasliliy N; Anokhin, Konstantin V

    2013-03-01

    In this work, we investigated how the network activity of mirror neuron systems in the animal brain depends on experience (whether or not the observed actions have previously been performed). The mirror neuron network was studied in C57/BL6 mice during observation of demonstrator mice swimming in a Morris water maze. Mirror neuron systems were found in the motor cortex (M1, M2), cingulate cortex and hippocampus in both groups of mice, those with prior swimming experience and those without it. We conclude that new functional network systems can be formed by means of mirror neuron systems, and that animals can acquire new knowledge through observation in non-specific tasks.

  10. Energy-efficient neural information processing in individual neurons and neuronal networks.

    PubMed

    Yu, Lianchun; Yu, Yuguo

    2017-11-01

    Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. The factors range from ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.

  11. Effect of the heterogeneous neuron and information transmission delay on stochastic resonance of neuronal networks

    NASA Astrophysics Data System (ADS)

    Wang, Qingyun; Zhang, Honghui; Chen, Guanrong

    2012-12-01

    We study the effect of heterogeneous neuron and information transmission delay on stochastic resonance of scale-free neuronal networks. For this purpose, we introduce the heterogeneity to the specified neuron with the highest degree. It is shown that in the absence of delay, an intermediate noise level can optimally assist spike firings of collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate αh, which plays a heterogeneous role. Maxima of stochastic resonance measure are enhanced as αh increases, which implies that the heterogeneity can improve stochastic resonance. However, as αh is beyond a certain large value, no obvious stochastic resonance can be observed. If the information transmission delay is introduced to neuronal networks, stochastic resonance is dramatically affected. In particular, the tuned information transmission delay can induce multiple stochastic resonance, which can be manifested as well-expressed maximum in the measure for stochastic resonance, appearing every multiple of one half of the subthreshold stimulus period. Furthermore, we can observe that stochastic resonance at odd multiple of one half of the subthreshold stimulus period is subharmonic, as opposed to the case of even multiple of one half of the subthreshold stimulus period. More interestingly, multiple stochastic resonance can also be improved by the suitable heterogeneous neuron. Presented results can provide good insights into the understanding of the heterogeneous neuron and information transmission delay on realistic neuronal networks.

  12. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.

    PubMed

    Kazantsev, V B; Asatryan, S Yu

    2011-09-01

    Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations, where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that bistability may induce spike communication between inhibitorily coupled neurons in a spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate the basic spike phase locking modes.

  13. Recent developments in VSD imaging of small neuronal networks

    PubMed Central

    Hill, Evan S.; Bruno, Angela M.

    2014-01-01

    Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit mapping, network multifunctionality, the network basis of decision making, and the presence of variably participating neurons in networks. Analytical tools being developed and applied to large-scale VSD imaging data sets are discussed, and the future prospects for this exciting field are considered. PMID:25225295

  14. An improved method for growing neurons: Comparison with standard protocols.

    PubMed

    Pozzi, Diletta; Ban, Jelena; Iseppon, Federico; Torre, Vincent

    2017-03-15

    Since different culturing parameters - such as media composition or cell density - lead to different experimental results, it is important to define the protocol used for neuronal cultures. The vital role of astrocytes in maintaining homeostasis of neurons - both in vivo and in vitro - is well established: the majority of improved culturing conditions for primary dissociated neuronal cultures rely on astrocytes. Our culturing protocol is based on a novel serum-free preparation of astrocyte-conditioned medium (ACM). We compared the proposed ACM culturing method with two other commonly used methods, Neurobasal/B27- and FBS-based media. We performed morphometric characterization by immunocytochemistry and functional analysis by calcium imaging for all three culture methods at 1, 7, 14 and 60 days in vitro (DIV). ACM-based cultures gave the best results for all tested criteria, i.e. growth cone size and shape, neuronal outgrowth and branching, network activity and synchronization, maturation and long-term survival. The differences were more pronounced when compared with FBS-based medium. Neurobasal/B27 cultures were comparable to ACM for young cultures (DIV1), but not for culturing times longer than DIV7. ACM-based cultures showed more robust neuronal outgrowth at DIV1. At DIV7 and 60, neuronal networks grown in ACM showed more vigorous spontaneous electrical activity and a higher degree of synchronization. We propose our ACM-based culture protocol as an improved and more suitable method for both short- and long-term neuronal cultures. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Towards Reproducible Descriptions of Neuronal Network Models

    PubMed Central

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  16. Degree Correlations Optimize Neuronal Network Sensitivity to Sub-Threshold Stimuli

    PubMed Central

    Schmeltzer, Christian; Kihara, Alexandre Hiroaki; Sokolov, Igor Michailovitsch; Rüdiger, Sten

    2015-01-01

    Information processing in the brain crucially depends on the topology of the neuronal connections. We investigate how the topology influences the response of a population of leaky integrate-and-fire neurons to a stimulus. We devise a method to calculate firing rates from a self-consistent system of equations taking into account the degree distribution and degree correlations in the network. We show that assortative degree correlations strongly improve the sensitivity for weak stimuli and propose that such networks possess an advantage in signal processing. We moreover find that there exists an optimum in assortativity at an intermediate level leading to a maximum in input/output mutual information. PMID:26115374
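
    The self-consistent firing-rate calculation sketched below follows the heterogeneous mean-field logic described in the abstract, but with a generic sigmoidal transfer function standing in for the paper's leaky integrate-and-fire rate formula; the function, parameter values, and the Poisson degree distribution are all assumptions. Rates r_k for neurons of in-degree k are iterated to a fixed point; degree correlations would replace the neighbor distribution q(k) by a conditional distribution P(k'|k).

        # Heterogeneous mean-field sketch: self-consistent firing rates r_k for
        # neurons of in-degree k on an uncorrelated network (illustrative only).
        import numpy as np
        from scipy.stats import poisson

        def self_consistent_rates(pk, J=0.1, I_ext=0.2, gain=4.0, theta=1.0,
                                  n_iter=500, tol=1e-10):
            k = np.arange(len(pk))
            pk = pk / pk.sum()
            q = k * pk / (k * pk).sum()       # degree distribution of a random neighbor

            def phi(x):                        # sigmoidal transfer function (assumption)
                return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

            r = np.full(len(pk), 0.1)          # initial guess for r_k
            for _ in range(n_iter):
                r_nn = q @ r                   # mean rate of a presynaptic neighbor
                r_new = phi(I_ext + J * k * r_nn)
                if np.max(np.abs(r_new - r)) < tol:
                    break
                r = r_new
            return k, r

        pk = poisson.pmf(np.arange(50), mu=10)      # stand-in degree distribution
        k, r = self_consistent_rates(pk)
        print("population-averaged rate:", float((pk / pk.sum()) @ r))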

  17. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    PubMed

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.

  18. Phase synchronization of bursting neurons in clustered small-world networks

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Lameu, E. L.; Batista, A. M.; Lopes, S. R.; Pereira, T.; Zamora-López, G.; Kurths, J.; Viana, R. L.

    2012-07-01

    We investigate the collective dynamics of bursting neurons on clustered networks. The clustered network model is composed of subnetworks, each of them presenting the so-called small-world property. This model can also be regarded as a network of networks. In each subnetwork a neuron is connected to other ones with regular as well as random connections, the latter with a given intracluster probability. Moreover, in a given subnetwork each neuron has an intercluster probability to be connected to the other subnetworks. The local neuron dynamics has two time scales (fast and slow) and is modeled by a two-dimensional map. In such a small-world network, the neuron parameters are chosen to be slightly different such that, if the coupling strength is large enough, there may be synchronization of the bursting (slow) activity. We give bounds for the critical coupling strength to obtain global burst synchronization in terms of the network structure, that is, the probabilities of intracluster and intercluster connections. We find that, as the heterogeneity in the network is reduced, the network global synchronizability is improved. We show that the transitions to global synchrony may be abrupt or smooth depending on the intercluster probability.
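
    A minimal sketch of the clustered "network of networks" topology described above: each subnetwork is a Watts-Strogatz small world (regular ring plus random rewiring), and pairs of neurons in different clusters are connected with a separate intercluster probability. Cluster sizes and both probabilities are illustrative assumptions, not the values used in the paper.

        # Clustered small-world topology: small-world subnetworks plus sparse
        # intercluster links. Parameter values are illustrative assumptions.
        import itertools
        import random
        import networkx as nx

        def clustered_small_world(n_clusters=4, cluster_size=100, k_ring=6,
                                  p_intra=0.1, p_inter=0.002, seed=0):
            random.seed(seed)
            g = nx.Graph()
            clusters = []
            for c in range(n_clusters):
                sw = nx.watts_strogatz_graph(cluster_size, k_ring, p_intra, seed=seed + c)
                mapping = {i: c * cluster_size + i for i in sw.nodes}
                g.update(nx.relabel_nodes(sw, mapping))      # add relabeled subnetwork
                clusters.append(list(mapping.values()))
            for ca, cb in itertools.combinations(clusters, 2):
                for u in ca:                                  # sparse intercluster links
                    for v in cb:
                        if random.random() < p_inter:
                            g.add_edge(u, v)
            return g, clusters

        g, clusters = clustered_small_world()
        print(g.number_of_nodes(), "neurons,", g.number_of_edges(), "connections")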

  19. Connectomic constraints on computation in feedforward networks of spiking neurons.

    PubMed

    Ramaswamy, Venkatakrishnan; Banerjee, Arunava

    2014-10-01

    Several efforts are currently underway to decipher the connectome or parts thereof in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is out of the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints-that arise by virtue of the connectome-connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model which satisfies a small number of properties, we ask if just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form, for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network, before we can discern such results for other classes of networks.

  20. Estimating network parameters from combined dynamics of firing rate and irregularity of single neurons.

    PubMed

    Hamaguchi, Kosuke; Riehle, Alexa; Brunel, Nicolas

    2011-01-01

    High firing irregularity is a hallmark of cortical neurons in vivo, and modeling studies suggest a balance of excitation and inhibition is necessary to explain this high irregularity. Such a balance must be generated, at least partly, from local interconnected networks of excitatory and inhibitory neurons, but the details of the local network structure are largely unknown. The dynamics of the neural activity depends on the local network structure; this in turn suggests the possibility of estimating network structure from the dynamics of the firing statistics. Here we report a new method to estimate properties of the local cortical network from the instantaneous firing rate and irregularity (CV(2)) under the assumption that recorded neurons are a part of a randomly connected sparse network. The firing irregularity, measured in monkey motor cortex, exhibits two features; many neurons show relatively stable firing irregularity in time and across different task conditions; the time-averaged CV(2) is widely distributed from quasi-regular to irregular (CV(2) = 0.3-1.0). For each recorded neuron, we estimate the three parameters of a local network [balance of local excitation-inhibition, number of recurrent connections per neuron, and excitatory postsynaptic potential (EPSP) size] that best describe the dynamics of the measured firing rates and irregularities. Our analysis shows that optimal parameter sets form a two-dimensional manifold in the three-dimensional parameter space that is confined for most of the neurons to the inhibition-dominated region. High irregularity neurons tend to be more strongly connected to the local network, either in terms of larger EPSP and inhibitory PSP size or larger number of recurrent connections, compared with the low irregularity neurons, for a given excitatory/inhibitory balance. Incorporating either synaptic short-term depression or conductance-based synapses leads many low CV(2) neurons to move to the excitation-dominated region as
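
    The irregularity measure CV2 referred to above is commonly computed from adjacent interspike intervals as CV2_i = 2|ISI_{i+1} - ISI_i| / (ISI_{i+1} + ISI_i), averaged over i; values near 0 indicate regular firing and values near 1 Poisson-like firing. A minimal sketch with synthetic spike trains:

        # CV2 irregularity measure from a list of spike times (synthetic data).
        import numpy as np

        def cv2(spike_times):
            isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
            if isi.size < 2:
                return np.nan
            return np.mean(2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1]))

        rng = np.random.default_rng(0)
        regular = np.arange(0.0, 10.0, 0.1)                    # perfectly regular train
        poisson_like = np.cumsum(rng.exponential(0.1, size=100))
        print("regular CV2:", cv2(regular), " Poisson CV2:", round(cv2(poisson_like), 2))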

  1. Ergodic properties of spiking neuronal networks with delayed interactions

    NASA Astrophysics Data System (ADS)

    Palmigiano, Agostina; Wolf, Fred

    The dynamical stability of neuronal networks, and the possibility of chaotic dynamics in the brain pose profound questions to the mechanisms underlying perception. Here we advance on the tractability of large neuronal networks of exactly solvable neuronal models with delayed pulse-coupled interactions. Pulse coupled delayed systems with an infinite dimensional phase space can be studied in equivalent systems of fixed and finite degrees of freedom by introducing a delayer variable for each neuron. A Jacobian of the equivalent system can be analytically obtained, and numerically evaluated. We find that depending on the action potential onset rapidness and the level of heterogeneities, the asynchronous irregular regime characteristic of balanced state networks loses stability with increasing delays to either a slow synchronous irregular or a fast synchronous irregular state. In networks of neurons with slow action potential onset, the transition to collective oscillations leads to an increase of the exponential rate of divergence of nearby trajectories and of the entropy production rate of the chaotic dynamics. The attractor dimension, instead of increasing linearly with increasing delay as reported in many other studies, decreases until eventually the network reaches full synchrony.

  2. Autonomous Optimization of Targeted Stimulation of Neuronal Networks.

    PubMed

    Kumar, Sreedhar S; Wülfing, Jan; Okujeni, Samora; Boedecker, Joschka; Riedmiller, Martin; Egert, Ulrich

    2016-08-01

    Driven by clinical needs and progress in neurotechnology, targeted interaction with neuronal networks is of increasing importance. Yet, the dynamics of interaction between intrinsic ongoing activity in neuronal networks and their response to stimulation is unknown. Nonetheless, electrical stimulation of the brain is increasingly explored as a therapeutic strategy and as a means to artificially inject information into neural circuits. Strategies using regular or event-triggered fixed stimuli discount the influence of ongoing neuronal activity on the stimulation outcome and are therefore not optimal to induce specific responses reliably. Yet, without suitable mechanistic models, it is hardly possible to optimize such interactions, in particular when desired response features are network-dependent and are initially unknown. In this proof-of-principle study, we present an experimental paradigm using reinforcement-learning (RL) to optimize stimulus settings autonomously and evaluate the learned control strategy using phenomenological models. We asked how to (1) capture the interaction of ongoing network activity, electrical stimulation and evoked responses in a quantifiable 'state' to formulate a well-posed control problem, (2) find the optimal state for stimulation, and (3) evaluate the quality of the solution found. Electrical stimulation of generic neuronal networks grown from rat cortical tissue in vitro evoked bursts of action potentials (responses). We show that the dynamic interplay of their magnitudes and the probability to be intercepted by spontaneous events defines a trade-off scenario with a network-specific unique optimal latency maximizing stimulus efficacy. An RL controller was set to find this optimum autonomously. Across networks, stimulation efficacy increased in 90% of the sessions after learning and learned latencies strongly agreed with those predicted from open-loop experiments. Our results show that autonomous techniques can exploit quantitative
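
    As a highly simplified stand-in for the closed-loop controller (not the authors' reinforcement-learning setup), the sketch below treats a grid of candidate stimulation latencies as bandit arms, takes the evoked "response magnitude" returned by a mock stimulate() function as the reward, and lets an epsilon-greedy rule converge on the latency with the highest average efficacy. Everything here, including stimulate() and all parameter values, is an assumption for illustration.

        # Epsilon-greedy search for the stimulation latency with maximal efficacy,
        # using a mock network whose response peaks at an unknown optimal latency.
        import numpy as np

        rng = np.random.default_rng(1)
        latencies = np.linspace(0.1, 2.0, 12)          # candidate latencies (s), assumed grid

        def stimulate(latency):
            """Mock network: efficacy peaks at an (unknown) optimal latency, plus noise."""
            optimum, width = 0.9, 0.4
            return np.exp(-((latency - optimum) / width) ** 2) + 0.1 * rng.standard_normal()

        q = np.zeros(len(latencies))                   # running mean reward per latency
        n = np.zeros(len(latencies))
        epsilon = 0.1
        for trial in range(2000):
            if rng.random() < epsilon:
                a = rng.integers(len(latencies))       # explore a random latency
            else:
                a = int(np.argmax(q))                  # exploit the current best latency
            r = stimulate(latencies[a])
            n[a] += 1
            q[a] += (r - q[a]) / n[a]                  # incremental mean update

        print("learned optimal latency ~", latencies[int(np.argmax(q))], "s")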

  3. Developmental time windows for axon growth influence neuronal network topology.

    PubMed

    Lim, Sol; Kaiser, Marcus

    2015-04-01

    Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on types of neurons and cortical layers. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately one neuron after another (serial, i.e., no overlaps in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: Neurons that started axon growth early on in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths while neurons demonstrated more homogeneous connectivity patterns for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks opening up the possibility to a posteriori estimate developmental mechanisms based on network properties of a developed network.

  4. The Dynamics of Networks of Identical Theta Neurons.

    PubMed

    Laing, Carlo R

    2018-02-05

    We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe/Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally-coupled oscillators. As well as the degeneracy associated with the constants of motion of the WS ansatz, we also find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of similar related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.
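
    For reference, the theta neuron obeys dθ/dt = (1 - cos θ) + (1 + cos θ)(η + I), and an all-to-all network couples the units through a pulsatile function of the phases. The sketch below uses a smooth (1 - cos θ)^2 pulse and illustrative parameter values; it is not the specific synaptic model or the Watanabe/Strogatz reduction analyzed in the paper.

        # All-to-all network of identical theta neurons with a smooth pulse coupling;
        # the Kuramoto order parameter summarizes the degree of synchrony.
        import numpy as np

        N, eta, kappa = 200, -0.2, 1.5       # network size, excitability, coupling (assumed)
        dt, steps = 1e-3, 20000

        rng = np.random.default_rng(2)
        theta = rng.uniform(-np.pi, np.pi, N)
        for _ in range(steps):
            pulse = np.mean((1 - np.cos(theta)) ** 2)        # population synaptic drive
            i_syn = kappa * pulse
            dtheta = (1 - np.cos(theta)) + (1 + np.cos(theta)) * (eta + i_syn)
            theta = np.mod(theta + dt * dtheta + np.pi, 2 * np.pi) - np.pi

        # |mean(exp(i*theta))| is 1 for full synchrony and near 0 for incoherence
        r = np.abs(np.mean(np.exp(1j * theta)))
        print(f"order parameter after {steps * dt:.0f} time units: {r:.3f}")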

  5. Neuronal networks: flip-flops in the brain.

    PubMed

    McCormick, David A

    2005-04-26

    Neuronal activity can rapidly flip-flop between stable states. Although these semi-stable states can be generated through interactions of neuronal networks, it is now known that they can also occur in vivo through intrinsic ionic currents.

  6. Cultured Neuronal Networks Express Complex Patterns of Activity and Morphological Memory

    NASA Astrophysics Data System (ADS)

    Raichman, Nadav; Rubinsky, Liel; Shein, Mark; Baruchi, Itay; Volman, Vladislav; Ben-Jacob, Eshel

    The following sections are included: * Cultured Neuronal Networks * Recording the Network Activity * Network Engineering * The Formation of Synchronized Bursting Events * The Characterization of the SBEs * Highly-Active Neurons * Function-Form Relations in Cultured Networks * Analyzing the SBEs Motifs * Network Repertoire * Network under Hypothermia * Summary * Acknowledgments * References

  7. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    PubMed

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large scale phase-contrast images taken at high resolution through the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or neurons' clusters, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measures allow us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neurons' ensemble into a complex network, and drives the formulation of a phenomenological model that is nevertheless able to qualitatively describe the overall scenario observed during the culture growth. © 2014 International Society for Advancement of Cytometry.

  8. On the Dynamics of the Spontaneous Activity in Neuronal Networks

    PubMed Central

    Bonifazi, Paolo; Ruaro, Maria Elisabetta; Torre, Vincent

    2007-01-01

    Most neuronal networks, even in the absence of external stimuli, produce spontaneous bursts of spikes separated by periods of reduced activity. The origin and functional role of these neuronal events are still unclear. The present work shows that the spontaneous activity of two very different networks, intact leech ganglia and dissociated cultures of rat hippocampal neurons, share several features. Indeed, in both networks: i) the inter-spike intervals distribution of the spontaneous firing of single neurons is either regular or periodic or bursting, with the fraction of bursting neurons depending on the network activity; ii) bursts of spontaneous spikes have the same broad distributions of size and duration; iii) the degree of correlated activity increases with the bin width, and the power spectrum of the network firing rate has a 1/f behavior at low frequencies, indicating the existence of long-range temporal correlations; iv) the activity of excitatory synaptic pathways mediated by NMDA receptors is necessary for the onset of the long-range correlations and for the presence of large bursts; v) blockage of inhibitory synaptic pathways mediated by GABAA receptors causes instead an increase in the correlation among neurons and leads to a burst distribution composed only of very small and very large bursts. These results suggest that the spontaneous electrical activity in neuronal networks with different architectures and functions can have very similar properties and common dynamics. PMID:17502919

  9. Constrained synaptic connectivity in functional mammalian neuronal networks grown on patterned surfaces.

    PubMed

    Wyart, Claire; Ybert, Christophe; Bourdieu, Laurent; Herr, Catherine; Prinz, Christelle; Chatenay, Didier

    2002-06-30

    The use of ordered neuronal networks in vitro is a promising approach to study the development and the activity of small neuronal assemblies. However, in previous attempts, sufficient growth control and physiological maturation of neurons could not be achieved. Here we describe an original protocol in which polylysine patterns confine the adhesion of cellular bodies to prescribed spots and the neuritic growth to thin lines. Hippocampal neurons in these networks are maintained healthy in serum free medium up to 5 weeks in vitro. Electrophysiology and immunochemistry show that neurons exhibit mature excitatory and inhibitory synapses and calcium imaging reveals spontaneous activity of neurons in isolated networks. We demonstrate that neurons in these geometrical networks form functional synapses preferentially to their first neighbors. We have, therefore, established a simple and robust protocol to constrain both the location of neuronal cell bodies and their pattern of connectivity. Moreover, the long term maintenance of the geometry and the physiology of the networks raises the possibility of new applications for systematic screening of pharmacological agents and for electronic to neuron devices.

  10. Convergent neuromodulation onto a network neuron can have divergent effects at the network level

    PubMed Central

    Kintos, Nickolas; Nusbaum, Michael P.; Nadim, Farzan

    2016-01-01

    Different neuromodulators often target the same ion channel. When such modulators act on different neuron types, this convergent action can enable a rhythmic network to produce distinct outputs. Less clear are the functional consequences when two neuromodulators influence the same ion channel in the same neuron. We examine the consequences of this seeming redundancy using a mathematical model of the crab gastric mill (chewing) network. This network is activated in vitro by the projection neuron MCN1, which elicits a half-center bursting oscillation between the reciprocally-inhibitory neurons LG and Int1. We focus on two neuropeptides which modulate this network, including a MCN1 neurotransmitter and the hormone crustacean cardioactive peptide (CCAP). Both activate the same voltage-gated current (IMI) in the LG neuron. However, IMI-MCN1, resulting from MCN1 released neuropeptide, has phasic dynamics in its maximal conductance due to LG presynaptic inhibition of MCN1, while IMI-CCAP retains the same maximal conductance in both phases of the gastric mill rhythm. Separation of time scales allows us to produce a 2D model from which phase plane analysis shows that, as in the biological system, IMI-MCN1 and IMI-CCAP primarily influence the durations of opposing phases of this rhythm. Furthermore, IMI-MCN1 influences the rhythmic output in a manner similar to the Int1-to-LG synapse, whereas IMI-CCAP has an influence similar to the LG-to-Int1 synapse. These results show that distinct neuromodulators which target the same voltage-gated ion channel in the same network neuron can nevertheless produce distinct effects at the network level, providing divergent neuromodulator actions on network activity. PMID:26798029

  11. Leader neurons in leaky integrate and fire neural network simulations.

    PubMed

    Zbinden, Cyrille

    2011-10-01

    In this paper, we highlight the topological properties of leader neurons whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively network spike) more often than we expect by chance considering its mean firing rate. This means that leader neurons have some burst triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate neural 2D networks. To build our simulations, we choose the leaky integrate and fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model exhibits stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Apart from these first two properties, the conditions required for a neuron to be a leader neuron are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend in the properties required for a neuron to be a leader neuron. Our main finding is: A leader neuron sends signals to many excitatory neurons as well as to a few inhibitory neurons, while a leader neuron receives signals from only a few other excitatory neurons. Our linear analysis exhibits five essential properties of leader neurons, each with a different relative importance. This means that considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of

  12. Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity

    PubMed Central

    Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2016-01-01

    Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points. PMID:27212008

  13. Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity.

    PubMed

    Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2016-05-23

    Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points.

  14. Numerical simulation of coherent resonance in a model network of Rulkov neurons

    NASA Astrophysics Data System (ADS)

    Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them and then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
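
    The Rulkov map neuron referred to above iterates a fast variable x and a slow variable y as x_{n+1} = α/(1 + x_n^2) + y_n and y_{n+1} = y_n - μ(x_n - σ). The sketch below adds noise to the fast variable of a single map and quantifies spiking regularity by the coefficient of variation (CV) of interspike intervals; a CV minimum at intermediate noise is the coherence-resonance signature. The parameter values are assumptions and may need tuning to sit in the excitable regime studied in the paper.

        # Single noisy Rulkov map neuron: spiking regularity (ISI CV) versus noise.
        import numpy as np

        def rulkov_isi_cv(noise, alpha=3.9, sigma=-1.1, mu=0.001,
                          steps=200000, thresh=0.0, seed=0):
            rng = np.random.default_rng(seed)
            x, y = -1.0, -3.0
            spikes = []
            above = False
            for n in range(steps):
                x_new = alpha / (1.0 + x * x) + y + noise * rng.standard_normal()
                y = y - mu * (x - sigma)
                x = x_new
                if x > thresh and not above:        # upward threshold crossing = spike
                    spikes.append(n)
                    above = True
                elif x <= thresh:
                    above = False
            isi = np.diff(spikes)
            return np.std(isi) / np.mean(isi) if isi.size > 2 else np.nan

        for noise in [0.005, 0.02, 0.05, 0.1, 0.3]:
            print(f"noise={noise:<6} ISI CV={rulkov_isi_cv(noise):.3f}")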

  15. Autapse-Induced Spiral Wave in Network of Neurons under Noise

    PubMed Central

    Qin, Huixin; Ma, Jun; Wang, Chunni; Wu, Ying

    2014-01-01

    Autapses play an important role in regulating the electrical activity of a neuron by feeding back a time-delayed current onto its membrane. Autapses are considered in a local area of a regular network of neurons to investigate the development of spatiotemporal patterns, and the emergence of a spiral wave is observed, although it fails to grow and occupy the network completely. It is found that the spiral wave can be induced to occupy a larger area of the network under optimized noise, with either periodic or no-flux boundary conditions. The developed spiral wave, with its self-sustained property, can regulate the collective behavior of neurons as a pacemaker. To detect the collective behavior, a statistical factor of synchronization is calculated to investigate the emergence of an ordered state in the network. The network maintains an ordered state when a self-sustained spiral wave is formed under noise and autapses in a local area of the network, independent of the choice of periodic or no-flux boundary conditions. The developed stable spiral wave could be helpful for memory due to its distinct self-sustained property. PMID:24967577
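
    The "statistical factor of synchronization" mentioned above is commonly defined as R = Var_t[F] / mean_i Var_t[x_i], where F(t) is the instantaneous population mean of the membrane variables x_i(t); R approaches 1 for fully synchronous activity and tends to 0 as neurons decorrelate. A minimal sketch with placeholder data (not simulation output from the paper's model):

        # Statistical factor of synchronization R for a (neurons x time) array.
        import numpy as np

        def synchronization_factor(x):
            """x: array of shape (N_neurons, T_timesteps) of membrane potentials."""
            f = x.mean(axis=0)                       # population-averaged signal F(t)
            return f.var() / x.var(axis=1).mean()

        rng = np.random.default_rng(3)
        common = np.sin(np.linspace(0, 20 * np.pi, 5000))
        sync = np.tile(common, (50, 1)) + 0.05 * rng.standard_normal((50, 5000))
        async_ = rng.standard_normal((50, 5000))
        print("synchronous:", round(synchronization_factor(sync), 3),
              " asynchronous:", round(synchronization_factor(async_), 3))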

  16. Autapse-induced spiral wave in network of neurons under noise.

    PubMed

    Qin, Huixin; Ma, Jun; Wang, Chunni; Wu, Ying

    2014-01-01

    Autapses play an important role in regulating the electrical activity of a neuron by feeding back a time-delayed current onto its membrane. Autapses are considered in a local area of a regular network of neurons to investigate the development of spatiotemporal patterns, and the emergence of a spiral wave is observed, although it fails to grow and occupy the network completely. It is found that the spiral wave can be induced to occupy a larger area of the network under optimized noise, with either periodic or no-flux boundary conditions. The developed spiral wave, with its self-sustained property, can regulate the collective behavior of neurons as a pacemaker. To detect the collective behavior, a statistical factor of synchronization is calculated to investigate the emergence of an ordered state in the network. The network maintains an ordered state when a self-sustained spiral wave is formed under noise and autapses in a local area of the network, independent of the choice of periodic or no-flux boundary conditions. The developed stable spiral wave could be helpful for memory due to its distinct self-sustained property.

  17. Synaptic Impairment and Robustness of Excitatory Neuronal Networks with Different Topologies

    PubMed Central

    Mirzakhalili, Ehsan; Gourgou, Eleni; Booth, Victoria; Epureanu, Bogdan

    2017-01-01

    Synaptic deficiencies are a known hallmark of neurodegenerative diseases, but the diagnosis of impaired synapses on the cellular level is not an easy task. Nonetheless, changes in the system-level dynamics of neuronal networks with damaged synapses can be detected using techniques that do not require high spatial resolution. This paper investigates how the structure/topology of neuronal networks influences their dynamics when they suffer from synaptic loss. We study different neuronal network structures/topologies by specifying their degree distributions. The modes of the degree distribution can be used to construct networks that consist of rich clubs and also resemble small-world networks. We define two dynamical metrics to compare the activity of networks with different structures: persistent activity (namely, the self-sustained activity of the network upon removal of the initial stimulus) and quality of activity (namely, the percentage of neurons that participate in the persistent activity of the network). Our results show that synaptic loss affects the persistent activity of networks with bimodal degree distributions less than it affects random networks. The robustness of neuronal networks is enhanced when the distance between the modes of the degree distribution increases, suggesting that the rich clubs of networks with distinct modes keep the whole network active. In addition, a tradeoff is observed between the quality of activity and the persistent activity. For a range of distributions, both of these dynamical metrics are considerably higher for networks with bimodal degree distributions than for random networks. We also propose three different scenarios of synaptic impairment, which may correspond to different pathological or biological conditions. Regardless of the network structure/topology, results demonstrate that synaptic loss has more severe effects on the activity of the network when impairments are correlated with the activity of the neurons. PMID

  18. Autonomous Optimization of Targeted Stimulation of Neuronal Networks

    PubMed Central

    Kumar, Sreedhar S.; Wülfing, Jan; Okujeni, Samora; Boedecker, Joschka; Riedmiller, Martin

    2016-01-01

    Driven by clinical needs and progress in neurotechnology, targeted interaction with neuronal networks is of increasing importance. Yet, the dynamics of interaction between intrinsic ongoing activity in neuronal networks and their response to stimulation is unknown. Nonetheless, electrical stimulation of the brain is increasingly explored as a therapeutic strategy and as a means to artificially inject information into neural circuits. Strategies using regular or event-triggered fixed stimuli discount the influence of ongoing neuronal activity on the stimulation outcome and are therefore not optimal to induce specific responses reliably. Yet, without suitable mechanistic models, it is hardly possible to optimize such interactions, in particular when desired response features are network-dependent and are initially unknown. In this proof-of-principle study, we present an experimental paradigm using reinforcement-learning (RL) to optimize stimulus settings autonomously and evaluate the learned control strategy using phenomenological models. We asked how to (1) capture the interaction of ongoing network activity, electrical stimulation and evoked responses in a quantifiable ‘state’ to formulate a well-posed control problem, (2) find the optimal state for stimulation, and (3) evaluate the quality of the solution found. Electrical stimulation of generic neuronal networks grown from rat cortical tissue in vitro evoked bursts of action potentials (responses). We show that the dynamic interplay of their magnitudes and the probability to be intercepted by spontaneous events defines a trade-off scenario with a network-specific unique optimal latency maximizing stimulus efficacy. An RL controller was set to find this optimum autonomously. Across networks, stimulation efficacy increased in 90% of the sessions after learning and learned latencies strongly agreed with those predicted from open-loop experiments. Our results show that autonomous techniques can exploit

  19. A real-time hybrid neuron network for highly parallel cognitive systems.

    PubMed

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under 'real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for a maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effects on calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4x compared to the state of the art.

  20. Intrinsic Neuronal Properties Switch the Mode of Information Transmission in Networks

    PubMed Central

    Gjorgjieva, Julijana; Mease, Rebecca A.; Moody, William J.; Fairhall, Adrienne L.

    2014-01-01

    Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over a few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity. This work underscores the significance of simple changes in conductance parameters in governing how neurons

  1. A new cross-correlation algorithm for the analysis of "in vitro" neuronal network activity aimed at pharmacological studies.

    PubMed

    Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A

    2011-08-15

    Modern drug discovery for Central Nervous System pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activities. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing an insight into the dynamics of networks. Taking advantage of MEA features and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, evaluated its sensitivity, and explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
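
    The basic ingredient of such a cross-correlation analysis is sketched below (an illustration, not the authors' algorithm): spike trains from two electrodes are binned, and the normalized cross-correlogram yields a correlation strength and a lag. Spike times, bin width, and lag range are placeholder assumptions.

        # Binned cross-correlogram between two synthetic MEA spike trains (times in ms).
        import numpy as np

        def binned(spike_times, t_max, bin_ms=10.0):
            edges = np.arange(0.0, t_max + bin_ms, bin_ms)
            counts, _ = np.histogram(spike_times, bins=edges)
            return counts.astype(float)

        def cross_correlogram(a, b, max_lag_bins=20):
            a = a - a.mean()
            b = b - b.mean()
            norm = np.sqrt(np.sum(a * a) * np.sum(b * b))
            lags = np.arange(-max_lag_bins, max_lag_bins + 1)
            cc = np.array([np.sum(a[max(0, -l):len(a) - max(0, l)] *
                                  b[max(0, l):len(b) - max(0, -l)]) for l in lags]) / norm
            return lags, cc

        rng = np.random.default_rng(4)
        burst_starts = np.arange(0, 60000, 2000)                         # one burst every 2 s
        e1 = np.concatenate([s + rng.exponential(30, 40) for s in burst_starts])
        e2 = np.concatenate([s + 50 + rng.exponential(30, 40) for s in burst_starts])  # ~50 ms lag
        lags, cc = cross_correlogram(binned(e1, 60000), binned(e2, 60000))
        print("peak correlation", round(cc.max(), 2), "at lag", lags[np.argmax(cc)] * 10, "ms")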

  2. Convergent neuromodulation onto a network neuron can have divergent effects at the network level.

    PubMed

    Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan

    2016-04-01

    Different neuromodulators often target the same ion channel. When such modulators act on different neuron types, this convergent action can enable a rhythmic network to produce distinct outputs. Less clear are the functional consequences when two neuromodulators influence the same ion channel in the same neuron. We examine the consequences of this seeming redundancy using a mathematical model of the crab gastric mill (chewing) network. This network is activated in vitro by the projection neuron MCN1, which elicits a half-center bursting oscillation between the reciprocally-inhibitory neurons LG and Int1. We focus on two neuropeptides which modulate this network, including a MCN1 neurotransmitter and the hormone crustacean cardioactive peptide (CCAP). Both activate the same voltage-gated current (IMI) in the LG neuron. However, IMI-MCN1, resulting from MCN1 released neuropeptide, has phasic dynamics in its maximal conductance due to LG presynaptic inhibition of MCN1, while IMI-CCAP retains the same maximal conductance in both phases of the gastric mill rhythm. Separation of time scales allows us to produce a 2D model from which phase plane analysis shows that, as in the biological system, IMI-MCN1 and IMI-CCAP primarily influence the durations of opposing phases of this rhythm. Furthermore, IMI-MCN1 influences the rhythmic output in a manner similar to the Int1-to-LG synapse, whereas IMI-CCAP has an influence similar to the LG-to-Int1 synapse. These results show that distinct neuromodulators which target the same voltage-gated ion channel in the same network neuron can nevertheless produce distinct effects at the network level, providing divergent neuromodulator actions on network activity.

  3. Associative memory in phasing neuron networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nair, Niketh S; Bochove, Erik J.; Braiman, Yehuda

    2014-01-01

    We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Networks of Hindmarsh-Rose neurons can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule where spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.
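
    A common formulation of associative memory with Kuramoto oscillators, in the spirit of the abstract (not necessarily the authors' exact model), stores binary patterns in Hebbian couplings K_ij, imposes a corrupted pattern as initial phases 0 or π, and lets the phase dynamics relax back to the stored pattern, read out as sign(cos θ):

        # Associative memory with Kuramoto phase oscillators and Hebbian couplings.
        import numpy as np

        rng = np.random.default_rng(5)
        N, P = 100, 3
        patterns = rng.choice([-1.0, 1.0], size=(P, N))          # stored binary patterns
        K = patterns.T @ patterns / N                             # Hebbian coupling matrix
        np.fill_diagonal(K, 0.0)

        def recall(cue, steps=2000, dt=0.05):
            # small jitter breaks the sin(0) = sin(pi) = 0 degeneracy of binary phases
            theta = np.where(cue > 0, 0.0, np.pi) + 0.05 * rng.standard_normal(N)
            for _ in range(steps):
                dtheta = np.sum(K * np.sin(theta[None, :] - theta[:, None]), axis=1)
                theta = theta + dt * dtheta
            return np.sign(np.cos(theta))                         # decode back to +/-1

        target = patterns[0]
        cue = target.copy()
        flip = rng.choice(N, size=20, replace=False)              # corrupt 20% of the bits
        cue[flip] *= -1
        recalled = recall(cue)
        print("overlap with stored pattern:", abs(float(recalled @ target)) / N)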

  4. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse IO elements. In this manner, training time is decreased by as much as three orders of magnitude.

  5. Bayesian Networks Predict Neuronal Transdifferentiation.

    PubMed

    Ainsworth, Richard I; Ai, Rizi; Ding, Bo; Li, Nan; Zhang, Kai; Wang, Wei

    2018-05-30

    We employ the language of Bayesian networks to systematically construct gene-regulation topologies from deep-sequencing single-nucleus RNA-Seq data for human neurons. From the perspective of the cell-state potential landscape, we identify attractors that correspond closely to different neuron subtypes. Attractors are also recovered for cell states from an independent data set, confirming our model's accurate description of global genetic regulations across differing cell types of the neocortex (not included in the training data). Our model recovers experimentally confirmed genetic regulations, and community analysis reveals genetic associations in common pathways. Via a comprehensive scan of all theoretical three-gene perturbations of gene knockout and overexpression, we discover novel neuronal trans-differentiation recipes (including perturbations of SATB2, GAD1, POU6F2 and ADARB2) for excitatory projection neuron and inhibitory interneuron subtypes. Copyright © 2018, G3: Genes, Genomes, Genetics.

  6. Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons

    NASA Astrophysics Data System (ADS)

    Costa, Ariadne; Brochini, Ludmila; Kinouchi, Osame

    2017-08-01

    Networks of stochastic spiking neurons are interesting models in the area of Theoretical Neuroscience, presenting both continuous and discontinuous phase transitions. Here we study fully connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary slightly supercritical state (self-organized supercriticality or SOSC) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss the possible applications of the idea of SOSC to biological phenomena like epilepsy and dragon king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches, with frequencies compatible with characteristic brain rhythms.

  7. Collective Behavior of Place and Non-place Neurons in the Hippocampal Network.

    PubMed

    Meshulam, Leenoy; Gauthier, Jeffrey L; Brody, Carlos D; Tank, David W; Bialek, William

    2017-12-06

    Discussions of the hippocampus often focus on place cells, but many neurons are not place cells in any given environment. Here we describe the collective activity in such mixed populations, treating place and non-place cells on the same footing. We start with optical imaging experiments on CA1 in mice as they run along a virtual linear track and use maximum entropy methods to approximate the distribution of patterns of activity in the population, matching the correlations between pairs of cells but otherwise assuming as little structure as possible. We find that these simple models accurately predict the activity of each neuron from the state of all the other neurons in the network, regardless of how well that neuron codes for position. Our results suggest that understanding the neural activity may require not only knowledge of the external variables modulating it but also of the internal network state. Copyright © 2017 Elsevier Inc. All rights reserved.
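
    In a pairwise maximum entropy (Ising-like) model, the probability that one neuron is active given the states of all the others is a logistic function of those states, so fitting one logistic regression per neuron (a pseudo-likelihood shortcut, not the authors' full maximum entropy fit) directly tests how well each cell is predicted from the rest of the network. The binary data below are synthetic placeholders.

        # Predict each neuron's activity from all other neurons (pseudo-likelihood).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        T, N = 5000, 20
        latent = rng.standard_normal(T)                          # shared network state
        p_active = 1 / (1 + np.exp(-(latent[:, None] - 1.0)))
        spikes = (rng.random((T, N)) < p_active).astype(int)     # correlated binary activity

        scores = []
        for i in range(N):
            others = np.delete(spikes, i, axis=1)
            model = LogisticRegression(max_iter=1000).fit(others, spikes[:, i])
            scores.append(model.score(others, spikes[:, i]))     # prediction accuracy
        print("mean accuracy predicting each neuron from the others:", round(np.mean(scores), 3))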

  8. Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?

    PubMed

    Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R

    2014-01-08

    How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.

  9. Transition to Chaos in Random Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Kadmon, Jonathan; Sompolinsky, Haim

    2015-10-01

    Firing patterns in the central nervous system often exhibit strong temporal irregularity and considerable heterogeneity in time-averaged response properties. Previous studies suggested that these properties are the outcome of the intrinsic chaotic dynamics of the neural circuits. Indeed, simplified rate-based neuronal networks with synaptic connections drawn from Gaussian distribution and sigmoidal nonlinearity are known to exhibit chaotic dynamics when the synaptic gain (i.e., connection variance) is sufficiently large. In the limit of an infinitely large network, there is a sharp transition from a fixed point to chaos, as the synaptic gain reaches a critical value. Near the onset, chaotic fluctuations are slow, analogous to the ubiquitous, slow irregular fluctuations observed in the firing rates of many cortical circuits. However, the existence of a transition from a fixed point to chaos in neuronal circuit models with more realistic architectures and firing dynamics has not been established. In this work, we investigate rate-based dynamics of neuronal circuits composed of several subpopulations with randomly diluted connections. Nonzero connections are either positive for excitatory neurons or negative for inhibitory ones, while single neuron output is strictly positive with output rates rising as a power law above threshold, in line with known constraints in many biological systems. Using dynamic mean field theory, we find the phase diagram depicting the regimes of stable fixed-point, unstable-dynamic, and chaotic-rate fluctuations. We focus on the latter and characterize the properties of systems near this transition. We show that dilute excitatory-inhibitory architectures exhibit the same onset to chaos as the single population with Gaussian connectivity. In these architectures, the large mean excitatory and inhibitory inputs dynamically balance each other, amplifying the effect of the residual fluctuations. Importantly, the existence of a transition to chaos
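
    The classical setting the abstract refers to, Gaussian couplings with variance g^2/N and a sigmoidal (tanh) transfer function, has a stable fixed point for g < 1 and chaotic rate fluctuations for g > 1. The sketch below probes this by following two nearby trajectories; it is the simplified single-population model, not the diluted excitatory-inhibitory architecture analyzed in the paper, and all numerical settings are assumptions.

        # Random rate network: divergence of nearby trajectories as a rough chaos probe.
        import numpy as np

        def simulate(g, N=500, T=200.0, dt=0.05, seed=7):
            rng = np.random.default_rng(seed)
            J = rng.standard_normal((N, N)) * g / np.sqrt(N)      # Gaussian couplings
            x = rng.standard_normal(N) * 0.1
            x_pert = x + 1e-6 * rng.standard_normal(N)            # nearby initial condition
            for _ in range(int(T / dt)):
                x = x + dt * (-x + J @ np.tanh(x))
                x_pert = x_pert + dt * (-x_pert + J @ np.tanh(x_pert))
            return np.linalg.norm(x - x_pert), np.std(x)

        for g in [0.5, 0.9, 1.5, 2.5]:
            sep, amp = simulate(g)
            print(f"g={g}: final separation={sep:.2e}, activity std={amp:.2f}")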

  10. Biological conservation law as an emerging functionality in dynamical neuronal networks.

    PubMed

    Podobnik, Boris; Jusup, Marko; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M; Stanley, H Eugene

    2017-11-07

    Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law-the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective.

  11. Biological conservation law as an emerging functionality in dynamical neuronal networks

    PubMed Central

    Podobnik, Boris; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M.

    2017-01-01

    Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law—the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective. PMID:29078286
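
    A minimal sketch of the kind of interplay described above (threshold or random firing combined with a strengthen-when-coexcited, weaken-otherwise synaptic rule) while monitoring the total synaptic strength; all parameters are illustrative, and no attempt is made to tune the toy network to its critical point, where the approximate conservation is reported to arise.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 200                    # neurons
        p_spont = 0.02             # probability of spontaneous firing
        theta = 0.25               # fraction of active neighbours needed to fire
        dw = 0.01                  # plasticity step
        A = (rng.random((N, N)) < 0.05).astype(float)   # random 0/1 adjacency matrix
        np.fill_diagonal(A, 0)
        W = 0.5 * A                                     # initial synaptic strengths
        state = (rng.random(N) < p_spont).astype(float)

        totals = []
        for step in range(2000):
            drive = A @ state / np.maximum(A.sum(axis=1), 1)      # active-neighbour fraction
            state = ((drive > theta) | (rng.random(N) < p_spont)).astype(float)
            co_active = np.outer(state, state)                    # 1 where both endpoints fired
            W += dw * np.where(co_active > 0, 1.0, -1.0) * A      # strengthen or weaken existing synapses
            np.clip(W, 0.0, 1.0, out=W)
            totals.append(W.sum())

        # Whether this total is approximately conserved depends on operating near the
        # critical point; these toy parameters make no attempt to tune that.
        print("total synaptic strength, first/last:", totals[0], totals[-1])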

  12. Developing neuronal networks: Self-organized criticality predicts the future

    NASA Astrophysics Data System (ADS)

    Pu, Jiangbo; Gong, Hui; Li, Xiangning; Luo, Qingming

    2013-01-01

    Self-organized criticality emerging in neural activity is one of the key concepts used to describe the formation and the function of developing neuronal networks. The relationship between critical dynamics and neural development is both theoretically and experimentally appealing. However, whereas it is well-known that cortical networks exhibit a rich repertoire of activity patterns at different stages during in vitro maturation, how dynamical activity patterns evolve over the entire course of neural development still remains unclear. Here we show that a series of metastable network states emerged in the developing and "aging" process of hippocampal networks cultured from dissociated rat neurons. The unidirectional sequence of state transitions could only be observed in networks showing power-law scaling of distributed neuronal avalanches. Our data suggest that self-organized criticality may guide spontaneous activity into a sequential succession of homeostatically-regulated transient patterns during development, which may help to predict the future course of neural development from early ages.
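
    A minimal sketch of how neuronal avalanches and their size distribution are typically extracted from a binned spike raster; the synthetic raster and the crude log-log fit below are illustrative stand-ins for the cultured-network data and for rigorous power-law fitting.

        import numpy as np

        rng = np.random.default_rng(2)
        raster = rng.random((60, 20000)) < 0.002          # channels x time bins, sparse events
        events_per_bin = raster.sum(axis=0)

        # An avalanche is a contiguous run of non-empty bins bounded by empty bins;
        # its size is the total number of events in the run.
        sizes, current = [], 0
        for n in events_per_bin:
            if n > 0:
                current += int(n)
            elif current > 0:
                sizes.append(current)
                current = 0
        sizes = np.array(sizes)

        # Crude power-law check: slope of a log-log fit to the size histogram.
        counts = np.bincount(sizes)[1:]
        s = np.arange(1, len(counts) + 1)
        mask = counts > 0
        slope, _ = np.polyfit(np.log(s[mask]), np.log(counts[mask]), 1)
        print("avalanches detected:", len(sizes), " log-log slope:", round(float(slope), 2))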

  13. Synchronization in a non-uniform network of excitatory spiking neurons

    NASA Astrophysics Data System (ADS)

    Echeveste, Rodrigo; Gros, Claudius

    Spontaneous synchronization of pulse-coupled elements is ubiquitous in nature and seems to be of vital importance for life. Networks of pacemaker cells in the heart, extended populations of Southeast Asian fireflies, and neuronal oscillations in cortical networks are examples of this. In the present work, a rich repertoire of dynamical states with different degrees of synchronization is found in a network of excitatory-only spiking neurons connected in a non-uniform fashion. In particular, uncorrelated and partially correlated states are found without the need for inhibitory neurons or external currents. The phase transitions between these states, as well as the robustness, stability, and response of the network to external stimuli, are studied.

  14. The Drosophila Clock Neuron Network Features Diverse Coupling Modes and Requires Network-wide Coherence for Robust Circadian Rhythms.

    PubMed

    Yao, Zepeng; Bennett, Amelia J; Clem, Jenna L; Shafer, Orie T

    2016-12-13

    In animals, networks of clock neurons containing molecular clocks orchestrate daily rhythms in physiology and behavior. However, how various types of clock neurons communicate and coordinate with one another to produce coherent circadian rhythms is not well understood. Here, we investigate clock neuron coupling in the brain of Drosophila and demonstrate that the fly's various groups of clock neurons display unique and complex coupling relationships to core pacemaker neurons. Furthermore, we find that coordinated free-running rhythms require molecular clock synchrony not only within the well-characterized lateral clock neuron classes but also between lateral clock neurons and dorsal clock neurons. These results uncover unexpected patterns of coupling in the clock neuron network and reveal that robust free-running behavioral rhythms require a coherence of molecular oscillations across most of the fly's clock neuron network.

  15. Numbers And Gains Of Neurons In Winner-Take-All Networks

    NASA Technical Reports Server (NTRS)

    Brown, Timothy X.

    1993-01-01

    This report presents a theoretical study of the gains required in neurons to implement a winner-take-all electronic neural network of a given size, and of the related question of the maximum size of a winner-take-all network in which the neurons have a specified sigmoid transfer (response) function with a specified gain.

  16. A distance constrained synaptic plasticity model of C. elegans neuronal network

    NASA Astrophysics Data System (ADS)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by the enquiry for principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn basic governing principles that drive structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The identified 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small world nature. The simple 1D ring model is critically poised for the number of feed forward motifs, neuronal clustering and characteristic path-length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains small world nature, saturation of feed forward motifs as well as the observed number of driver neurons. The distance constrained model suggests optimum long distance synaptic connections as a key feature specifying control of the network.
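
    A minimal sketch of the kind of graph-theoretic measures discussed above (clustering, characteristic path length, feed-forward motif count), computed here with networkx on a random directed surrogate rather than the actual C. elegans connectome.

        import networkx as nx

        N, p = 100, 0.05
        G = nx.gnp_random_graph(N, p, seed=3, directed=True)   # random surrogate wiring diagram

        # Small-world style measures are usually computed on the undirected skeleton.
        U = G.to_undirected()
        if nx.is_connected(U):
            path_len = nx.average_shortest_path_length(U)
        else:
            giant = U.subgraph(max(nx.connected_components(U), key=len))
            path_len = nx.average_shortest_path_length(giant)
        clustering = nx.average_clustering(U)

        # Feed-forward motifs: triples with edges u->v, u->w and v->w.
        ffl = 0
        for u, v in G.edges():
            ffl += sum(1 for w in G.successors(v) if w not in (u, v) and G.has_edge(u, w))

        print(f"clustering={clustering:.3f}, path length={path_len:.2f}, FFL motifs={ffl}")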

  17. Experiments in clustered neuronal networks: A paradigm for complex modular dynamics

    NASA Astrophysics Data System (ADS)

    Teller, Sara; Soriano, Jordi

    2016-06-01

    Uncovering the interplay between activity and connectivity is one of the major challenges in neuroscience. To deepen the understanding of how a neuronal circuit shapes network dynamics, neuronal cultures have emerged as remarkable systems given their accessibility and ease of manipulation. An attractive configuration of these in vitro systems consists of an ensemble of interconnected clusters of neurons. Using calcium fluorescence imaging to monitor spontaneous activity in these clustered neuronal networks, we were able to draw functional maps and reveal their topological features. We also observed that these networks exhibit hierarchical modular dynamics, in which clusters fire in small groups that shape characteristic communities in the network. The structure and stability of these communities are sensitive to chemical or physical action, and therefore their analysis may serve as a proxy for network health. Indeed, the combination of all these approaches is helping to develop models to quantify damage upon network degradation, with promising applications for the study of neurological disorders in vitro.

  18. Transient sequences in a hypernetwork generated by an adaptive network of spiking neurons.

    PubMed

    Maslennikov, Oleg V; Shchapin, Dmitry S; Nekorkin, Vladimir I

    2017-06-28

    We propose a model of an adaptive network of spiking neurons that gives rise to a hypernetwork of its dynamic states at the upper level of description. Left to itself, the network exhibits a sequence of transient clusterings, which corresponds to traffic in the hypernetwork in the form of a random walk. Receiving inputs, the system is able to generate reproducible sequences corresponding to stimulus-specific paths in the hypernetwork. We illustrate these basic notions by a simple network of discrete-time spiking neurons together with its FPGA realization and analyse their properties.

  19. Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.

    PubMed

    Siettos, Constantinos; Starke, Jens

    2016-09-01

    The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single-neuron dynamics through the behavior of groups of neurons to neuronal network activity. Thus, the connection from the microscopic scale (single-neuron activity) to macroscopic behavior (the emergent behavior of the collective dynamics) and vice versa is key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), Integrate and Fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of the causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques.
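
    As an illustration of one of the single-neuron models listed in this review, a minimal Euler integration of the FitzHugh-Nagumo model; the parameter values are standard textbook choices rather than anything specific to the article.

        import numpy as np

        def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, T=200.0, dt=0.01):
            """Euler integration of dv/dt = v - v^3/3 - w + I, dw/dt = eps*(v + a - b*w)."""
            steps = int(T / dt)
            v, w = -1.0, 1.0
            trace = np.empty(steps)
            for i in range(steps):
                dv = v - v ** 3 / 3.0 - w + I
                dw = eps * (v + a - b * w)
                v, w = v + dt * dv, w + dt * dw
                trace[i] = v
            return trace

        v_trace = fitzhugh_nagumo()
        spikes = int(np.sum((v_trace[1:] > 1.0) & (v_trace[:-1] <= 1.0)))
        print("number of spikes (upward crossings of v = 1):", spikes)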

  20. Relationship between inter-stimulus-intervals and intervals of autonomous activities in a neuronal network.

    PubMed

    Ito, Hidekatsu; Minoshima, Wataru; Kudoh, Suguru N

    2015-08-01

    To investigate the relationship between neuronal network activity and electrical stimulation, we analyzed autonomous activity before and after electrical stimuli. Recordings of autonomous activity were performed using dissociated cultures of rat hippocampal neurons on a multi-electrode array (MEA) dish. Single and paired stimuli were applied to a cultured neuronal network: a single stimulus was applied every 1 min, and paired stimulation consisted of two sequential stimuli delivered every 1 min. The patterns of synchronized activity of the neuronal network changed after stimulation. In particular, long-range synchronous activities were induced by paired stimuli. When paired stimuli with inter-stimulus intervals (ISIs) of 1 s and 1.5 s were applied to a neuronal network, relatively long-range synchronous activities appeared in the 1.5 s ISI case. The temporal synchronous activity of the neuronal network thus changed according to the ISI of the electrical stimulus. In other words, a dissociated neuronal network can maintain given information as a temporal pattern, and a certain type of information-maintenance mechanism appears to be implemented in a semi-artificial dissociated neuronal network. These results are useful for developing technologies to manipulate neuronal activity in brain systems.

  1. Phase-locking and bistability in neuronal networks with synaptic depression

    NASA Astrophysics Data System (ADS)

    Akcay, Zeynep; Huang, Xinxian; Nadim, Farzan; Bose, Amitabha

    2018-02-01

    We consider a recurrent network of two oscillatory neurons that are coupled with inhibitory synapses. We use the phase response curves of the neurons and the properties of short-term synaptic depression to define Poincaré maps for the activity of the network. The fixed points of these maps correspond to phase-locked modes of the network. Using these maps, we analyze the conditions that allow short-term synaptic depression to lead to the existence of bistable phase-locked, periodic solutions. We show that bistability arises when either the phase response curve of the neuron or the short-term depression profile changes steeply enough. The results apply to any Type I oscillator and we illustrate our findings using the Quadratic Integrate-and-Fire and Morris-Lecar neuron models.
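
    A minimal sketch of the general procedure of reading phase-locked modes off a one-dimensional Poincaré map: iterate the map from many initial phases and collect the stable fixed points. The toy map F below is chosen only to exhibit bistability (two stable fixed points) and is not the map derived in the paper from phase response curves and short-term depression.

        import numpy as np

        def F(phi):
            """Toy circle map with two attracting fixed points on [0, 1)."""
            return (phi + 0.08 * np.sin(4.0 * np.pi * phi)) % 1.0

        locked_phases = set()
        for phi0 in np.linspace(0.01, 0.99, 50):
            phi = phi0
            for _ in range(500):                  # iterate until the relative phase settles
                phi = F(phi)
            locked_phases.add(round(float(phi), 3))

        print("stable phase-locked solutions (relative phases):", sorted(locked_phases))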

  2. Nonlinear multiplicative dendritic integration in neuron and network models

    PubMed Central

    Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si

    2013-01-01

    Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can heavily depend on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still only describe the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. We here suggest a new biophysically motivated derivation of a single compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or “shunts” the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
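
    A minimal sketch contrasting linear summation with a multiplicative (shunting) integration rule in a single rate unit; the specific form E*(1 - k*I) and the value of k are illustrative assumptions, not the rule derived in the paper.

        import numpy as np

        def rate(drive, threshold=0.0):
            """Simple threshold-linear output nonlinearity."""
            return np.maximum(drive - threshold, 0.0)

        def linear_integration(E, I):
            return rate(E - I)                                   # inhibition subtracts a fixed amount

        def shunting_integration(E, I, k=0.8):
            return rate(E * np.clip(1.0 - k * I, 0.0, None))     # inhibitory effect scales with E

        E = np.linspace(0.0, 2.0, 5)               # excitatory drives
        I = 0.5                                    # fixed inhibitory input
        print("E:        ", E)
        print("linear:   ", linear_integration(E, I))
        print("shunting: ", shunting_integration(E, I))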

  3. Intrinsic protective mechanisms of the neuron-glia network against glioma invasion.

    PubMed

    Iwadate, Yasuo; Fukuda, Kazumasa; Matsutani, Tomoo; Saeki, Naokatsu

    2016-04-01

    Gliomas arising in the brain parenchyma infiltrate into the surrounding brain and break down established complex neuron-glia networks. However, mounting evidence suggests that initially the network microenvironment of the adult central nervous system (CNS) is innately non-permissive to glioma cell invasion. The main players are inhibitory molecules in CNS myelin, as well as proteoglycans associated with astrocytes. Neural stem cells, and neurons themselves, possess inhibitory functions against neighboring tumor cells. These mechanisms have evolved to protect the established neuron-glia network, which is necessary for brain function. Greater insight into the interaction between glioma cells and the surrounding neuron-glia network is crucial for developing new therapies for treating these devastating tumors while preserving the important and complex neural functions of patients.

  4. Temporal coding in a silicon network of integrate-and-fire neurons.

    PubMed

    Liu, Shih-Chii; Douglas, Rodney

    2004-09-01

    Spatio-temporal processing of spike trains by neuronal networks depends on a variety of mechanisms distributed across synapses, dendrites, and somata. In natural systems, the spike trains and the processing mechanisms cohere through their common physical instantiation. This coherence is lost when the natural system is encoded for simulation on a general purpose computer. By contrast, analog VLSI circuits are, like neurons, inherently related by their real-time physics, and so, could provide a useful substrate for exploring neuronlike event-based processing. Here, we describe a hybrid analog-digital VLSI chip comprising a set of integrate-and-fire neurons and short-term dynamical synapses that can be configured into simple network architectures with some properties of neocortical neuronal circuits. We show that, despite considerable fabrication variance in the properties of individual neurons, the chip offers a viable substrate for exploring real-time spike-based processing in networks of neurons.

  5. The frequency preference of neurons and synapses in a recurrent oscillatory network.

    PubMed

    Tseng, Hua-an; Martinez, Diana; Nadim, Farzan

    2014-09-17

    A variety of neurons and synapses shows a maximal response at a preferred frequency, generally considered to be important in shaping network activity. We are interested in whether all neurons and synapses in a recurrent oscillatory network can have preferred frequencies and, if so, whether these frequencies are the same or correlated, and whether they influence the network activity. We address this question using identified neurons in the pyloric network of the crab Cancer borealis. Previous work has shown that the pyloric pacemaker neurons exhibit membrane potential resonance whose resonance frequency is correlated with the network frequency. The follower lateral pyloric (LP) neuron makes reciprocally inhibitory synapses with the pacemakers. We find that LP shows resonance at a higher frequency than the pacemakers and the network frequency falls between the two. We also find that the reciprocal synapses between the pacemakers and LP have preferred frequencies but at significantly lower values. The preferred frequency of the LP to pacemaker synapse is correlated with the presynaptic preferred frequency, which is most pronounced when the peak voltage of the LP waveform is within the dynamic range of the synaptic activation curve and a shift in the activation curve by the modulatory neuropeptide proctolin shifts the frequency preference. Proctolin also changes the power of the LP neuron resonance without significantly changing the resonance frequency. These results indicate that different neuron types and synapses in a network may have distinct preferred frequencies, which are subject to neuromodulation and may interact to shape network oscillations.
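
    A minimal sketch of how a membrane resonance ("preferred") frequency is read out: drive a simple linear resonant membrane with sinusoids of varying frequency and locate the frequency of maximal voltage response. The model and its parameters are generic illustrations, not the pyloric-circuit data.

        import numpy as np

        def response_amplitude(freq_hz, C=1.0, g_L=0.1, g_w=0.5, tau_w=100.0,
                               amp=0.1, T=2000.0, dt=0.1):
            """Voltage response amplitude of a linear resonant membrane driven at freq_hz.
            C dV/dt = -g_L*V - g_w*w + I(t),  tau_w dw/dt = V - w  (time in ms)."""
            steps = int(T / dt)
            t = np.arange(steps) * dt
            I = amp * np.sin(2.0 * np.pi * freq_hz * t / 1000.0)
            V = w = 0.0
            vs = np.empty(steps)
            for i in range(steps):
                dV = (-g_L * V - g_w * w + I[i]) / C
                dw = (V - w) / tau_w
                V, w = V + dt * dV, w + dt * dw
                vs[i] = V
            return np.ptp(vs[steps // 2:]) / 2.0        # amplitude after the transient

        freqs = np.arange(1.0, 20.5, 1.0)               # test frequencies in Hz
        amps = np.array([response_amplitude(f) for f in freqs])
        print("preferred (resonance) frequency:", freqs[np.argmax(amps)], "Hz")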

  6. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity

    PubMed Central

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols in a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852

  7. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity.

    PubMed

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols in a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows.

  8. Method Accelerates Training Of Some Neural Networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.

    1992-01-01

    Three-layer networks can be trained faster provided two conditions are satisfied: the numbers of neurons in the layers are such that the majority of the work is done in the synaptic connections between the input and hidden layers, and the number of neurons in the input layer is at least as great as the number of training pairs of input and output vectors. The method is based on a modified version of the back-propagation method.

  9. Spike Code Flow in Cultured Neuronal Networks.

    PubMed

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes "1101" and "1011," which are typical pseudorandom sequences of the kind we often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, these correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
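
    A minimal sketch of the "maximum cross-correlation" idea: bin two spike trains, compute their normalized cross-correlation over a range of lags, and take the peak value and lag as a measure of propagation between neighboring electrodes. The synthetic trains below (one a delayed copy of the other plus unrelated spikes) are illustrative, not recorded data.

        import numpy as np

        rng = np.random.default_rng(4)
        n_bins, delay = 5000, 7
        a = (rng.random(n_bins) < 0.03).astype(float)                 # "source" electrode spike train
        b = np.roll(a, delay)                                         # delayed copy on a neighbour
        b = np.maximum(b, (rng.random(n_bins) < 0.01).astype(float))  # plus unrelated spikes

        def max_norm_xcorr(x, y, max_lag=20):
            """Peak of the normalized cross-correlation; positive lag means y follows x."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            lags = list(range(-max_lag, max_lag + 1))
            corr = [np.dot(x, np.roll(y, -lag)) / len(x) for lag in lags]
            k = int(np.argmax(corr))
            return corr[k], lags[k]

        peak, lag = max_norm_xcorr(a, b)
        print(f"maximum normalized cross-correlation {peak:.2f} at lag {lag} bins")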

  10. A microfluidic platform for controlled biochemical stimulation of twin neuronal networks.

    PubMed

    Biffi, Emilia; Piraino, Francesco; Pedrocchi, Alessandra; Fiore, Gianfranco B; Ferrigno, Giancarlo; Redaelli, Alberto; Menegon, Andrea; Rasponi, Marco

    2012-06-01

    Spatially and temporally resolved delivery of soluble factors is a key feature for pharmacological applications. In this framework, microfluidics coupled to multisite electrophysiology offers great advantages in neuropharmacology and toxicology. In this work, a microfluidic device for biochemical stimulation of neuronal networks was developed. A micro-chamber for cell culturing, previously developed and tested for long term neuronal growth by our group, was provided with a thin wall, which partially divided the cell culture region into two sub-compartments. The device was reversibly coupled to a flat micro electrode array and used to culture primary neurons in the same microenvironment. We demonstrated that the two fluidically connected compartments were able to originate two parallel neuronal networks with similar electrophysiological activity but functionally independent. Furthermore, the device allowed the outlet port to be connected to a syringe pump, transforming the static culture chamber into a perfused one. At 14 days in vitro, sub-networks were independently stimulated with a test molecule, tetrodotoxin, a neurotoxin known to block action potentials, by means of continuous delivery. Electrical activity recordings proved the ability of the device configuration to selectively stimulate each neuronal network individually. The proposed microfluidic approach represents an innovative methodology to perform biological, pharmacological, and electrophysiological experiments on neuronal networks. Indeed, it allows for controlled delivery of substances to cells, and it overcomes the limitations of standard drug stimulation techniques. Finally, the twin network configuration reduces biological variability, which has important implications for pharmacological and drug screening.

  11. Extracting neuronal functional network dynamics via adaptive Granger causality analysis.

    PubMed

    Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash

    2018-04-24

    Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
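
    A minimal sketch of the classical (static, linear) Granger idea that this paper generalizes: x "Granger-causes" y if adding x's past to an autoregressive model of y reduces the prediction-error variance. This plain least-squares version is not the adaptive, sparse point-process estimator developed in the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        T, order = 2000, 2
        x = rng.normal(size=T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.3 * rng.normal()   # x drives y

        def ar_residual_var(target, regressors, order):
            """Least-squares fit of target[t] on the last `order` values of each regressor."""
            rows, rhs = [], []
            for t in range(order, len(target)):
                rows.append(np.concatenate([r[t - order:t][::-1] for r in regressors]))
                rhs.append(target[t])
            X, z = np.array(rows), np.array(rhs)
            coef, *_ = np.linalg.lstsq(X, z, rcond=None)
            return np.var(z - X @ coef)

        restricted = ar_residual_var(y, [y], order)       # y's own past only
        full = ar_residual_var(y, [y, x], order)          # y's and x's past
        print("Granger causality x -> y:", np.log(restricted / full))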

  12. Neuronal and network computation in the brain

    NASA Astrophysics Data System (ADS)

    Babloyantz, A.

    1999-03-01

    The concepts and methods of non-linear dynamics have been a powerful tool for studying some aspects of brain dynamics. In this paper we show how, from time series analysis of electroencephalograms in sick and healthy subjects, the chaotic nature of brain activity could be unveiled. This finding gave rise to the concept of spatiotemporal cortical chaotic networks, which in turn was the foundation for a simple brain-like device that is able to become attentive and perform pattern recognition and motion detection. A new method of time series analysis is also proposed which demonstrates for the first time the existence of a neuronal code in the interspike intervals of cochlear cells.

  13. Dynamics of moment neuronal networks.

    PubMed

    Feng, Jianfeng; Deng, Yingchun; Rossoni, Enrico

    2006-04-01

    A theoretical framework is developed for moment neuronal networks (MNNs). Within this framework, the behavior of the system of spiking neurons is specified in terms of the first- and second-order statistics of their interspike intervals, i.e., the mean, the variance, and the cross correlations of spike activity. Since neurons emit and receive spike trains which can be described by renewal--but generally non-Poisson--processes, we first derive a suitable diffusion-type approximation of such processes. Two approximation schemes are introduced: the usual approximation scheme (UAS) and the Ornstein-Uhlenbeck scheme. It is found that both schemes approximate well the input-output characteristics of spiking models such as the IF and the Hodgkin-Huxley models. The MNN framework is then developed according to the UAS scheme, and its predictions are tested on a few examples.

  14. Training a Network of Electronic Neurons for Control of a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated, and results are presented of an experiment with a mobile robot driving completely autonomously in an unknown environment with obstacles.

  15. Sustained synchronized neuronal network activity in a human astrocyte co-culture system

    PubMed Central

    Kuijlaars, Jacobine; Oyelami, Tutu; Diels, Annick; Rohrbacher, Jutta; Versweyveld, Sofie; Meneghello, Giulia; Tuefferd, Marianne; Verstraelen, Peter; Detrez, Jan R.; Verschuuren, Marlies; De Vos, Winnok H.; Meert, Theo; Peeters, Pieter J.; Cik, Miroslav; Nuydens, Rony; Brône, Bert; Verheyen, An

    2016-01-01

    Impaired neuronal network function is a hallmark of neurodevelopmental and neurodegenerative disorders such as autism, schizophrenia, and Alzheimer’s disease and is typically studied using genetically modified cellular and animal models. The weak predictive capacity and poor translational value of these models call for better human-derived in vitro models. The implementation of human induced pluripotent stem cells (hiPSCs) allows studying pathologies in differentiated disease-relevant and patient-derived neuronal cells. However, the differentiation process and growth conditions of hiPSC-derived neurons are non-trivial. In order to study neuronal network formation and (mal)function in a fully humanized system, we have established an in vitro co-culture model of hiPSC-derived cortical neurons and human primary astrocytes that recapitulates neuronal network synchronization and connectivity within three to four weeks after final plating. Live cell calcium imaging, electrophysiology and high content image analyses revealed an increased maturation of network functionality and synchronicity over time for co-cultures compared to neuronal monocultures. The cells express GABAergic and glutamatergic markers and respond to inhibitors of both neurotransmitter pathways in a functional assay. The combination of this co-culture model with quantitative imaging of network morphofunction is amenable to high throughput screening for lead discovery and drug optimization for neurological diseases. PMID:27819315

  16. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation

    PubMed Central

    Bayati, Mehdi; Valizadeh, Alireza; Abbassian, Abdolhossein; Cheng, Sen

    2015-01-01

    Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity propagates synchronously through the network. This activity originates from the site of the local excitation and propagates through the network. The synchronous activity propagation persists, even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence. PMID:26089794
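
    A minimal sketch of the pairwise STDP rule underlying models such as the one above: potentiation when the presynaptic spike precedes the postsynaptic spike and depression otherwise, with exponential dependence on the spike-time difference; amplitudes and time constants are generic illustrative values, not those of the paper.

        import numpy as np

        def stdp_update(dt_ms, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            """Weight change for a pre/post spike pair; dt_ms = t_post - t_pre."""
            if dt_ms > 0:                                    # pre before post -> potentiation
                return A_plus * np.exp(-dt_ms / tau_plus)
            else:                                            # post before pre -> depression
                return -A_minus * np.exp(dt_ms / tau_minus)

        w = 0.5
        for dt in [5.0, 15.0, -5.0, 40.0, -30.0]:            # a few example spike pairings
            w = np.clip(w + stdp_update(dt), 0.0, 1.0)       # keep the weight bounded
            print(f"dt = {dt:+5.1f} ms -> w = {w:.4f}")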

  17. Reliability and synchronization in a delay-coupled neuronal network with synaptic plasticity

    NASA Astrophysics Data System (ADS)

    Pérez, Toni; Uchida, Atsushi

    2011-06-01

    We investigate the characteristics of reliability and synchronization of a neuronal network of delay-coupled integrate and fire neurons. Reliability and synchronization appear in separated regions of the phase space of the parameters considered. The effect of including synaptic plasticity and different delay values between the connections are also considered. We found that plasticity strongly changes the characteristics of reliability and synchronization in the parameter space of the coupling strength and the drive amplitude for the neuronal network. We also found that delay does not affect the reliability of the network but has a determinant influence on the synchronization of the neurons.

  18. Cluster synchronization in networks of neurons with chemical synapses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juang, Jonq, E-mail: jjuang@math.nctu.edu.tw; Liang, Yu-Hao, E-mail: moonsea.am96g@g2.nctu.edu.tw

    2014-03-15

    In this work, we study the cluster synchronization of chemically coupled and generally formulated networks which are allowed to be nonidentical. A sufficient condition for the existence of stably synchronous clusters is derived. Specifically, we only need to check the stability of the origins of m decoupled linear systems, where m is the number of subpopulations. Examples of nonidentical networks, such as Hindmarsh-Rose (HR) neurons with various choices of parameters in different subpopulations, or HR neurons in one subpopulation and FitzHugh-Nagumo neurons in the other subpopulation, are provided. An explicit threshold for the coupling strength that guarantees stable cluster synchronization can be obtained.

  19. Qualitative validation of the reduction from two reciprocally coupled neurons to one self-coupled neuron in a respiratory network model.

    PubMed

    Dunmyre, Justin R

    2011-06-01

    The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.

  20. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
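
    A minimal sketch of the general idea of sampling with stochastic binary units whose update probability is a sigmoid of their synaptic input, which samples from a Boltzmann distribution; note that this is a plain Gibbs-style illustration of the concept, not the non-reversible, refractoriness-aware Markov chain constructed in the paper.

        import numpy as np

        rng = np.random.default_rng(6)
        W = np.array([[0.0, 1.5], [1.5, 0.0]])        # symmetric coupling, zero self-coupling
        b = np.array([-0.5, -0.5])                    # biases
        z = np.zeros(2)                               # current binary state of the two units

        def sigmoid(u):
            return 1.0 / (1.0 + np.exp(-u))

        samples = []
        for step in range(20000):
            k = step % 2                                           # update the units in turn
            z[k] = float(rng.random() < sigmoid(W[k] @ z + b[k]))  # stochastic "firing" decision
            samples.append(z.copy())
        samples = np.array(samples[2000:])                         # discard burn-in

        # Compare empirical state frequencies with the target p(z) ~ exp(0.5*z'Wz + b'z).
        states = [np.array(s) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]]
        log_p = np.array([0.5 * s @ W @ s + b @ s for s in states])
        target = np.exp(log_p) / np.exp(log_p).sum()
        empirical = np.array([np.mean(np.all(samples == s, axis=1)) for s in states])
        print("target   :", target.round(3))
        print("empirical:", empirical.round(3))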

  1. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    NASA Astrophysics Data System (ADS)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  2. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially indicated for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.

  3. On the continuous differentiability of inter-spike intervals of synaptically connected cortical spiking neurons in a neuronal network.

    PubMed

    Kumar, Gautam; Kothare, Mayuresh V

    2013-12-01

    We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.

  4. Short-term memory in networks of dissociated cortical neurons.

    PubMed

    Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J

    2013-01-30

    Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.

  5. Barreloid Borders and Neuronal Activity Shape Panglial Gap Junction-Coupled Networks in the Mouse Thalamus.

    PubMed

    Claus, Lena; Philippot, Camille; Griemsmann, Stephanie; Timmermann, Aline; Jabs, Ronald; Henneberger, Christian; Kettenmann, Helmut; Steinhäuser, Christian

    2018-01-01

    The ventral posterior nucleus of the thalamus plays an important role in somatosensory information processing. It contains elongated cellular domains called barreloids, which are the structural basis for the somatotopic organization of vibrissae representation. So far, the organization of glial networks in these barreloid structures and its modulation by neuronal activity has not been studied. We have developed a method to visualize thalamic barreloid fields in acute slices. Combining electrophysiology, immunohistochemistry, and electroporation in transgenic mice with cell type-specific fluorescence labeling, we provide the first structure-function analyses of barreloidal glial gap junction networks. We observed coupled networks, which comprised both astrocytes and oligodendrocytes. The spread of tracers or a fluorescent glucose derivative through these networks was dependent on neuronal activity and limited by the barreloid borders, which were formed by uncoupled or weakly coupled oligodendrocytes. Neuronal somata were distributed homogeneously across barreloid fields with their processes running in parallel to the barreloid borders. Many astrocytes and oligodendrocytes were not part of the panglial networks. Thus, oligodendrocytes are the cellular elements limiting the communicating panglial network to a single barreloid, which might be important to ensure proper metabolic support to active neurons located within a particular vibrissae signaling pathway.

  6. Macroscopic self-oscillations and aging transition in a network of synaptically coupled quadratic integrate-and-fire neurons.

    PubMed

    Ratas, Irmantas; Pyragas, Kestutis

    2016-09-01

    We analyze the dynamics of a large network of coupled quadratic integrate-and-fire neurons, which represent the canonical model for class I neurons near the spiking threshold. The network is heterogeneous in that it includes both inherently spiking and excitable neurons. The coupling is global via synapses that take into account the finite width of synaptic pulses. Using a recently developed reduction method based on the Lorentzian ansatz, we derive a closed system of equations for the neuron's firing rate and the mean membrane potential, which are exact in the infinite-size limit. The bifurcation analysis of the reduced equations reveals a rich scenario of asymptotic behavior, the most interesting of which is the macroscopic limit-cycle oscillations. It is shown that the finite width of synaptic pulses is a necessary condition for the existence of such oscillations. The robustness of the oscillations against aging damage, which transforms spiking neurons into nonspiking neurons, is analyzed. The validity of the reduced equations is confirmed by comparing their solutions with the solutions of microscopic equations for the finite-size networks.
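
    A minimal sketch of the exact mean-field ("Lorentzian ansatz") reduction for globally coupled quadratic integrate-and-fire neurons, written here in its simplest form with instantaneous synaptic pulses; the paper's analysis keeps a finite pulse width, which it shows is necessary for the macroscopic oscillations, so the sketch below is expected to settle to a fixed point. Parameter values are illustrative.

        import numpy as np

        def mean_field(eta_bar=-5.0, J=15.0, Delta=1.0, T=40.0, dt=1e-3):
            """Euler integration of the reduced equations
            dr/dt = Delta/pi + 2*r*v,   dv/dt = v**2 + eta_bar + J*r - (pi*r)**2."""
            r, v = 0.1, -2.0
            steps = int(T / dt)
            trace = np.empty((steps, 2))
            for i in range(steps):
                dr = Delta / np.pi + 2.0 * r * v
                dv = v * v + eta_bar + J * r - (np.pi * r) ** 2
                r, v = r + dt * dr, v + dt * dv
                trace[i] = r, v
            return trace

        trace = mean_field()
        print("late-time firing rate r and mean potential v:", trace[-1].round(3))
        # With instantaneous pulses the macroscopic activity settles to a fixed point;
        # the std of r over the second half of the run should therefore be ~0.
        print("std of r over the second half:", trace[len(trace) // 2:, 0].std().round(5))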

  7. Mechano-sensitization of mammalian neuronal networks through expression of the bacterial large-conductance mechanosensitive ion channel

    PubMed Central

    Contestabile, Andrea; Moroni, Monica; Hallinan, Grace I.; Palazzolo, Gemma; Chad, John; Deinhardt, Katrin; Carugo, Dario

    2018-01-01

    Development of remote stimulation techniques for neuronal tissues represents a challenging goal. Among the potential methods, mechanical stimuli are the most promising vectors to convey information non-invasively into intact brain tissue. In this context, selective mechano-sensitization of neuronal circuits would pave the way to develop a new cell-type-specific stimulation approach. We report here, for the first time, the development and characterization of mechano-sensitized neuronal networks through the heterologous expression of an engineered bacterial large-conductance mechanosensitive ion channel (MscL). The neuronal functional expression of the MscL was validated through patch-clamp recordings upon application of calibrated suction pressures. Moreover, we verified the effective development of in-vitro neuronal networks expressing the engineered MscL in terms of cell survival, number of synaptic puncta and spontaneous network activity. The pure mechanosensitivity of the engineered MscL, with its wide genetic modification library, may represent a versatile tool to further develop a mechano-genetic approach. PMID:29361543

  8. Complexity in neuronal noise depends on network interconnectivity.

    PubMed

    Serletis, Demitre; Zalay, Osbert C; Valiante, Taufik A; Bardakjian, Berj L; Carlen, Peter L

    2011-06-01

    "Noise," or noise-like activity (NLA), defines background electrical membrane potential fluctuations at the cellular level of the nervous system, comprising an important aspect of brain dynamics. Using whole-cell voltage recordings from fast-spiking stratum oriens interneurons and stratum pyramidale neurons located in the CA3 region of the intact mouse hippocampus, we applied complexity measures from dynamical systems theory (i.e., 1/f(γ) noise and correlation dimension) and found evidence for complexity in neuronal NLA, ranging from high- to low-complexity dynamics. Importantly, these high- and low-complexity signal features were largely dependent on gap junction and chemical synaptic transmission. Progressive neuronal isolation from the surrounding local network via gap junction blockade (abolishing gap junction-dependent spikelets) and then chemical synaptic blockade (abolishing excitatory and inhibitory post-synaptic potentials), or the reverse order of these treatments, resulted in emergence of high-complexity NLA dynamics. Restoring local network interconnectivity via blockade washout resulted in resolution to low-complexity behavior. These results suggest that the observed increase in background NLA complexity is the result of reduced network interconnectivity, thereby highlighting the potential importance of the NLA signal to the study of network state transitions arising in normal and abnormal brain dynamics (such as in epilepsy, for example).

  9. Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators.

    PubMed

    Xu, Kesheng; Maidana, Jean Paul; Castro, Samy; Orio, Patricio

    2018-05-30

    Chaotic dynamics has been shown in the dynamics of neurons and neural networks, in experimental data and numerical simulations. Theoretical studies have proposed an underlying role of chaos in neural systems. Nevertheless, whether chaotic neural oscillators make a significant contribution to network behaviour and whether the dynamical richness of neural networks is sensitive to the dynamics of isolated neurons still remain open questions. We investigated synchronization transitions in heterogeneous neural networks of neurons connected by electrical coupling in a small world topology. The nodes in our model are oscillatory neurons that - when isolated - can exhibit either chaotic or non-chaotic behaviour, depending on conductance parameters. We found that the heterogeneity of firing rates and firing patterns makes a greater contribution than chaos to the steepness of the synchronization transition curve. We also show that chaotic dynamics of the isolated neurons do not always make a visible difference in the transition to full synchrony. Moreover, macroscopic chaos is observed regardless of the dynamical nature of the neurons. However, performing a Functional Connectivity Dynamics analysis, we show that chaotic nodes can promote what is known as multi-stable behaviour, where the network dynamically switches between a number of different semi-synchronized, metastable states.

  10. Dynamic range in small-world networks of Hodgkin-Huxley neurons with chemical synapses

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Viana, R. L.; Lopes, S. R.; Batista, A. M.

    2014-09-01

    According to Stevens' law the relationship between stimulus and response is a power-law within an interval called the dynamic range. The dynamic range of sensory organs is found to be larger than that of a single neuron, suggesting that the network structure plays a key role in the behavior of both the scaling exponent and the dynamic range of neuron assemblies. In order to verify computationally the relationships between stimulus and response for spiking neurons, we investigate small-world networks of neurons described by the Hodgkin-Huxley equations connected by chemical synapses. We found that the dynamic range increases with the network size, suggesting that the enhancement of the dynamic range observed in sensory organs, with respect to single neurons, is an emergent property of complex network dynamics.
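
    A minimal sketch of how a dynamic range is computed from a stimulus-response curve: locate the stimulus intensities producing 10% and 90% of the response range and take Delta = 10*log10(s_90/s_10); the saturating response curve used here is synthetic, not Hodgkin-Huxley network output.

        import numpy as np

        stimulus = np.logspace(-4, 1, 200)                 # stimulus intensities (a.u.)
        response = stimulus**0.5 / (stimulus**0.5 + 0.1)   # synthetic saturating response curve

        def dynamic_range(s, F):
            """Dynamic range in dB from a monotonically increasing response curve F(s)."""
            F0, Fmax = F.min(), F.max()
            F10 = F0 + 0.1 * (Fmax - F0)
            F90 = F0 + 0.9 * (Fmax - F0)
            s10 = np.interp(F10, F, s)                     # stimulus at 10% of the range
            s90 = np.interp(F90, F, s)                     # stimulus at 90% of the range
            return 10.0 * np.log10(s90 / s10)

        print(f"dynamic range: {dynamic_range(stimulus, response):.1f} dB")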

  11. Networks within networks: The neuronal control of breathing

    PubMed Central

    Garcia, Alfredo J.; Zanella, Sebastien; Koch, Henner; Doi, Atsushi; Ramirez, Jan-Marino

    2013-01-01

    Breathing emerges through complex network interactions involving neurons distributed throughout the nervous system. The respiratory rhythm generating network is composed of micro networks functioning within larger networks to generate distinct rhythms and patterns that characterize breathing. The pre-Bötzinger complex, a rhythm generating network located within the ventrolateral medulla assumes a core function without which respiratory rhythm generation and breathing cease altogether. It contains subnetworks with distinct synaptic and intrinsic membrane properties that give rise to different types of respiratory rhythmic activities including eupneic, sigh, and gasping activities. While critical aspects of these rhythmic activities are preserved when isolated in in vitro preparations, the pre-Bötzinger complex functions in the behaving animal as part of a larger network that receives important inputs from areas such as the pons and parafacial nucleus. The respiratory network is also an integrator of modulatory and sensory inputs that imbue the network with the important ability to adapt to changes in the behavioral, metabolic, and developmental conditions of the organism. This review summarizes our current understanding of these interactions and relates the emerging concepts to insights gained in other rhythm generating networks. PMID:21333801

  12. Matrix stiffness modulates formation and activity of neuronal networks of controlled architectures.

    PubMed

    Lantoine, Joséphine; Grevesse, Thomas; Villers, Agnès; Delhaye, Geoffrey; Mestdagh, Camille; Versaevel, Marie; Mohammed, Danahe; Bruyère, Céline; Alaimo, Laura; Lacour, Stéphanie P; Ris, Laurence; Gabriele, Sylvain

    2016-05-01

    The ability to easily construct in vitro networks of primary neurons organized with imposed topologies is required for neural tissue engineering, as well as for the development of neuronal interfaces with desirable characteristics. However, accumulating evidence suggests that the mechanical properties of the culture matrix can modulate important neuronal functions such as growth, extension, branching and activity. Here we designed robust and reproducible laminin-polylysine grid micropatterns on cell culture substrates that have similar biochemical properties but a 100-fold difference in Young's modulus to investigate the role of matrix rigidity in the formation and activity of cortical neuronal networks. We found that cell bodies of primary cortical neurons gradually accumulate in circular islands, whereas axonal extensions spread on linear tracks to connect circular islands. Our findings indicate that migration of cortical neurons is enhanced on soft substrates, leading to a faster formation of neuronal networks. Furthermore, the pre-synaptic density was two times higher on stiff substrates, and consistently the numbers of action potentials and miniature synaptic currents were enhanced on stiff substrates. Taken together, our results provide compelling evidence that matrix stiffness is a key parameter for modulating the growth dynamics, synaptic density and electrophysiological activity of cortical neuronal networks, thus providing useful information on scaffold design for neural tissue engineering. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Establishment of a Human Neuronal Network Assessment System by Using a Human Neuron/Astrocyte Co-Culture Derived from Fetal Neural Stem/Progenitor Cells.

    PubMed

    Fukushima, Kazuyuki; Miura, Yuji; Sawada, Kohei; Yamazaki, Kazuto; Ito, Masashi

    2016-01-01

    Using human cell models mimicking the central nervous system (CNS) provides a better understanding of the human CNS, and it is a key strategy to improve success rates in CNS drug development. In the CNS, neurons function as networks in which astrocytes play important roles. Thus, an assessment system of neuronal network functions in a co-culture of human neurons and astrocytes has potential to accelerate CNS drug development. We previously demonstrated that human hippocampus-derived neural stem/progenitor cells (HIP-009 cells) were a novel tool to obtain human neurons and astrocytes in the same culture. In this study, we applied HIP-009 cells to a multielectrode array (MEA) system to detect neuronal signals as neuronal network functions. We observed spontaneous firings of HIP-009 neurons, and validated functional formation of neuronal networks pharmacologically. By using this assay system, we investigated effects of several reference compounds, including agonists and antagonists of glutamate and γ-aminobutyric acid receptors, and sodium, potassium, and calcium channels, on neuronal network functions using firing and burst numbers, and synchrony as readouts. These results indicate that the HIP-009/MEA assay system is applicable to the pharmacological assessment of drug candidates affecting synaptic functions for CNS drug development. © 2015 Society for Laboratory Automation and Screening.

  14. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network

    PubMed Central

    Malvaut, Sarah; Saghatelyan, Armen

    2016-01-01

    The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions, plasticity is manifested by modifications in the efficacy of existing synaptic connections or by synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB), where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB, and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour. PMID:26839709

  15. Causal Interrogation of Neuronal Networks and Behavior through Virally Transduced Ivermectin Receptors.

    PubMed

    Obenhaus, Horst A; Rozov, Andrei; Bertocchi, Ilaria; Tang, Wannan; Kirsch, Joachim; Betz, Heinrich; Sprengel, Rolf

    2016-01-01

    The causal interrogation of neuronal networks involved in specific behaviors requires the spatially and temporally controlled modulation of neuronal activity. For long-term manipulation of neuronal activity, chemogenetic tools provide a reasonable alternative to short-term optogenetic approaches. Here we show that virus mediated gene transfer of the ivermectin (IVM) activated glycine receptor mutant GlyRα1 (AG) can be used for the selective and reversible silencing of specific neuronal networks in mice. In the striatum, dorsal hippocampus, and olfactory bulb, GlyRα1 (AG) promoted IVM dependent effects in representative behavioral assays. Moreover, GlyRα1 (AG) mediated silencing had a strong and reversible impact on neuronal ensemble activity and c-Fos activation in the olfactory bulb. Together our results demonstrate that long-term, reversible and re-inducible neuronal silencing via GlyRα1 (AG) is a promising tool for the interrogation of network mechanisms underlying the control of behavior and memory formation.

  16. Biological modelling of a computational spiking neural network with neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.
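
    The spike-timing-dependent plasticity rule used for updating synaptic weights in models of this kind is most often a pair-based exponential window; a minimal sketch with generic textbook parameters (not the values used in the paper) is given below.

        import numpy as np

        def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            # Pair-based STDP: dt_ms = t_post - t_pre.
            # Pre-before-post (dt_ms > 0) potentiates; post-before-pre depresses.
            if dt_ms >= 0:
                return a_plus * np.exp(-dt_ms / tau_plus)
            return -a_minus * np.exp(dt_ms / tau_minus)

        # Weight change for a range of spike-timing differences.
        for dt in (-40.0, -10.0, 10.0, 40.0):
            print(f"dt = {dt:+5.0f} ms  ->  dw = {float(stdp_dw(dt)):+.5f}")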

  17. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  18. Extracting functionally feedforward networks from a population of spiking neurons

    PubMed Central

    Vincent, Kathleen; Tauskela, Joseph S.; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABAA receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV/DNQX, consistent
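
    One plausible way to score the strength of functionally feedforward connections with a Schur decomposition, in the spirit of the approach summarized above, is to compare the strictly triangular (feedforward) part of the Schur form with the full matrix. The normalization below is an assumption for illustration, not necessarily the exact measure used by the authors.

        import numpy as np
        from scipy.linalg import schur

        def ffc_strength(W):
            # W = Q T Q^H with T upper triangular; the strictly upper part of T is the feedforward component.
            T, Q = schur(W, output='complex')
            return np.linalg.norm(np.triu(T, k=1)) / np.linalg.norm(W)

        rng = np.random.default_rng(0)
        n = 50
        chain = np.diag(np.ones(n - 1), k=1)                         # purely feedforward chain
        recurrent = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # dense random recurrent network
        print("feedforward chain:", round(ffc_strength(chain), 3))   # equals 1 for a pure chain
        print("random recurrent :", round(ffc_strength(recurrent), 3))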

  19. Extracting functionally feedforward networks from a population of spiking neurons.

    PubMed

    Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV

  20. Engineered 3D vascular and neuronal networks in a microfluidic platform.

    PubMed

    Osaki, Tatsuya; Sivathanu, Vivek; Kamm, Roger D

    2018-03-26

    Neurovascular coupling plays a key role in the pathogenesis of neurodegenerative disorders including motor neuron disease (MND). In vitro models provide an opportunity to understand the pathogenesis of MND, and offer the potential for drug screening. Here, we describe a new 3D microvascular and neuronal network model in a microfluidic platform to investigate interactions between these two systems. Both 3D networks were established by co-culturing human embryonic stem (ES)-derived MN spheroids and endothelial cells (ECs) in microfluidic devices. Co-culture with ECs improves neurite elongation and neuronal connectivity as measured by Ca2+ oscillation. This improvement was regulated not only by paracrine signals such as brain-derived neurotrophic factor secreted by ECs but also through direct cell-cell interactions via the delta-notch pathway, promoting neuron differentiation and neuroprotection. Bi-directional signaling was observed in that the neural networks also affected vascular network formation under perfusion culture. This in vitro model could enable investigations of neuro-vascular coupling, essential to understanding the pathogenesis of neurodegenerative diseases including MNDs such as amyotrophic lateral sclerosis.

  1. [Neuronal and synaptic properties: fundamentals of network plasticity].

    PubMed

    Le Masson, G

    2000-02-01

    Neurons, within the nervous system, are organized into different neural networks through synaptic connections. Two fundamental components interact dynamically in these functional units. The first is the neurons themselves: far from being simple action potential generators, they are capable of complex electrical integrative properties owing to the various types, numbers, distributions and modulation of voltage-gated ionic channels. The second is the synapses, where a similar complexity and plasticity is found. Identifying both cellular and synaptic intrinsic properties is necessary to understand the links between neural network behavior and physiological function, and is a useful step towards better control of neurological diseases.

  2. Novel transcriptional networks regulated by CLOCK in human neurons.

    PubMed

    Fontenot, Miles R; Berto, Stefano; Liu, Yuxiang; Werthmann, Gordon; Douglas, Connor; Usui, Noriyoshi; Gleason, Kelly; Tamminga, Carol A; Takahashi, Joseph S; Konopka, Genevieve

    2017-11-01

    The molecular mechanisms underlying human brain evolution are not fully understood; however, previous work suggested that expression of the transcription factor CLOCK in the human cortex might be relevant to human cognition and disease. In this study, we investigated this novel transcriptional role for CLOCK in human neurons by performing chromatin immunoprecipitation sequencing for endogenous CLOCK in adult neocortices and RNA sequencing following CLOCK knockdown in differentiated human neurons in vitro. These data suggested that CLOCK regulates the expression of genes involved in neuronal migration, and a functional assay showed that CLOCK knockdown increased neuronal migratory distance. Furthermore, dysregulation of CLOCK disrupts coexpressed networks of genes implicated in neuropsychiatric disorders, and the expression of these networks is driven by hub genes with human-specific patterns of expression. These data support a role for CLOCK-regulated transcriptional cascades involved in human brain evolution and function. © 2017 Fontenot et al.; Published by Cold Spring Harbor Laboratory Press.

  3. Human embryonic stem cell-derived neurons adopt and regulate the activity of an established neural network

    PubMed Central

    Weick, Jason P.; Liu, Yan; Zhang, Su-Chun

    2011-01-01

    Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298

  4. High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels

    PubMed Central

    Müller, Jan; Ballini, Marco; Livi, Paolo; Chen, Yihui; Radivojevic, Milos; Shadmani, Amir; Viswam, Vijay; Jones, Ian L.; Fiscella, Michele; Diggelmann, Roland; Stettler, Alexander; Frey, Urs; Bakkum, Douglas J.; Hierlemann, Andreas

    2017-01-01

    Studies on the information processing and learning properties of neuronal networks would benefit from simultaneous and parallel access to the activity of a large fraction of all neurons in such networks. Here, we present a CMOS-based device capable of simultaneously recording the electrical activity of over a thousand cells in in vitro neuronal networks. The device provides sufficiently high spatiotemporal resolution to enable, at the same time, access to neuronal preparations at the subcellular, cellular, and network levels. The key feature is a rapidly reconfigurable array of 26,400 microelectrodes arranged at low pitch (17.5 μm) within a large overall sensing area (3.85 × 2.10 mm2). An arbitrary subset of the electrodes can be simultaneously connected to 1024 low-noise readout channels as well as 32 stimulation units. Each electrode or electrode subset can be used to electrically stimulate or record the signals of virtually any neuron on the array. We demonstrate the applicability and potential of this device for various experimental paradigms: large-scale recordings from whole networks of neurons as well as investigations of axonal properties of individual neurons. PMID:25973786

  5. High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels.

    PubMed

    Müller, Jan; Ballini, Marco; Livi, Paolo; Chen, Yihui; Radivojevic, Milos; Shadmani, Amir; Viswam, Vijay; Jones, Ian L; Fiscella, Michele; Diggelmann, Roland; Stettler, Alexander; Frey, Urs; Bakkum, Douglas J; Hierlemann, Andreas

    2015-07-07

    Studies on the information processing and learning properties of neuronal networks would benefit from simultaneous and parallel access to the activity of a large fraction of all neurons in such networks. Here, we present a CMOS-based device capable of simultaneously recording the electrical activity of over a thousand cells in in vitro neuronal networks. The device provides sufficiently high spatiotemporal resolution to enable, at the same time, access to neuronal preparations at the subcellular, cellular, and network levels. The key feature is a rapidly reconfigurable array of 26,400 microelectrodes arranged at low pitch (17.5 μm) within a large overall sensing area (3.85 × 2.10 mm2). An arbitrary subset of the electrodes can be simultaneously connected to 1024 low-noise readout channels as well as 32 stimulation units. Each electrode or electrode subset can be used to electrically stimulate or record the signals of virtually any neuron on the array. We demonstrate the applicability and potential of this device for various experimental paradigms: large-scale recordings from whole networks of neurons as well as investigations of axonal properties of individual neurons.

  6. The formation mechanism of defects, spiral wave in the network of neurons.

    PubMed

    Wu, Xinyi; Ma, Jun

    2013-01-01

    A regular network of neurons is constructed using the Morris-Lecar (ML) neuron model with ion channels taken into account, and the potential mechanism for the formation of a spiral wave is investigated in detail. Several spiral waves are initiated by blocking a target wave with artificial defects and/or partial blocking (poisoning) of ion channels. Furthermore, possible conditions for spiral wave formation and the effect of partial channel blocking are discussed in full. Our results are summarized as follows. 1) The emergence of a target wave depends on the diversity of the transmembrane currents, which are mapped from the external forcing current; this kind of diversity is associated with spatial heterogeneity in the media. 2) A distinct spiral wave can be induced to occupy the network when the target wave is broken by partially blocking the ion channels of a fraction of neurons (a locally poisoned area), and these spiral waves are similar to the spiral waves induced by artificial defects. It is confirmed that partial channel blocking of some neurons in the network can play a similar role in breaking a target wave as artificial defects do. 3) Channel noise and additive Gaussian white noise are also considered, and it is confirmed that spiral waves are also induced in the network in the presence of noise. According to the results mentioned above, we conclude that appropriate poisoning of the ion channels of neurons in the network acts as 'defects' in the evolution of the spatiotemporal pattern, and accounts for the emergence of a spiral wave in the network of neurons. These results could be helpful in understanding the potential cause of the formation and development of spiral waves in the cortex of a neuronal system.

  7. The Formation Mechanism of Defects, Spiral Wave in the Network of Neurons

    PubMed Central

    Wu, Xinyi; Ma, Jun

    2013-01-01

    A regular network of neurons is constructed using the Morris-Lecar (ML) neuron model with ion channels taken into account, and the potential mechanism for the formation of a spiral wave is investigated in detail. Several spiral waves are initiated by blocking a target wave with artificial defects and/or partial blocking (poisoning) of ion channels. Furthermore, possible conditions for spiral wave formation and the effect of partial channel blocking are discussed in full. Our results are summarized as follows. 1) The emergence of a target wave depends on the diversity of the transmembrane currents, which are mapped from the external forcing current; this kind of diversity is associated with spatial heterogeneity in the media. 2) A distinct spiral wave can be induced to occupy the network when the target wave is broken by partially blocking the ion channels of a fraction of neurons (a locally poisoned area), and these spiral waves are similar to the spiral waves induced by artificial defects. It is confirmed that partial channel blocking of some neurons in the network can play a similar role in breaking a target wave as artificial defects do. 3) Channel noise and additive Gaussian white noise are also considered, and it is confirmed that spiral waves are also induced in the network in the presence of noise. According to the results mentioned above, we conclude that appropriate poisoning of the ion channels of neurons in the network acts as 'defects' in the evolution of the spatiotemporal pattern, and accounts for the emergence of a spiral wave in the network of neurons. These results could be helpful in understanding the potential cause of the formation and development of spiral waves in the cortex of a neuronal system. PMID:23383179

  8. Interplay between population firing stability and single neuron dynamics in hippocampal networks

    PubMed Central

    Slomowitz, Edden; Styr, Boaz; Vertkin, Irena; Milshtein-Parush, Hila; Nelken, Israel; Slutsky, Michael; Slutsky, Inna

    2015-01-01

    Neuronal circuits' ability to maintain the delicate balance between stability and flexibility in changing environments is critical for normal neuronal functioning. However, to what extent individual neurons and neuronal populations maintain internal firing properties remains largely unknown. In this study, we show that distributions of spontaneous population firing rates and synchrony are subject to accurate homeostatic control following increase of synaptic inhibition in cultured hippocampal networks. Reduction in firing rate triggered synaptic and intrinsic adaptive responses operating as global homeostatic mechanisms to maintain firing macro-stability, without achieving local homeostasis at the single-neuron level. Adaptive mechanisms, while stabilizing population firing properties, reduced short-term facilitation essential for synaptic discrimination of input patterns. Thus, invariant ongoing population dynamics emerge from intrinsically unstable activity patterns of individual neurons and synapses. The observed differences in the precision of homeostatic control at different spatial scales challenge cell-autonomous theory of network homeostasis and suggest the existence of network-wide regulation rules. DOI: http://dx.doi.org/10.7554/eLife.04378.001 PMID:25556699

  9. Inference of neuronal network spike dynamics and topology from calcium imaging data

    PubMed Central

    Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof

    2013-01-01

    Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence (“spike trains”) from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties. PMID:24399936
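
    A toy version of spike inference from a fluorescence trace, in the spirit of (but far simpler than) the peeling algorithm mentioned above, greedily subtracts a single-action-potential transient wherever the residual trace exceeds a threshold. The kernel shape, threshold, and greedy scheme are illustrative assumptions.

        import numpy as np

        def calcium_kernel(t, tau_decay=1.0, amp=1.0):
            # Single-AP fluorescence transient: instantaneous rise, exponential decay.
            return amp * np.exp(-np.maximum(t, 0.0) / tau_decay) * (t >= 0)

        def infer_spikes(trace, dt, tau_decay=1.0, amp=1.0, threshold=0.5, max_events=100):
            # Greedily "peel" one transient per iteration until nothing exceeds threshold*amp.
            t = np.arange(len(trace)) * dt
            residual = trace.astype(float)
            spikes = []
            for _ in range(max_events):
                if residual.max() <= threshold * amp:
                    break
                k = int(np.argmax(residual))
                spikes.append(t[k])
                residual -= calcium_kernel(t - t[k], tau_decay, amp)
            return np.sort(np.array(spikes))

        # Synthetic trace: two APs plus noise; the estimated spike times land near 2.0 and 6.0 s.
        dt = 0.05
        t = np.arange(0.0, 10.0, dt)
        rng = np.random.default_rng(1)
        trace = sum(calcium_kernel(t - s) for s in (2.0, 6.0)) + 0.05 * rng.normal(size=t.size)
        print(infer_spikes(trace, dt))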

  10. Collective behavior of networks with linear (VLSI) integrate-and-fire neurons.

    PubMed

    Fusi, S; Mattia, M

    1999-04-01

    We analyze in detail the statistical properties of the spike emission process of a canonical integrate-and-fire neuron, with a linear integrator and a lower bound for the depolarization, as often used in VLSI implementations (Mead, 1989). The spike statistics of such neurons appear to be qualitatively similar to conventional (exponential) integrate-and-fire neurons, which exhibit a wide variety of characteristics observed in cortical recordings. We also show that, contrary to current opinion, the dynamics of a network composed of such neurons has two stable fixed points, even in the purely excitatory network, corresponding to two different states of reverberating activity. The analytical results are compared with numerical simulations and are found to be in good agreement.

  11. Impact of Partial Time Delay on Temporal Dynamics of Watts-Strogatz Small-World Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Sun, Xiaojuan

    2017-06-01

    In this paper, we mainly discuss the effects of partial time delay on the temporal dynamics of Watts-Strogatz (WS) small-world neuronal networks by controlling two parameters: the time delay τ and the probability of partial time delay p_delay. The temporal dynamics of WS small-world neuronal networks are characterized with the aid of temporal coherence and mean firing rate. The simulation results reveal that, for small time delay τ, the probability p_delay can weaken temporal coherence and increase the mean firing rate of the neuronal networks, which indicates that it can enhance neuronal firing while destroying firing regularity. For large time delay τ, temporal coherence and mean firing rate do not change greatly with respect to p_delay. The time delay τ always has a strong influence on both temporal coherence and mean firing rate, regardless of the value of p_delay. Moreover, an analysis of spike trains and histograms of interspike intervals of neurons inside the neuronal networks shows that the effects of partial time delay on temporal coherence and mean firing rate may result from locking between the period of neuronal firing activity and the value of the time delay τ. In brief, partial time delay can have a strong influence on the temporal dynamics of the neuronal networks.
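
    The two control parameters described above can be made concrete in a few lines: build a Watts-Strogatz small-world graph and assign the delay τ to each link with probability p_delay (zero delay otherwise). The sketch below uses networkx for the topology; all parameter values are placeholders, not those of the paper.

        import numpy as np
        import networkx as nx

        def ws_network_with_partial_delay(n=100, k=4, p_rewire=0.1, tau=10.0, p_delay=0.3, seed=0):
            # Watts-Strogatz topology; each edge carries delay tau with probability p_delay, else 0.
            rng = np.random.default_rng(seed)
            g = nx.watts_strogatz_graph(n, k, p_rewire, seed=seed)
            delays = {edge: (tau if rng.random() < p_delay else 0.0) for edge in g.edges()}
            nx.set_edge_attributes(g, delays, name="delay")
            return g

        g = ws_network_with_partial_delay()
        n_delayed = sum(1 for _, _, d in g.edges(data=True) if d["delay"] > 0)
        print(f"{n_delayed} of {g.number_of_edges()} edges carry the delay")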

  12. Clustering promotes switching dynamics in networks of noisy neurons

    NASA Astrophysics Data System (ADS)

    Franović, Igor; Klinshov, Vladimir

    2018-02-01

    Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.

  13. Information Transmission and Anderson Localization in two-dimensional networks of firing-rate neurons

    NASA Astrophysics Data System (ADS)

    Natale, Joseph; Hentschel, George

    Firing-rate networks offer a coarse model of signal propagation in the brain. Here we analyze sparse, 2D planar firing-rate networks with no synapses beyond a certain cutoff distance. Additionally, we impose Dale's Principle to ensure that each neuron makes only excitatory or only inhibitory outgoing connections. Using spectral methods, we find that the number of neurons participating in excitations of the network becomes insignificant whenever the connectivity cutoff is tuned to a value near or below the average interneuron separation. Further, neural activations exceeding a certain threshold stay confined to a small region of space. This behavior is an instance of Anderson localization, a disorder-induced phase transition by which an information channel is rendered unable to transmit signals. We discuss several potential implications of localization for both local and long-range computation in the brain. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.

  14. Granger causality network reconstruction of conductance-based integrate-and-fire neuronal systems.

    PubMed

    Zhou, Douglas; Xiao, Yanyang; Zhang, Yaoyu; Xu, Zhiqin; Cai, David

    2014-01-01

    Reconstruction of anatomical connectivity from the measured dynamical activities of coupled neurons is one of the fundamental issues in understanding the structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiment. The Granger causality (GC) analysis remains one of the major approaches for exploring the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, can be mapped to the underlying anatomical connectivity in neuronal networks. We perform the GC analysis on conductance-based integrate-and-fire (I&F) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity amongst individual neurons or subnetworks can be successfully reconstructed by the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing the system or requiring prior knowledge of the neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the raster of the system, i.e., the spike timing of neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for the conductance-based I&F neuronal networks, and show that the GC is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of the GC analysis in other settings.
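
    Pairwise Granger causality of the kind applied above can be sketched with ordinary least-squares autoregressive fits: x is said to Granger-cause y if adding the past of x reduces the prediction error of y. The autoregressive order and the log variance-ratio statistic below are standard textbook choices, not details taken from the paper.

        import numpy as np

        def ar_residual_var(target, regressors, order):
            # Residual variance of an OLS fit of target[t] on the past `order` samples of each regressor.
            n = len(target)
            X = np.array([[1.0] + [r[t - k] for r in regressors for k in range(1, order + 1)]
                          for t in range(order, n)])
            y = target[order:]
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return np.var(y - X @ beta)

        def granger_causality(x, y, order=2):
            # GC from x to y: log(restricted residual variance / full residual variance); > 0 suggests causality.
            return np.log(ar_residual_var(y, [y], order) / ar_residual_var(y, [y, x], order))

        # Toy test: y is driven by the past of x, so GC(x -> y) should clearly exceed GC(y -> x).
        rng = np.random.default_rng(2)
        T = 2000
        x = rng.normal(size=T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = 0.8 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.normal()
        print("GC x->y:", round(granger_causality(x, y), 3))
        print("GC y->x:", round(granger_causality(y, x), 3))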

  15. Granger Causality Network Reconstruction of Conductance-Based Integrate-and-Fire Neuronal Systems

    PubMed Central

    Zhou, Douglas; Xiao, Yanyang; Zhang, Yaoyu; Xu, Zhiqin; Cai, David

    2014-01-01

    Reconstruction of anatomical connectivity from the measured dynamical activities of coupled neurons is one of the fundamental issues in understanding the structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiment. The Granger causality (GC) analysis remains one of the major approaches for exploring the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, can be mapped to the underlying anatomical connectivity in neuronal networks. We perform the GC analysis on conductance-based integrate-and-fire (IF) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity amongst individual neurons or subnetworks can be successfully reconstructed by the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing the system or requiring prior knowledge of the neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the raster of the system, i.e., the spike timing of neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for the conductance-based IF neuronal networks, and show that the GC is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of the GC analysis in other settings. PMID:24586285

  16. Inhibitory neurons promote robust critical firing dynamics in networks of integrate-and-fire neurons.

    PubMed

    Lu, Zhixin; Squires, Shane; Ott, Edward; Girvan, Michelle

    2016-12-01

    We study the firing dynamics of a discrete-state and discrete-time version of an integrate-and-fire neuronal network model with both excitatory and inhibitory neurons. When the integer-valued state of a neuron exceeds a threshold value, the neuron fires, sends out state-changing signals to its connected neurons, and returns to the resting state. In this model, a continuous phase transition from non-ceaseless firing to ceaseless firing is observed. At criticality, power-law distributions of avalanche size and duration with the previously derived exponents, -3/2 and -2, respectively, are observed. Using a mean-field approach, we show analytically how the critical point depends on model parameters. Our main result is that the combined presence of both inhibitory neurons and integrate-and-fire dynamics greatly enhances the robustness of critical power-law behavior (i.e., there is an increased range of parameters, including both sub- and supercritical values, for which several decades of power-law behavior occurs).
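
    A minimal discrete-state, discrete-time integrate-and-fire network in the spirit of the model summarized above is sketched here: a firing neuron adds +1 (excitatory) or -1 (inhibitory) to the state of its targets and resets to rest, and the avalanche size is the number of firings triggered by a single external kick. Network size, connectivity, and the excitatory fraction are illustrative assumptions, and the power-law fit is omitted.

        import numpy as np

        def avalanche_size(adj, signs, init_state, threshold=4, seed_node=0, max_steps=500):
            # Kick one neuron above threshold and count how many firings follow (avalanche size).
            state = init_state.copy()
            state[seed_node] = threshold                     # external input triggers the avalanche
            size = 0
            for _ in range(max_steps):                       # hard cap guards against ceaseless firing
                active = state >= threshold
                if not active.any():
                    break
                size += int(active.sum())
                pulse = adj.T @ (active.astype(int) * signs) # +1 from excitatory, -1 from inhibitory senders
                state[active] = 0                            # firing neurons return to the resting state
                state = np.maximum(state + pulse, 0)
            return size

        rng = np.random.default_rng(3)
        n, threshold = 200, 4
        adj = (rng.random((n, n)) < 0.02).astype(int)        # sparse random connectivity (i projects to j)
        np.fill_diagonal(adj, 0)
        signs = np.where(rng.random(n) < 0.8, 1, -1)         # 80% excitatory, 20% inhibitory neurons
        sizes = [avalanche_size(adj, signs, rng.integers(0, threshold, size=n),
                                threshold, seed_node=rng.integers(n)) for _ in range(1000)]
        print("mean avalanche size:", np.mean(sizes))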

  17. Cytokines and cytokine networks target neurons to modulate long-term potentiation

    PubMed Central

    Prieto, G. Aleph; Cotman, Carl W.

    2017-01-01

    Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines. PMID:28377062

  18. Neural networks with multiple general neuron models: a hybrid computational intelligence approach using Genetic Programming.

    PubMed

    Barton, Alan J; Valdés, Julio J; Orchard, Robert

    2009-01-01

    Classical neural networks are composed of neurons whose nature is determined by a certain function (the neuron model), usually pre-specified. In this paper, a type of neural network (NN-GP) is presented in which: (i) each neuron may have its own neuron model in the form of a general function, (ii) any layout (i.e., network interconnection) is possible, and (iii) no bias nodes or weights are associated with the connections, neurons or layers. The general functions associated with a neuron are learned by searching a function space. They are not provided a priori, but are instead built as part of an Evolutionary Computation process based on Genetic Programming. The resulting network solutions are evaluated based on a fitness measure, which may, for example, be based on classification or regression errors. Two real-world examples are presented to illustrate the promising behaviour on classification problems via construction of a low-dimensional representation of a high-dimensional parameter space associated with the set of all network solutions.

  19. Qualitative-Modeling-Based Silicon Neurons and Their Networks

    PubMed Central

    Kohno, Takashi; Sekikawa, Munehisa; Li, Jing; Nanami, Takuya; Aihara, Kazuyuki

    2016-01-01

    The ionic conductance models of neuronal cells can finely reproduce a wide variety of complex neuronal activities. However, the complexity of these models has prompted the development of qualitative neuron models. They are described by differential equations with a reduced number of variables and their low-dimensional polynomials, which retain the core mathematical structures. Such simple models form the foundation of a bottom-up approach in computational and theoretical neuroscience. We proposed a qualitative-modeling-based approach for designing silicon neuron circuits, in which the mathematical structures in the polynomial-based qualitative models are reproduced by differential equations with silicon-native expressions. This approach can realize low-power-consuming circuits that can be configured to realize various classes of neuronal cells. In this article, our qualitative-modeling-based silicon neuron circuits for analog and digital implementations are quickly reviewed. One of our CMOS analog silicon neuron circuits can realize a variety of neuronal activities with a power consumption less than 72 nW. The square-wave bursting mode of this circuit is explained. Another circuit can realize Class I and II neuronal activities with about 3 nW. Our digital silicon neuron circuit can also realize these classes. An auto-associative memory realized on an all-to-all connected network of these silicon neurons is also reviewed, in which the neuron class plays important roles in its performance. PMID:27378842

  20. Biological modelling of a computational spiking neural network with neuronal avalanches.

    PubMed

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-06-28

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).

  1. A Hox regulatory network establishes motor neuron pool identity and target-muscle connectivity.

    PubMed

    Dasen, Jeremy S; Tice, Bonnie C; Brenner-Morton, Susan; Jessell, Thomas M

    2005-11-04

    Spinal motor neurons acquire specialized "pool" identities that determine their ability to form selective connections with target muscles in the limb, but the molecular basis of this striking example of neuronal specificity has remained unclear. We show here that a Hox transcriptional regulatory network specifies motor neuron pool identity and connectivity. Two interdependent sets of Hox regulatory interactions operate within motor neurons, one assigning rostrocaudal motor pool position and a second directing motor pool diversity at a single segmental level. This Hox regulatory network directs the downstream transcriptional identity of motor neuron pools and defines the pattern of target-muscle connectivity.

  2. A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.

    PubMed

    Hightower, M; Gross, G W

    1985-11-01

    Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.

  3. Synchronous firing patterns of induced pluripotent stem cell-derived cortical neurons depend on the network structure consisting of excitatory and inhibitory neurons.

    PubMed

    Iida, Shoko; Shimba, Kenta; Sakai, Koji; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2018-06-18

    The balance between glutamate-mediated excitation and GABA-mediated inhibition is critical to cortical functioning. However, the contribution of the network structure comprising both types of neurons to cortical functioning has not been elucidated. We aimed to evaluate the relationship between network structure and functional activity patterns in vitro. We used mouse induced pluripotent stem cells (iPSCs) to construct three types of neuronal populations: excitatory-rich (Exc), inhibitory-rich (Inh), and control (Cont). We then analyzed the activity patterns of these neuronal populations using microelectrode arrays (MEAs). Inhibitory synaptic densities differed between the three types of iPSC-derived neuronal populations, and the neurons showed spontaneously synchronized bursting activity with functional maturation over one month. Moreover, different firing patterns were observed between the three populations: Exc demonstrated the highest firing rates, including frequent, long, and dominant bursts, whereas Inh demonstrated the lowest firing rates and the least dominant bursts. Synchronized bursts were enhanced by disinhibition via GABAA receptor blockade. The present study, using iPSC-derived neurons and MEAs, shows for the first time that the synchronized bursting of cortical networks in vitro depends on the network structure comprising excitatory and inhibitory neurons. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. A simplified protocol for differentiation of electrophysiologically mature neuronal networks from human induced pluripotent stem cells.

    PubMed

    Gunhanlar, N; Shpak, G; van der Kroeg, M; Gouty-Colomer, L A; Munshi, S T; Lendemeijer, B; Ghazvini, M; Dupont, C; Hoogendijk, W J G; Gribnau, J; de Vrij, F M S; Kushner, S A

    2018-05-01

    Progress in elucidating the molecular and cellular pathophysiology of neuropsychiatric disorders has been hindered by the limited availability of living human brain tissue. The emergence of induced pluripotent stem cells (iPSCs) has offered a unique alternative strategy using patient-derived functional neuronal networks. However, methods for reliably generating iPSC-derived neurons with mature electrophysiological characteristics have been difficult to develop. Here, we report a simplified differentiation protocol that yields electrophysiologically mature iPSC-derived cortical lineage neuronal networks without the need for astrocyte co-culture or specialized media. This protocol generates a consistent 60:40 ratio of neurons and astrocytes that arise from a common forebrain neural progenitor. Whole-cell patch-clamp recordings of 114 neurons derived from three independent iPSC lines confirmed their electrophysiological maturity, including resting membrane potential (-58.2±1.0 mV), capacitance (49.1±2.9 pF), action potential (AP) threshold (-50.9±0.5 mV) and AP amplitude (66.5±1.3 mV). Nearly 100% of neurons were capable of firing APs, of which 79% had sustained trains of mature APs with minimal accommodation (peak AP frequency: 11.9±0.5 Hz) and 74% exhibited spontaneous synaptic activity (amplitude, 16.03±0.82 pA; frequency, 1.09±0.17 Hz). We expect this protocol to be of broad applicability for implementing iPSC-based neuronal network models of neuropsychiatric disorders.

  5. Efficient Transmission of Subthreshold Signals in Complex Networks of Spiking Neurons

    PubMed Central

    Torres, Joaquin J.; Elices, Irene; Marro, J.

    2015-01-01

    We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances (which naturally balances the network with excitatory and inhibitory synapses) and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, for well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest. PMID:25799449

  6. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks.

    PubMed

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney J

    2018-05-01

    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint-satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven by the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
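
    A single cooperative-competitive module of nonsaturating (rectified, linear-threshold) neurons, of the general kind used as the computational substrate above, can be sketched as follows; the self-excitation and lateral-inhibition weights and the time constant are illustrative assumptions rather than values from the paper. In the paper's architecture, constraint neurons would couple several such modules; only the competition inside one module is shown here.

        import numpy as np

        def wta_module(inputs, alpha=0.5, beta=1.0, tau=10.0, dt=0.5, steps=2000):
            # Rate dynamics with rectification (no upper saturation), self-excitation alpha,
            # and lateral inhibition beta from all other units: the largest input wins.
            x = np.zeros(len(inputs))
            for _ in range(steps):
                lateral = beta * (x.sum() - x)
                drive = np.maximum(inputs + alpha * x - lateral, 0.0)
                x += (dt / tau) * (-x + drive)
            return x

        # The unit with the largest input stays active; the others are suppressed to zero.
        print(np.round(wta_module(np.array([1.0, 1.1, 0.9])), 2))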

  7. A novel environmental chamber for neuronal network multisite recordings.

    PubMed

    Biffi, E; Regalia, G; Ghezzi, D; De Ceglia, R; Menegon, A; Ferrigno, G; Fiore, G B; Pedrocchi, A

    2012-10-01

    Environmental stability is a critical issue for neuronal networks in vitro. Hence, the ability to control the physical and chemical environment of cell cultures during electrophysiological measurements is an important requirement in the experimental design. In this work, we describe the development and the experimental verification of a closed chamber for multisite electrophysiology and optical monitoring. The chamber provides stable temperature, pH and humidity and guarantees cell viability comparable to standard incubators. Besides, it integrates the electronics for long-term neuronal activity recording. The system is portable and adaptable for multiple network housings, which allows performing parallel experiments in the same environment. Our results show that this device can be a solution for long-term electrophysiology, for dual network experiments and for coupled optical and electrical measurements. Copyright © 2012 Wiley Periodicals, Inc.

  8. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    NASA Astrophysics Data System (ADS)

    Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil

    2016-11-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped onto either the synaptic or the neuronal components. The hardware emulation of these stochastic neural networks is currently being studied extensively using resistive memories, or memristors. The ionic process involved in the underlying switching behavior of memristive elements is considered the main source of stochasticity in their operation. Building on this inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison between the two approaches, and of where the memristor best fits in, are their impact on system performance in terms of accuracy, recognition rates, and learning.

  9. Self-organized criticality occurs in non-conservative neuronal networks during `up' states

    NASA Astrophysics Data System (ADS)

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-10-01

    During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states (such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex) is self-organized criticality (SOC). SOC is characterized by activity 'avalanches' with a branching parameter near unity and a size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic 'leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states, and that up states are critical and down states are subcritical.
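
    The branching parameter used to characterize SOC is commonly estimated from binned population activity as the ratio of descendants to ancestors across consecutive time bins; the ratio-of-sums convention in the sketch below is one of several estimators in use and is shown only for illustration.

        import numpy as np

        def branching_parameter(raster):
            # raster: (time_bins, neurons) binary spike array.
            # sigma = total descendants / total ancestors over consecutive bins with active ancestors;
            # sigma near 1 is consistent with critical (SOC-like) dynamics.
            counts = raster.sum(axis=1)
            ancestors, descendants = counts[:-1], counts[1:]
            mask = ancestors > 0
            return descendants[mask].sum() / ancestors[mask].sum()

        # Toy raster with stationary random activity: the estimate comes out close to 1.
        rng = np.random.default_rng(4)
        raster = (rng.random((1000, 100)) < 0.05).astype(int)
        print("estimated branching parameter:", round(float(branching_parameter(raster)), 2))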

  10. Chimera-like states in a neuronal network model of the cat brain

    NASA Astrophysics Data System (ADS)

    Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.

    2017-08-01

    Neuronal systems have been modeled by complex networks at different description levels. Recently, it has been verified that networks can simultaneously exhibit one coherent and one incoherent domain, a phenomenon known as a chimera state. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model. The Hindmarsh-Rose equations are a well-known model of neuronal activity that has been used to simulate the membrane potential of a neuron. Here, we analyse under which conditions chimera states are present, as well as the effects induced by the coupling intensity on them. We observe the existence of chimera states in which the incoherent structure can be composed of desynchronised spikes or desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.

  11. Impact of delays on the synchronization transitions of modular neuronal networks with hybrid synapses

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok

    2013-09-01

    The combined effects of the information transmission delay and of the ratio of electrical to chemical synapses on the synchronization transitions in a hybrid modular neuronal network are investigated in this paper. Numerical results show that the synchronization of neuron activities can be either promoted or destroyed as the information transmission delay increases, irrespective of the probability of electrical synapses in the hybrid-synaptic network. Interestingly, when the number of electrical synapses exceeds a certain level, further increasing their proportion markedly enhances the spatiotemporal synchronization transitions. Moreover, the coupling strength has a significant effect on the synchronization transition, and the dominant type of synapse always has the more profound effect on the emergence of synchronous behavior. Furthermore, the results for the modular network structures demonstrate that excessive partitioning of the modular network may result in a dramatic deterioration of neuronal synchronization. Considering that information transmission delays are inevitable in intra- and inter-network neuronal communication, the obtained results may have important implications for the exploration of the synchronization mechanisms underlying several neural system diseases such as Parkinson's disease.

  12. Synchronization in a chaotic neural network with time delay depending on the spatial distance between neurons

    NASA Astrophysics Data System (ADS)

    Tang, Guoning; Xu, Kesheng; Jiang, Luoluo

    2011-10-01

    Synchronization is investigated in a two-dimensional Hindmarsh-Rose neuronal network by introducing a global coupling scheme with time delay, where the length of the time delay is proportional to the spatial distance between neurons. We find that the time delay always disturbs synchronization of the neuronal network. When both the coupling strength and the length of time delay per unit distance (i.e., the enlargement factor) are large enough, the time delay induces abnormal membrane potential oscillations in the neurons. Specifically, the abnormal membrane potential oscillations of symmetrically placed neurons are in antiphase, so that a large coupling strength and enlargement factor lead to desynchronization of the neuronal network. Complete and intermittently complete synchronization of the neuronal network are observed for the right choice of parameters. The physical mechanism underlying these phenomena is analyzed.

  13. Excitement and synchronization of small-world neuronal networks with short-term synaptic plasticity.

    PubMed

    Han, Fang; Wiercigroch, Marian; Fang, Jian-An; Wang, Zhijie

    2011-10-01

    Excitement and synchronization of electrically and chemically coupled Newman-Watts (NW) small-world neuronal networks with a short-term synaptic plasticity described by a modified Oja learning rule are investigated. For each type of neuronal network, the variation properties of synaptic weights are examined first. Then the effects of the learning rate, the coupling strength and the shortcut-adding probability on excitement and synchronization of the neuronal network are studied. It is shown that the synaptic learning suppresses the over-excitement, helps synchronization for the electrically coupled network but impairs synchronization for the chemically coupled one. Both the introduction of shortcuts and the increase of the coupling strength improve synchronization and they are helpful in increasing the excitement for the chemically coupled network, but have little effect on the excitement of the electrically coupled one.

  14. A modeling comparison of projection neuron- and neuromodulator-elicited oscillations in a central pattern generating network.

    PubMed

    Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan

    2008-06-01

    Many central pattern generating networks are influenced by synaptic input from modulatory projection neurons. The network response to a projection neuron is sometimes mimicked by bath applying the neuronally-released modulator, despite the absence of network interactions with the projection neuron. One interesting example occurs in the crab stomatogastric ganglion (STG), where bath applying the neuropeptide pyrokinin (PK) elicits a gastric mill rhythm which is similar to that elicited by the projection neuron modulatory commissural neuron 1 (MCN1), despite the absence of PK in MCN1 and the fact that MCN1 is not active during the PK-elicited rhythm. MCN1 terminals have fast and slow synaptic actions on the gastric mill network and are presynaptically inhibited by this network in the STG. These local connections are inactive in the PK-elicited rhythm, and the mechanism underlying this rhythm is unknown. We use mathematical and biophysically-realistic modeling to propose potential mechanisms by which PK can elicit a gastric mill rhythm that is similar to the MCN1-elicited rhythm. We analyze slow-wave network oscillations using simplified mathematical models and, in parallel, develop biophysically-realistic models that account for fast, action potential-driven oscillations and some spatial structure of the network neurons. Our results illustrate how the actions of bath-applied neuromodulators can mimic those of descending projection neurons through mathematically similar but physiologically distinct mechanisms.

  15. Field coupling-induced pattern formation in two-layer neuronal network

    NASA Astrophysics Data System (ADS)

    Qin, Huixin; Wang, Chunni; Cai, Ning; An, Xinlei; Alzahrani, Faris

    2018-07-01

    The exchange of charged ions across the membrane generates fluctuations of the membrane potential and also complex electromagnetic induction effects. Diversity in the excitability of neurons induces different mode selection and dynamical responses to external stimuli. Based on a neuron model with electromagnetic induction, described by magnetic flux and a memristor, a two-layer network is proposed to discuss pattern control and wave propagation in the network. Within each layer, gap-junction coupling connects the neurons, while field coupling is considered between the two layers of the network. The field coupling is approximated by a coupling of magnetic flux, which is associated with the distribution of the electromagnetic field. It is found that an appropriate intensity of field coupling can enhance wave propagation from one layer to the other, and well-ordered spatial patterns are formed. When the two layers have different excitabilities, the target wave developed in the second layer differs from the target wave triggered in the first layer. A potential mechanism is that pacemaker-like driving from the first layer is encoded by the second layer.

  16. Burst synchronization transitions in a neuronal network of subnetworks

    NASA Astrophysics Data System (ADS)

    Sun, Xiaojuan; Lei, Jinzhi; Perc, Matjaž; Kurths, Jürgen; Chen, Guanrong

    2011-03-01

    In this paper, the transitions of burst synchronization are explored in a neuronal network consisting of subnetworks. The studied network is composed of electrically coupled bursting Hindmarsh-Rose neurons. Numerical results show that two types of burst synchronization transitions can be induced not only by variations of the intra- and intercoupling strengths but also by changing the probability of random links between different subnetworks and the number of subnetworks. Furthermore, we find that the underlying mechanisms for these two burst synchronization transitions are different: one is due to a change in the number of spikes per burst, while the other is caused by a change of the bursting type. Considering that changes in coupling strengths and neuronal connections are closely interlaced with brain plasticity, the presented results could have important implications for the role of brain plasticity in functional behaviors that are associated with synchronization.

  17. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.

  18. Cytokines and cytokine networks target neurons to modulate long-term potentiation.

    PubMed

    Prieto, G Aleph; Cotman, Carl W

    2017-04-01

    Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  20. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  1. On the applicability of STDP-based learning mechanisms to spiking neuron network models

    NASA Astrophysics Data System (ADS)

    Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.

    2016-11-01

    Ways of creating a practically effective learning method for spiking neuron networks, one that would be appropriate for implementation in neuromorphic hardware while being based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability under different STDP rules is evaluated. The usability of alternative combined learning schemes, involving artificial and spiking neuron models, is demonstrated on the iris benchmark task and on the practical task of gender recognition.
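
    For reference, the standard pair-based form of STDP referred to above can be written in a few lines. The exponential windows, the all-to-all pairing of spikes, and the amplitude and time-constant values below are common textbook choices assumed here for illustration, not the specific rules evaluated in the paper.

```python
# Pair-based STDP: potentiate when a presynaptic spike precedes a postsynaptic
# spike, depress otherwise, with exponentially decaying windows.
import numpy as np

A_plus, A_minus = 0.01, 0.012      # learning-rate amplitudes (assumed values)
tau_plus, tau_minus = 20.0, 20.0   # trace time constants in ms

def stdp_dw(delta_t):
    """Weight change for a spike-time difference delta_t = t_post - t_pre (ms)."""
    if delta_t >= 0:
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)

# Accumulate updates over all spike pairs of one synapse (all-to-all pairing).
pre_spikes = np.array([10.0, 50.0, 90.0])     # ms
post_spikes = np.array([12.0, 45.0, 95.0])    # ms
w = 0.5
for t_post in post_spikes:
    for t_pre in pre_spikes:
        w += stdp_dw(t_post - t_pre)
w = float(np.clip(w, 0.0, 1.0))               # hard bounds, as often used in hardware
print(f"updated weight: {w:.3f}")
```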

  2. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks

    PubMed Central

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In the last years, the idea to dynamically interface biological neurons with artificial ones has become more and more urgent. The reason is essentially due to the design of innovative neuroprostheses where biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just programming the hardware to achieve the required functionality. In this paper, this possibility is explored presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network counting up to 1,440 neurons, in real-time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments. PMID:28293163
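
    A plain software reference for the Izhikevich model mentioned above is sketched below. The regular-spiking parameters, the forward-Euler step and the 30 mV cut-off follow the standard published formulation of the model; this is not the paper's fixed-point FPGA implementation.

```python
# Regular-spiking Izhikevich neuron, integrated with forward Euler.
def izhikevich(I, T=1000.0, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Return spike times (ms) for a constant input current I over T ms."""
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike cut-off and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)), "spikes in 1 s")
```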

  3. Thermodynamics and signatures of criticality in a network of neurons.

    PubMed

    Tkačik, Gašper; Mora, Thierry; Marre, Olivier; Amodei, Dario; Palmer, Stephanie E; Berry, Michael J; Bialek, William

    2015-09-15

    The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but, with more spikes, the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N = 160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as N increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. We suggest further tests of criticality, and give a brief discussion of its functional significance.
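
    The probability/numerosity tradeoff described above can be made concrete with synthetic data: treat each observed activity pattern's negative log-probability as its energy and count how many distinct patterns fall at each energy. The sketch below uses independent sparse spiking as a stand-in for the retinal recordings and only illustrates the construction; it is not the paper's model-based analysis.

```python
# Empirical energy E = -ln P(pattern) and a microcanonical entropy ~ ln(count)
# per energy bin, from synthetic binary spike "words".
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
N, T = 20, 50_000
words = rng.random((T, N)) < 0.05                 # sparse spiking, as in the abstract
counts = Counter(map(bytes, words.astype(np.uint8)))
p = np.array(list(counts.values()), dtype=float) / T
E = -np.log(p)                                    # energy of each observed pattern

hist, edges = np.histogram(E, bins=10)            # density of states per energy bin
for n_patterns, lo, hi in zip(hist, edges[:-1], edges[1:]):
    if n_patterns:
        print(f"E in [{lo:4.1f}, {hi:4.1f}): {n_patterns:6d} patterns, "
              f"entropy ~ ln(count) = {np.log(n_patterns):.2f}")
```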

  4. Effects of channel noise on firing coherence of small-world Hodgkin-Huxley neuronal networks

    NASA Astrophysics Data System (ADS)

    Sun, X. J.; Lei, J. Z.; Perc, M.; Lu, Q. S.; Lv, S. J.

    2011-01-01

    We investigate the effects of channel noise on the firing coherence of Watts-Strogatz small-world networks consisting of biophysically realistic HH neurons that have a fraction of blocked voltage-gated sodium and potassium ion channels embedded in their membranes. The intensity of channel noise is determined by the number of non-blocked ion channels, which depends on the fraction of working ion channels and the membrane patch size under the assumption of homogeneous ion channel density. We find that the firing coherence of the neuronal network can be either enhanced or reduced depending on the source of channel noise. As shown in this paper, sodium channel noise reduces the firing coherence of neuronal networks; in contrast, potassium channel noise enhances it. Furthermore, compared with potassium channel noise, sodium channel noise plays the dominant role in affecting the firing coherence of the neuronal network. Moreover, we show that the observed phenomena are independent of the rewiring probability.

  5. The formation and distribution of hippocampal synapses on patterned neuronal networks

    NASA Astrophysics Data System (ADS)

    Dowell-Mesfin, Natalie M.

    Communication within the central nervous system is highly orchestrated, with neurons forming trillions of specialized junctions called synapses. In vivo, biochemical and topographical cues can regulate neuronal growth. Biochemical cues also influence synaptogenesis and synaptic plasticity. The effects of topography on the development of synapses have been less studied. In vitro, neuronal growth is unorganized and complex, making it difficult to study the development of networks. Patterned topographical cues guide and control the growth of neuronal processes (axons and dendrites) into organized networks. The aim of this dissertation was to determine whether patterned topographical cues can influence synapse formation and distribution. Standard fabrication and compression molding procedures were used to produce silicon masters and polystyrene replicas with topographical cues presented as 1 μm high pillars with diameters of 0.5 and 2.0 μm and gaps of 1.0 to 5.0 μm. Embryonic rat hippocampal neurons were grown on the patterned surfaces. A developmental analysis with immunocytochemistry was used to assess the distribution of pre- and post-synaptic proteins. Activity-dependent pre-synaptic vesicle uptake using functional imaging dyes was also performed. Adaptive filtering computer algorithms identified synapses by segmenting juxtaposed pairs of pre- and post-synaptic labels. Synapse number and area were automatically extracted from each deconvolved data set. In addition, neuronal processes were traced automatically to assess changes in synapse distribution. The results of these experiments demonstrated that patterned topographic cues can induce organized and functional neuronal networks that can serve as models for the study of synapse formation and plasticity as well as for the development of neuroprosthetic devices.

  6. Simultaneous multi-patch-clamp and extracellular-array recordings: Single neuron reflects network activity.

    PubMed

    Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido

    2016-11-08

    The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present an alternative, flexible approach to the technique based on a massive tiling of networks by large-scale arrays of electrodes to monitor their activity.

  7. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    PubMed Central

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  8. Computational models of neuron-astrocyte interactions lead to improved efficacy in the performance of neural networks.

    PubMed

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.

  9. A multiscale method for a robust detection of the default mode network

    NASA Astrophysics Data System (ADS)

    Baquero, Katherine; Gómez, Francisco; Cifuentes, Christian; Guldenmund, Pieter; Demertzi, Athena; Vanhaudenhuyse, Audrey; Gosseries, Olivia; Tshibanda, Jean-Flory; Noirhomme, Quentin; Laureys, Steven; Soddu, Andrea; Romero, Eduardo

    2013-11-01

    The Default Mode Network (DMN) is a resting state network widely used for the analysis and diagnosis of mental disorders. It is normally detected in fMRI data, but for its detection in data corrupted by motion artefacts or low neuronal activity, the use of a robust analysis method is mandatory. In fMRI it has been shown that the signal-to-noise ratio (SNR) and the detection sensitivity of neuronal regions increase with different smoothing kernel sizes. Here we propose to use a multiscale decomposition based on a linear scale-space representation for the detection of the DMN. Three main points are proposed in this methodology: first, the use of fMRI data at different smoothing scale-spaces; second, detection of independent neuronal components of the DMN at each scale by using standard preprocessing methods and ICA decomposition at scale level; and finally, a weighted contribution of each scale given by the Goodness of Fit measurement. This method was applied to a group of control subjects and was compared with a standard preprocessing baseline. The detection of the DMN was improved both at the single-subject level and at the group level. Based on these results, we suggest using this methodology to enhance the detection of the DMN in data perturbed with artefacts or in subjects with low neuronal activity. Furthermore, the multiscale method could be extended to the detection of other resting state neuronal networks.
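
    A rough sketch of the three steps is given below, with scipy/scikit-learn standing in for the actual tooling. The Gaussian scale-space smoothing, the FastICA call, the spatial-template Goodness-of-Fit score and all parameter values are assumptions for illustration, not the published pipeline.

```python
# Multiscale smoothing -> spatial ICA per scale -> GoF-weighted combination.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import FastICA

def dmn_multiscale(fmri, dmn_mask, scales=(1.0, 2.0, 4.0), n_components=10):
    """fmri: 4-D array (x, y, z, t); dmn_mask: boolean 3-D DMN template (assumed given)."""
    flat_mask = dmn_mask.reshape(-1)
    weighted = np.zeros(fmri.shape[:3])
    total = 0.0
    for s in scales:
        # step 1: spatial smoothing at this scale (no smoothing along time)
        smoothed = gaussian_filter(fmri, sigma=(s, s, s, 0))
        # step 2: spatial ICA (voxels as samples -> columns are spatial maps)
        X = smoothed.reshape(-1, fmri.shape[3])
        maps = FastICA(n_components=n_components, random_state=0).fit_transform(X)
        # step 3: Goodness of Fit = mean |weight| inside the template minus outside
        gof = np.array([np.abs(maps[:, k])[flat_mask].mean()
                        - np.abs(maps[:, k])[~flat_mask].mean()
                        for k in range(n_components)])
        best = int(np.argmax(gof))
        weight = max(float(gof[best]), 0.0)   # ignore scales with no DMN-like component
        weighted += weight * np.abs(maps[:, best]).reshape(fmri.shape[:3])
        total += weight
    return weighted / total if total > 0 else weighted

# usage (shapes only): dmn_map = dmn_multiscale(fmri_data, template_mask)
```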

  10. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    PubMed

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2018-06-01

    This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper, we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  11. Effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks

    NASA Astrophysics Data System (ADS)

    Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen

    2017-05-01

    In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay pdelay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.
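
    The two parameters highlighted above, τ and pdelay, translate directly into a network construction step. The networkx-based sketch below is an assumed implementation that simply tags each small-world connection as delayed with probability p_delay; the neuron dynamics themselves are omitted.

```python
# Watts-Strogatz graph with a fraction p_delay of delayed connections.
import numpy as np
import networkx as nx

def build_partially_delayed_network(n=100, k=4, p_rewire=0.1, p_delay=0.3,
                                    tau=10.0, seed=0):
    rng = np.random.default_rng(seed)
    g = nx.watts_strogatz_graph(n, k, p_rewire, seed=seed)
    delays = {}
    for u, v in g.edges():
        # a fraction p_delay of the connections carries delay tau, the rest none
        delays[(u, v)] = tau if rng.random() < p_delay else 0.0
    nx.set_edge_attributes(g, delays, name="delay")
    return g

g = build_partially_delayed_network()
n_delayed = sum(1 for _, _, d in g.edges(data="delay") if d > 0)
print(f"{n_delayed} of {g.number_of_edges()} connections are delayed")
```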

  12. Effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks.

    PubMed

    Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen

    2017-05-01

    In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay pdelay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.

  13. Neural control of heart rate: the role of neuronal networking.

    PubMed

    Kember, G; Armour, J A; Zamir, M

    2011-05-21

    Neural control of heart rate, particularly its sympathetic component, is generally thought to reside primarily in the central nervous system, though accumulating evidence suggests that intrathoracic extracardiac and intrinsic cardiac ganglia are also involved. We propose an integrated model in which the control of heart rate is achieved via three neuronal "levels" representing three control centers instead of the conventional one. Most importantly, in this model control is effected through networking between neuronal populations within and among these layers. The results obtained indicate that networking serves to process demands for systemic blood flow before transducing them to cardiac motor neurons. This provides the heart with a measure of protection against the possibility of "overdrive" implied by the currently held centrally driven system. The results also show that localized networking instabilities can lead to sporadic low frequency oscillations that have the characteristics of the well-known Mayer waves. The sporadic nature of Mayer waves has been unexplained so far and is of particular interest in clinical diagnosis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Parallel network simulations with NEURON.

    PubMed

    Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L

    2006-10-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.
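
    A minimal round-robin setup with NEURON's ParallelContext is sketched below to show the gid bookkeeping that this kind of parallelization relies on. It is a toy example (single-section HH cells, no synapses), not one of the three published network models, and it assumes NEURON's Python interface run under MPI (e.g. mpiexec -n 4 python script.py); without MPI it simply runs as a single rank.

```python
# Round-robin distribution of cells across ranks with NEURON's ParallelContext.
from neuron import h
h.load_file("stdrun.hoc")

pc = h.ParallelContext()
rank, nhost = int(pc.id()), int(pc.nhost())

cells, netcons = [], []
for gid in range(40):                    # 40 toy cells distributed round-robin
    if gid % nhost != rank:
        continue
    pc.set_gid2node(gid, rank)           # this rank owns gid
    sec = h.Section(name=f"cell_{gid}")
    sec.insert("hh")
    nc = h.NetCon(sec(0.5)._ref_v, None, sec=sec)
    nc.threshold = -20
    pc.cell(gid, nc)                     # register the spike source for this gid
    cells.append(sec)
    netcons.append(nc)

pc.set_maxstep(10)                       # cap on the interprocessor integration step (ms)
h.stdinit()
pc.psolve(100)                           # integrate all subnets to t = 100 ms
pc.barrier()
pc.done()
```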

  15. Parallel Network Simulations with NEURON

    PubMed Central

    Migliore, M.; Cannia, C.; Lytton, W.W; Markram, Henry; Hines, M. L.

    2009-01-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored. PMID:16732488

  16. Drifting States and Synchronization Induced Chaos in Autonomous Networks of Excitable Neurons.

    PubMed

    Echeveste, Rodrigo; Gros, Claudius

    2016-01-01

    The study of balanced networks of excitatory and inhibitory neurons has led to several open questions. On the one hand it is yet unclear whether the asynchronous state observed in the brain is autonomously generated, or if it results from the interplay between external drivings and internal dynamics. It is also not known, which kind of network variabilities will lead to irregular spiking and which to synchronous firing states. Here we show how isolated networks of purely excitatory neurons generically show asynchronous firing whenever a minimal level of structural variability is present together with a refractory period. Our autonomous networks are composed of excitable units, in the form of leaky integrators spiking only in response to driving currents, remaining otherwise quiet. For a non-uniform network, composed exclusively of excitatory neurons, we find a rich repertoire of self-induced dynamical states. We show in particular that asynchronous drifting states may be stabilized in purely excitatory networks whenever a refractory period is present. Other states found are either fully synchronized or mixed, containing both drifting and synchronized components. The individual neurons considered are excitable and hence do not dispose of intrinsic natural firing frequencies. An effective network-wide distribution of natural frequencies is however generated autonomously through self-consistent feedback loops. The asynchronous drifting state is, additionally, amenable to an analytic solution. We find two types of asynchronous activity, with the individual neurons spiking regularly in the pure drifting state, albeit with a continuous distribution of firing frequencies. The activity of the drifting component, however, becomes irregular in the mixed state, due to the periodic driving of the synchronized component. We propose a new tool for the study of chaos in spiking neural networks, which consists of an analysis of the time series of pairs of consecutive interspike

  17. Drifting States and Synchronization Induced Chaos in Autonomous Networks of Excitable Neurons

    PubMed Central

    Echeveste, Rodrigo; Gros, Claudius

    2016-01-01

    The study of balanced networks of excitatory and inhibitory neurons has led to several open questions. On the one hand it is yet unclear whether the asynchronous state observed in the brain is autonomously generated, or if it results from the interplay between external drivings and internal dynamics. It is also not known, which kind of network variabilities will lead to irregular spiking and which to synchronous firing states. Here we show how isolated networks of purely excitatory neurons generically show asynchronous firing whenever a minimal level of structural variability is present together with a refractory period. Our autonomous networks are composed of excitable units, in the form of leaky integrators spiking only in response to driving currents, remaining otherwise quiet. For a non-uniform network, composed exclusively of excitatory neurons, we find a rich repertoire of self-induced dynamical states. We show in particular that asynchronous drifting states may be stabilized in purely excitatory networks whenever a refractory period is present. Other states found are either fully synchronized or mixed, containing both drifting and synchronized components. The individual neurons considered are excitable and hence do not dispose of intrinsic natural firing frequencies. An effective network-wide distribution of natural frequencies is however generated autonomously through self-consistent feedback loops. The asynchronous drifting state is, additionally, amenable to an analytic solution. We find two types of asynchronous activity, with the individual neurons spiking regularly in the pure drifting state, albeit with a continuous distribution of firing frequencies. The activity of the drifting component, however, becomes irregular in the mixed state, due to the periodic driving of the synchronized component. We propose a new tool for the study of chaos in spiking neural networks, which consists of an analysis of the time series of pairs of consecutive interspike

  18. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV: structuring synaptic pathways among recurrent connections.

    PubMed

    Gilson, Matthieu; Burkitt, Anthony N; Grayden, David B; Thomas, Doreen A; van Hemmen, J Leo

    2009-12-01

    In neuronal networks, the changes of synaptic strength (or weight) performed by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this phenomenon occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyze the interplay between the neuronal activity (firing rates and the spike-time correlations) and the learning dynamics, when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead to both a stabilization of all the neuron firing rates (homeostatic equilibrium) and a robust weight specialization. The pattern of specialization for the recurrent weights is determined by a relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters and the synaptic response properties. We find conditions for feed-forward pathways or areas with strengthened self-feedback to emerge in an initially homogeneous recurrent network.

  19. Simultaneous multi-patch-clamp and extracellular-array recordings: Single neuron reflects network activity

    NASA Astrophysics Data System (ADS)

    Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido

    2016-11-01

    The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present an alternative, flexible approach to the technique based on a massive tiling of networks by large-scale arrays of electrodes to monitor their activity.

  20. Neuronal network disintegration: common pathways linking neurodegenerative diseases.

    PubMed

    Ahmed, Rebekah M; Devenney, Emma M; Irish, Muireann; Ittner, Arne; Naismith, Sharon; Ittner, Lars M; Rohrer, Jonathan D; Halliday, Glenda M; Eisen, Andrew; Hodges, John R; Kiernan, Matthew C

    2016-11-01

    Neurodegeneration refers to a heterogeneous group of brain disorders that progressively evolve. It has been increasingly appreciated that many neurodegenerative conditions overlap at multiple levels, and therefore traditional clinicopathological correlation approaches to better classify a disease have met with limited success. Neuronal network disintegration is fundamental to neurodegeneration, and concepts built around network disintegration may better explain the overlap between clinical and pathological phenotypes. In this Review, promoters of overlap in neurodegeneration incorporating behavioural, cognitive, metabolic, motor, and extrapyramidal presentations will be critically appraised. In addition, evidence that may support the existence of large-scale networks that might be contributing to phenotypic differentiation will be considered across the neurodegenerative spectrum. Disintegration of neuronal networks through different pathological processes, such as prion-like spread, may provide a better paradigm of disease and thereby facilitate the identification of novel therapies for neurodegeneration. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  1. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    PubMed

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.

  2. An integrate-and-fire model for synchronized bursting in a network of cultured cortical neurons.

    PubMed

    French, D A; Gruenstein, E I

    2006-12-01

    It has been suggested that spontaneous synchronous neuronal activity is an essential step in the formation of functional networks in the central nervous system. The key features of this type of activity consist of bursts of action potentials with associated spikes of elevated cytoplasmic calcium. These features are also observed in networks of rat cortical neurons that have been formed in culture. Experimental studies of these cultured networks have led to several hypotheses for the mechanisms underlying the observed synchronized oscillations. In this paper, bursting integrate-and-fire type mathematical models for regular spiking (RS) and intrinsic bursting (IB) neurons are introduced and incorporated through a small-world connection scheme into a two-dimensional excitatory network similar to those in the cultured network. This computer model exhibits spontaneous synchronous activity through mechanisms similar to those hypothesized for the cultured experimental networks. Traces of the membrane potential and cytoplasmic calcium from the model closely match those obtained from experiments. We also consider the impact on network behavior of the IB neurons, the geometry and the small world connection scheme.

  3. Integrated microfluidic platforms for investigating neuronal networks

    NASA Astrophysics Data System (ADS)

    Kim, Hyung Joon

    (multielectrode array) or nanowire electrode array to study electrophysiology in neuronal networks. Also, "diode-like" microgrooves to control the number of neuronal processes are embedded in this platform. Chapter 6 concludes with possible future directions of this work. Interfacing micro/nanotechnology with primary neuron culture would open many doors in fundamental neuroscience research and also in biomedical innovation.

  4. Caged Neuron MEA: A system for long-term investigation of cultured neural network connectivity

    PubMed Central

    Erickson, Jonathan; Tooker, Angela; Tai, Y-C.; Pine, Jerome

    2008-01-01

    Traditional techniques for investigating cultured neural networks, such as the patch clamp and multi-electrode array, are limited by: 1) the number of identified cells which can be simultaneously electrically contacted, 2) the length of time for which cells can be studied, and 3) the lack of one-to-one neuron-to-electrode specificity. Here, we present a new device—the caged neuron multi-electrode array—which overcomes these limitations. This micro-machined device consists of an array of neurocages which mechanically trap a neuron near an extracellular electrode. While the cell body is trapped, the axon and dendrites can freely grow into the surrounding area to form a network. The electrode is bi-directional, capable of both stimulating and recording action potentials. This system is non-invasive, so that all constituent neurons of a network can be studied over its lifetime with stable one-to-one neuron-to-electrode correspondence. Proof-of-concept experiments are described to illustrate that functional networks form in a neurochip system of 16 cages in a 4×4 array, and that suprathreshold connectivity can be fully mapped over several weeks. The neurochip opens a new domain in neurobiology for studying small cultured neural networks. PMID:18775453

  5. Analysis of connectivity map: Control to glutamate injured and phenobarbital treated neuronal network

    NASA Astrophysics Data System (ADS)

    Kamal, Hassan; Kanhirodan, Rajan; Srinivas, Kalyan V.; Sikdar, Sujit K.

    2010-04-01

    We study the responses of a cultured neural network when it is exposed to an epileptogenic glutamate injury causing epilepsy and to subsequent treatment with phenobarbital, by constructing a connectivity map of the neurons using the correlation matrix. This study is particularly useful for understanding pharmaceutical-drug-induced changes in the neuronal network properties, with insights into changes at the systems biology level.
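
    One common way to construct such a correlation-based connectivity map is sketched below. The binning, Pearson correlation and fixed threshold are generic assumptions for illustration, not the authors' exact analysis.

```python
# Connectivity map from spike trains: bin, correlate, threshold into adjacency.
import numpy as np

def connectivity_map(spike_times, n_channels, t_max, bin_s=0.05, threshold=0.3):
    """spike_times: list of arrays of spike times in seconds, one per channel."""
    bins = np.arange(0.0, t_max + bin_s, bin_s)
    rates = np.array([np.histogram(st, bins=bins)[0] for st in spike_times], dtype=float)
    corr = np.corrcoef(rates)                                  # n_channels x n_channels
    adjacency = (np.abs(corr) >= threshold) & ~np.eye(n_channels, dtype=bool)
    return corr, adjacency

rng = np.random.default_rng(3)
spikes = [np.sort(rng.uniform(0, 60, rng.integers(50, 200))) for _ in range(8)]
corr, adj = connectivity_map(spikes, n_channels=8, t_max=60)
print("connected pairs:", int(adj.sum()) // 2)
```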

  6. Developmental changes of neuronal networks associated with strategic social decision-making.

    PubMed

    Steinmann, Elisabeth; Schmalor, Antonia; Prehn-Kristensen, Alexander; Wolff, Stephan; Galka, Andreas; Möhring, Jan; Gerber, Wolf-Dieter; Petermann, Franz; Stephani, Ulrich; Siniatchkin, Michael

    2014-04-01

    One of the important prerequisites for successful social interaction is the willingness of each individual to cooperate socially. Using the ultimatum game, several studies have demonstrated that the process of deciding to cooperate or to defect in interaction with a partner is associated with activation of the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex (ACC), anterior insula (AI), and inferior frontal cortex (IFC). This study investigates developmental changes in this neuronal network. 15 healthy children (8-12 years), 15 adolescents (13-18 years) and 15 young adults (19-28 years) were investigated using the ultimatum game. Neuronal networks representing decision-making based on strategic thinking were characterized using functional MRI. In all age groups, the process of decision-making in reaction to unfair offers was associated with hemodynamic changes in similar regions. Compared with children, however, healthy adults and adolescents revealed greater activation in the IFC and the fusiform gyrus, as well as the nucleus accumbens. In contrast, healthy children displayed more activation in the AI, the dorsal part of the ACC, and the DLPFC. There were no differences in brain activations between adults and adolescents. The neuronal mechanisms underlying strategic social decision-making are already developed by the age of eight. Decision-making based on strategic thinking is associated with age-dependent involvement of different brain regions. Neuronal networks underlying theory of mind and reward anticipation are more activated in adults and adolescents, consistent with increasing perspective-taking with age. Reflecting greater emotional reactivity and the corresponding compensatory coping at younger ages, children show higher activation in a neuronal network associated with emotional processing and executive control. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Hopf bifurcation of an (n + 1) -neuron bidirectional associative memory neural network model with delays.

    PubMed

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of the oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity depends crucially on the delay and is also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.

  8. Simultaneous multi-patch-clamp and extracellular-array recordings: Single neuron reflects network activity

    PubMed Central

    Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido

    2016-01-01

    The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present an alternative, flexible approach to the technique based on a massive tiling of networks by large-scale arrays of electrodes to monitor their activity. PMID:27824075

  9. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10⁻³ seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10⁻⁵ seconds or 10⁻⁹ seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
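
    The core of the recalibrated scheme described above can be sketched for a single leaky integrate-and-fire neuron: take a second-order step, locate the threshold crossing with a linear interpolant, then restart the remainder of the step from the reset value at the interpolated spike time. The current, time constant and threshold values below are arbitrary illustrative choices; the paper itself treats networks and a fourth-order variant.

```python
# RK2 time-stepping with linear spike-time interpolation and post-spike recalibration.
def rhs(v, I=1.5, tau=0.02):
    return (-v + I) / tau                         # dv/dt for a leaky IF neuron

def rk2_step(v, dt):
    k1 = rhs(v)
    k2 = rhs(v + dt * k1)
    return v + 0.5 * dt * (k1 + k2)

def run(T=0.2, dt=1e-4, v_th=1.0, v_reset=0.0):
    v, t, spikes = 0.0, 0.0, []
    while t < T:
        v_new = rk2_step(v, dt)
        if v_new >= v_th:
            # linear interpolant for the spike time within the step
            t_spike = t + dt * (v_th - v) / (v_new - v)
            spikes.append(t_spike)
            # recalibrate: integrate the remainder of the step from the reset value
            v = rk2_step(v_reset, (t + dt) - t_spike)
        else:
            v = v_new
        t += dt
    return spikes

print(f"{len(run())} spikes")
```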

  10. Detection of neuron membranes in electron microscopy images using a serial neural network architecture.

    PubMed

    Jurrus, Elizabeth; Paiva, Antonio R C; Watanabe, Shigeki; Anderson, James R; Jones, Bryan W; Whitaker, Ross T; Jorgensen, Erik M; Marc, Robert E; Tasdizen, Tolga

    2010-12-01

    Study of nervous systems via the connectome, the map of connectivities of all neurons in that system, is a challenging problem in neuroscience. Towards this goal, neurobiologists are acquiring large electron microscopy datasets. However, the sheer volume of these datasets renders manual analysis infeasible. Hence, automated image analysis methods are required for reconstructing the connectome from these very large image collections. Segmentation of neurons in these images, an essential step of the reconstruction pipeline, is challenging because of noise, anisotropic shapes and brightness, and the presence of confounding structures. The method described in this paper uses a series of artificial neural networks (ANNs) in a framework combined with a feature vector that is composed of image intensities sampled over a stencil neighborhood. Several ANNs are applied in series allowing each ANN to use the classification context provided by the previous network to improve detection accuracy. We develop the method of serial ANNs and show that the learned context does improve detection over traditional ANNs. We also demonstrate advantages over previous membrane detection methods. The results are a significant step towards an automated system for the reconstruction of the connectome. Copyright 2010 Elsevier B.V. All rights reserved.

  11. Layer-specific optogenetic activation of pyramidal neurons causes beta–gamma entrainment of neonatal networks

    PubMed Central

    Bitzenhofer, Sebastian H; Ahlbeck, Joachim; Wolff, Amy; Wiegert, J. Simon; Gee, Christine E.; Oertner, Thomas G.; Hanganu-Opatz, Ileana L.

    2017-01-01

    Coordinated activity patterns in the developing brain may contribute to the wiring of neuronal circuits underlying future behavioural requirements. However, causal evidence for this hypothesis has been difficult to obtain owing to the absence of tools for selective manipulation of oscillations during early development. We established a protocol that combines optogenetics with electrophysiological recordings from neonatal mice in vivo to elucidate the substrate of early network oscillations in the prefrontal cortex. We show that light-induced activation of layer II/III pyramidal neurons that are transfected by in utero electroporation with a high-efficiency channelrhodopsin drives frequency-specific spiking and boosts network oscillations within the beta–gamma frequency range. By contrast, activation of layer V/VI pyramidal neurons causes nonspecific network activation. Thus, entrainment of neonatal prefrontal networks in fast rhythms relies on the activation of layer II/III pyramidal neurons. The approach used here may be useful for further interrogation of developing circuits and their behavioural readout. PMID:28216627

  12. Spatio-temporal specialization of GABAergic septo-hippocampal neurons for rhythmic network activity.

    PubMed

    Unal, Gunes; Crump, Michael G; Viney, Tim J; Éltes, Tímea; Katona, Linda; Klausberger, Thomas; Somogyi, Peter

    2018-03-03

    Medial septal GABAergic neurons of the basal forebrain innervate the hippocampus and related cortical areas, contributing to the coordination of network activity, such as theta oscillations and sharp wave-ripple events, via a preferential innervation of GABAergic interneurons. Individual medial septal neurons display diverse activity patterns, which may be related to their termination in different cortical areas and/or to the different types of innervated interneurons. To test these hypotheses, we extracellularly recorded and juxtacellularly labeled single medial septal neurons in anesthetized rats in vivo during hippocampal theta and ripple oscillations, traced their axons to distant cortical target areas, and analyzed their postsynaptic interneurons. Medial septal GABAergic neurons exhibiting different hippocampal theta phase preferences and/or sharp wave-ripple related activity terminated in restricted hippocampal regions, and selectively targeted a limited number of interneuron types, as established on the basis of molecular markers. We demonstrate the preferential innervation of bistratified cells in CA1 and of basket cells in CA3 by individual axons. One group of septal neurons was suppressed during sharp wave-ripples, maintained their firing rate across theta and non-theta network states and mainly fired along the descending phase of CA1 theta oscillations. In contrast, neurons that were active during sharp wave-ripples increased their firing significantly during "theta" compared to "non-theta" states, with most firing during the ascending phase of theta oscillations. These results demonstrate that specialized septal GABAergic neurons contribute to the coordination of network activity through parallel, target area- and cell type-selective projections to the hippocampus.

  13. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
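
    A compact sketch of the forward model described here: each hidden unit applies its own spectrotemporal filter to the recent history of the sound spectrogram, and the sigmoid-transformed hidden outputs are combined through an output nonlinearity to predict the firing rate. The filters below are random placeholders rather than fitted values, and all dimensions are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_freq, n_hist, n_time = 32, 20, 500      # spectrogram channels, history bins, time bins
    n_hidden = 4                              # the paper reports 1-7 sub-receptive fields

    spectrogram = rng.random((n_freq, n_time))           # stand-in for a natural sound
    W = rng.normal(0, 0.1, (n_hidden, n_freq, n_hist))   # hidden-unit STRFs (placeholders)
    b_hidden = np.zeros(n_hidden)
    w_out, b_out = rng.normal(0, 1.0, n_hidden), 0.0

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def nrf_response(spec):
        """Predicted firing rate at each time bin: each hidden unit filters the
        recent spectrogram history, passes the result through a sigmoid, and the
        hidden outputs are combined through a second (output) nonlinearity."""
        rates = np.zeros(spec.shape[1])
        for t in range(n_hist, spec.shape[1]):
            window = spec[:, t - n_hist:t]                 # (n_freq, n_hist) history
            h = sigmoid(np.tensordot(W, window, axes=([1, 2], [0, 1])) + b_hidden)
            rates[t] = sigmoid(w_out @ h + b_out)
        return rates

    print(nrf_response(spectrogram)[n_hist:n_hist + 5])
    ```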

  14. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647

  15. Synchronization properties of networks of electrically coupled neurons in the presence of noise and heterogeneities.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas; Hakim, Vincent

    2009-06-01

    We investigate how synchrony can be generated or induced in networks of electrically coupled integrate-and-fire neurons subject to noisy and heterogeneous inputs. Using analytical tools, we find that in a network under constant external inputs, synchrony can appear via a Hopf bifurcation from the asynchronous state to an oscillatory state. In a homogeneous network, in the oscillatory state all neurons fire in synchrony, while in a heterogeneous network synchrony is looser, many neurons skipping cycles of the oscillation. If the transmission of action potentials via the electrical synapses is effectively excitatory, the Hopf bifurcation is supercritical, while effectively inhibitory transmission due to pronounced hyperpolarization leads to a subcritical bifurcation. In the latter case, the network exhibits bistability between an asynchronous state and an oscillatory state where all the neurons fire in synchrony. Finally we show that for time-varying external inputs, electrical coupling enhances the synchronization in an asynchronous network via a resonance at the firing-rate frequency.

  16. Activity-dependent stochastic resonance in recurrent neuronal networks

    NASA Astrophysics Data System (ADS)

    Volman, Vladislav

    2009-03-01

    An important source of noise for neuronal networks is the stochastic nature of synaptic transmission. In particular, spontaneous asynchronous release of neurotransmitter can occur at a rate that is strongly dependent on the presynaptic Ca2+ concentration and hence on the rate of spike-induced Ca2+ entry. Here it is shown that this noise can lead to a new form of stochastic resonance for local circuits consisting of roughly 100 neurons - a "microcolumn" - coupled via noisy plastic synapses. Furthermore, due to the plastic coupling and activity-dependent noise component, the detection of weak stimuli will also depend on the structure of the latter. In addition, the circuit can exhibit short-term memory, by which we mean that spiking will continue to occur for a transient period following removal of the stimulus. These results can be directly tested in experiments on cultured networks.

  17. Modeling of synchronization behavior of bursting neurons at nonlinearly coupled dynamical networks.

    PubMed

    Çakir, Yüksel

    2016-01-01

    Synchronization behaviors of bursting neurons coupled through electrical and dynamic chemical synapses are investigated. The Izhikevich model is used with random and small-world networks of bursting neurons. Various currents consisting of diffusive electrical and time-delayed dynamic chemical synapses are used in the simulations to investigate the influence of synaptic currents and couplings on the synchronization behavior of bursting neurons. The effects of parameters such as time delay, inhibitory synaptic strength, and decay time on synchronization behavior are investigated. It is observed that in random networks with no delay, bursting synchrony is established with the electrical synapse alone, while single-spike synchrony is observed with hybrid coupling. In small-world networks with no delay, periodic bursting behavior with multiple spikes is observed when either only chemical or only electrical synapses exist. Single-spike and multiple-spike bursting are established with hybrid couplings. With zero time delay, the synchronization measure decreases as the decay time is increased in the random network. For synaptic delays above the active-phase period, the synchronization measure increases with increasing synaptic strength and time delay in the small-world network. In the random network, however, it increases only with increasing synaptic strength.
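
    For reference, a minimal sketch of the Izhikevich bursting-neuron update used in such simulations, reduced to two neurons coupled by a diffusive electrical (gap-junction) term; the random and small-world connectivity and the time-delayed dynamic chemical synapses of the study are omitted, and the parameters follow a standard intrinsically bursting regime rather than the paper's exact settings.

    ```python
    import numpy as np

    # Izhikevich model with intrinsically bursting parameters (a, b, c, d)
    a, b, c, d = 0.02, 0.2, -55.0, 4.0
    g_el = 0.2               # assumed electrical (gap-junction) coupling strength
    dt, T = 0.5, 2000.0      # ms

    v = np.array([-65.0, -60.0])     # slightly different initial conditions
    u = b * v
    spike_times = [[], []]
    for k in range(int(T / dt)):
        I_ext = np.array([10.0, 10.0])              # constant drive to both neurons
        I_el = g_el * (v[::-1] - v)                 # diffusive coupling: g * (v_other - v_self)
        v = v + dt * (0.04 * v**2 + 5 * v + 140 - u + I_ext + I_el)
        u = u + dt * (a * (b * v - u))
        fired = v >= 30.0                           # spike detection at +30 mV
        for i in np.where(fired)[0]:
            spike_times[i].append(k * dt)
        v[fired] = c                                # membrane reset
        u[fired] += d                               # recovery-variable update
    print("spike counts:", len(spike_times[0]), len(spike_times[1]))
    ```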

  18. Time-dependent Increase in the Network Response to the Stimulation of Neuronal Cell Cultures on Micro-electrode Arrays.

    PubMed

    Gertz, Monica L; Baker, Zachary; Jose, Sharon; Peixoto, Nathalia

    2017-05-29

    Micro-electrode arrays (MEAs) can be used to investigate drug toxicity, design paradigms for next-generation personalized medicine, and study network dynamics in neuronal cultures. In contrast with more traditional methods, such as patch-clamping, which can only record activity from a single cell, MEAs can record simultaneously from multiple sites in a network, without requiring the arduous task of placing each electrode individually. Moreover, numerous control and stimulation configurations can be easily applied within the same experimental setup, allowing for a broad range of dynamics to be explored. One of the key dynamics of interest in these in vitro studies has been the extent to which cultured networks display properties indicative of learning. Mouse neuronal cells cultured on MEAs display an increase in response following training induced by electrical stimulation. This protocol demonstrates how to culture neuronal cells on MEAs; successfully record from over 95% of the plated dishes; establish a protocol to train the networks to respond to patterns of stimulation; and sort, plot, and interpret the results from such experiments. The use of a proprietary system for stimulating and recording neuronal cultures is demonstrated. Software packages are also used to sort neuronal units. A custom-designed graphical user interface is used to visualize post-stimulus time histograms, inter-burst intervals, and burst duration, as well as to compare the cellular response to stimulation before and after a training protocol. Finally, representative results and future directions of this research effort are discussed.

  19. Realistic modeling of neurons and networks: towards brain simulation.

    PubMed

    D'Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    2013-01-01

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field.

  20. Realistic modeling of neurons and networks: towards brain simulation

    PubMed Central

    D’Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field. PMID:24139652

  1. Soft chitosan microbeads scaffold for 3D functional neuronal networks.

    PubMed

    Tedesco, Maria Teresa; Di Lisa, Donatella; Massobrio, Paolo; Colistra, Nicolò; Pesce, Mattia; Catelani, Tiziano; Dellacasa, Elena; Raiteri, Roberto; Martinoia, Sergio; Pastorino, Laura

    2018-02-01

    The availability of 3D biomimetic in vitro neuronal networks of mammalian neurons represents a pivotal step for the development of brain-on-a-chip experimental models to study neuronal (dys)functions and particularly neuronal connectivity. The use of hydrogel-based scaffolds for 3D cell cultures has been extensively studied in the last years. However, limited work on biomimetic 3D neuronal cultures has been carried out to date. In this respect, here we investigated the use of a widely popular polysaccharide, chitosan (CHI), for the fabrication of a microbead based 3D scaffold to be coupled to primary neuronal cells. CHI microbeads were characterized by optical and atomic force microscopies. The cell/scaffold interaction was deeply characterized by transmission electron microscopy and by immunocytochemistry using confocal microscopy. Finally, a preliminary electrophysiological characterization by micro-electrode arrays was carried out. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.

    PubMed

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance decision-making processes in perception and cognition have been implicated with it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a
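
    A minimal sketch, under illustrative assumptions, of how Lotka-Volterra-type stochastic differential equations with multiplicative noise can be integrated by the Euler-Maruyama method for two competing populations; the competition strength, noise amplitude, and baseline drive below are placeholders rather than the mean-field values derived in the paper, and how often the trajectory actually switches between the two attractors depends on those choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def competing_populations(T=2000.0, dt=0.01, c=1.3, sigma=0.3, eps=0.01):
        """Euler-Maruyama integration of a Lotka-Volterra-type competition model
        with multiplicative noise (illustrative stand-in for the mean-field model):
            dx = [x (1 - x - c*y) + eps] dt + sigma * x dW1
            dy = [y (1 - y - c*x) + eps] dt + sigma * y dW2
        For c > 1 the deterministic system is bistable (one population dominates);
        eps is a small baseline drive keeping the suppressed population nonzero."""
        n = int(T / dt)
        x, y = np.empty(n), np.empty(n)
        x[0], y[0] = 0.6, 0.4
        for k in range(n - 1):
            dW = rng.normal(0.0, np.sqrt(dt), 2)
            x[k + 1] = max(x[k] + (x[k] * (1 - x[k] - c * y[k]) + eps) * dt + sigma * x[k] * dW[0], 0.0)
            y[k + 1] = max(y[k] + (y[k] * (1 - y[k] - c * x[k]) + eps) * dt + sigma * y[k] * dW[1], 0.0)
        return x, y

    x, y = competing_populations()
    dominant = (x > y).astype(int)
    print("fraction of time population 1 dominates:", round(dominant.mean(), 3))
    print("switches between dominant populations:", int(np.count_nonzero(np.diff(dominant))))
    ```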

  3. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State

    PubMed Central

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance decision-making processes in perception and cognition have been implicated with it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed “winnerless competition”, which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might

  4. The immunoglobulin-like genetic predetermination of the brain: the protocadherins, blueprint of the neuronal network

    NASA Astrophysics Data System (ADS)

    Hilschmann, N.; Barnikol, H. U.; Barnikol-Watanabe, S.; Götz, H.; Kratzin, H.; Thinnes, F. P.

    2001-01-01

    The morphogenesis of the brain is governed by synaptogenesis. Synaptogenesis in turn is determined by cell adhesion molecules, which bridge the synaptic cleft and, by homophilic contact, decide which neurons are connected and which are not. Because of their enormous diversification in specificities, protocadherins (pcdhα, pcdhβ, pcdhγ), a new class of cadherins, play a decisive role. Surprisingly, the genetic control of the protocadherins is very similar to that of the immunoglobulins. There are three sets of variable (V) genes followed by a corresponding constant (C) gene. Applying the rules of the immunoglobulin genes to the protocadherin genes leads, despite this similarity, to quite different results in the central nervous system. The lymphocyte expresses one single receptor molecule specifically directed against an outside stimulus. In contrast, there are three specific recognition sites in each neuron, each expressing a different protocadherin. In this way, 4,950 different neurons arising from one stem cell form a neuronal network, in which homophilic contacts can be formed in 52 layers, permitting an enormous number of different connections and restraints between neurons. This network is one module of the central computer of the brain. Since the V-genes are generated during evolution and V-gene translocation occurs during embryogenesis, outside stimuli have no influence on this network. The network is an inborn property of the protocadherin genes. Every circuit produced, as well as learning and memory, has to be based on this genetically predetermined network. This network is so universal that it can cope with everything, even the unexpected. In this respect the neuronal network resembles the recognition sites of the immunoglobulins.

  5. Visible rodent brain-wide networks at single-neuron resolution

    PubMed Central

    Yuan, Jing; Gong, Hui; Li, Anan; Li, Xiangning; Chen, Shangbin; Zeng, Shaoqun; Luo, Qingming

    2015-01-01

    Although great progress is being made in neuroscience, fundamental questions such as cell-type classification, neural circuit tracing, and neurovascular coupling remain unsolved. Because of the structural features of neurons and neural circuits, answering these questions requires going beyond current neuroanatomical technology in order to acquire the fine morphology of neurons and vessels and to trace long-distance circuits at axonal resolution across the whole mammalian brain. Combined with fast-developing labeling techniques, emerging efficient whole-brain optical imaging technology has great potential for studying the structure and function of specific neurons and neural circuits. In this review, we summarize brain-wide optical tomography techniques, review the progress on visualizing brain-wide neuronal and vascular networks enabled by these novel techniques, and discuss prospective technical developments. PMID:26074784

  6. Activity-dependent switch of GABAergic inhibition into glutamatergic excitation in astrocyte-neuron networks.

    PubMed

    Perea, Gertrudis; Gómez, Ricardo; Mederos, Sara; Covelo, Ana; Ballesteros, Jesús J; Schlosser, Laura; Hernández-Vivanco, Alicia; Martín-Fernández, Mario; Quintana, Ruth; Rayan, Abdelrahman; Díez, Adolfo; Fuenzalida, Marco; Agarwal, Amit; Bergles, Dwight E; Bettler, Bernhard; Manahan-Vaughan, Denise; Martín, Eduardo D; Kirchhoff, Frank; Araque, Alfonso

    2016-12-24

    Interneurons are critical for proper neural network function and can activate Ca2+ signaling in astrocytes. However, the impact of interneuron-astrocyte signaling on neuronal network operation remains unknown. Using the simplest hippocampal astrocyte-neuron network, i.e., GABAergic interneuron, pyramidal neuron, single CA3-CA1 glutamatergic synapse, and astrocytes, we found that interneuron-astrocyte signaling dynamically affected excitatory neurotransmission in an activity- and time-dependent manner, and determined the sign (inhibition vs. potentiation) of the GABA-mediated effects. While synaptic inhibition was mediated by GABAA receptors, potentiation involved astrocyte GABAB receptors, astrocytic glutamate release, and presynaptic metabotropic glutamate receptors. Using conditional astrocyte-specific GABAB receptor (Gabbr1) knockout mice, we confirmed the glial source of the interneuron-induced potentiation, and demonstrated the involvement of astrocytes in hippocampal theta and gamma oscillations in vivo. Therefore, astrocytes decode interneuron activity and transform inhibitory into excitatory signals, contributing to the emergence of novel network properties resulting from the interneuron-astrocyte interplay.

  7. Modulation of network pacemaker neurons by oxygen at the anaerobic threshold.

    PubMed

    Hill, Andrew A V; Simmers, John; Meyrand, Pierre; Massabuau, Jean-Charles

    2012-07-01

    Previous in vitro and in vivo studies showed that the frequency of rhythmic pyloric network activity in the lobster is modulated directly by oxygen partial pressure (PO2). We have extended these results by (1) increasing the period of exposure to low PO2 and by (2) testing the sensitivity of the pyloric network to changes in PO2 that are within the narrow range normally experienced by the lobster (1 to 6 kPa). We found that the pyloric network rhythm was indeed altered by changes in PO2 within the range typically observed in vivo. Furthermore, a previous study showed that the lateral pyloric constrictor motor neuron (LP) contributes to the O2 sensitivity of the pyloric network. Here, we expanded on this idea by testing the hypothesis that pyloric pacemaker neurons also contribute to pyloric O2 sensitivity. A 2-h exposure to 1 kPa PO2, which was twice the period used previously, decreased the frequency of an isolated group of pacemaker neurons, suggesting that changes in the rhythmogenic properties of these cells contribute to pyloric O2 sensitivity during long-term near-anaerobic (anaerobic threshold, 0.7-1.2 kPa) conditions.

  8. A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties.

    PubMed

    Burkitt, A N

    2006-08-01

    The integrate-and-fire neuron model describes the state of a neuron in terms of its membrane potential, which is determined by the synaptic inputs and the injected current that the neuron receives. When the membrane potential reaches a threshold, an action potential (spike) is generated. This review considers the model in which the synaptic input varies periodically and is described by an inhomogeneous Poisson process, with both current and conductance synapses. The focus is on the mathematical methods that allow the output spike distribution to be analyzed, including first passage time methods and the Fokker-Planck equation. Recent interest in the response of neurons to periodic input has in part arisen from the study of stochastic resonance, which is the noise-induced enhancement of the signal-to-noise ratio. Networks of integrate-and-fire neurons behave in a wide variety of ways and have been used to model a variety of neural, physiological, and psychological phenomena. The properties of the integrate-and-fire neuron model with synaptic input described as a temporally homogeneous Poisson process are reviewed in an accompanying paper (Burkitt in Biol Cybern, 2006).
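
    As a minimal illustration of the model class reviewed here, the sketch below simulates a current-based leaky integrate-and-fire neuron driven by an inhomogeneous (sinusoidally modulated) Poisson input and records its output spikes; the membrane parameters, input rate, and jump size are illustrative assumptions, not values taken from the review.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # current-based LIF: tau_m dV/dt = -(V - V_rest) + synaptic input
    tau_m, V_rest, V_th, V_reset = 20e-3, -70e-3, -50e-3, -70e-3   # s, V
    J = 0.2e-3                 # voltage jump per input spike (illustrative)
    dt, T = 0.1e-3, 5.0        # s

    def input_rate(t):
        """Inhomogeneous Poisson rate (Hz), periodically modulated at 10 Hz."""
        return 5000.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 10.0 * t))

    V, out_spikes = V_rest, []
    for k in range(int(T / dt)):
        t = k * dt
        n_in = rng.poisson(input_rate(t) * dt)        # input spikes in this bin
        V += dt * (-(V - V_rest)) / tau_m + J * n_in  # leak plus synaptic jumps
        if V >= V_th:                                 # threshold crossing: spike and reset
            out_spikes.append(t)
            V = V_reset
    print("output firing rate (Hz):", len(out_spikes) / T)
    ```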

  9. Modification of a neuronal network direction using stepwise photo-thermal etching of an agarose architecture.

    PubMed

    Suzuki, Ikurou; Sugio, Yoshihiro; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Yasuda, Kenji

    2004-07-01

    Control over spatial distribution of individual neurons and the pattern of neural network provides an important tool for studying information processing pathways during neural network formation. Moreover, the knowledge of the direction of synaptic connections between cells in each neural network can provide detailed information on the relationship between the forward and feedback signaling. We have developed a method for topographical control of the direction of synaptic connections within a living neuronal network using a new type of individual-cell-based on-chip cell-cultivation system with an agarose microchamber array (AMCA). The advantages of this system include the possibility to control positions and number of cultured cells as well as flexible control of the direction of elongation of axons through stepwise melting of narrow grooves. Such micrometer-order microchannels are obtained by photo-thermal etching of agarose where a portion of the gel is melted with a 1064-nm infrared laser beam. Using this system, we created neural network from individual Rat hippocampal cells. We were able to control elongation of individual axons during cultivation (from cells contained within the AMCA) by non-destructive stepwise photo-thermal etching. We have demonstrated the potential of our on-chip AMCA cell cultivation system for the controlled development of individual cell-based neural networks.

  10. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn

    2014-09-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that stochastic resonance occurs irrespective of whether the synaptic connectivity is fixed or adaptive. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for fixed coupling when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of weak external signals in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of weak periodic signals peaks.
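
    A minimal sketch of a pair-based additive STDP rule of the kind referred to here (exponential windows, depression slightly dominant), applied to a single synaptic weight given lists of pre- and postsynaptic spike times; the amplitudes and time constants are illustrative, and the Newman-Watts network construction and signal-detection analysis are not reproduced.

    ```python
    import numpy as np

    # pair-based additive STDP: potentiation for pre-before-post, depression otherwise
    A_plus, A_minus = 0.01, 0.012        # depression-dominant amplitudes (illustrative)
    tau_plus, tau_minus = 20.0, 20.0     # ms, temporal windows (illustrative)
    w_min, w_max = 0.0, 1.0

    def stdp_update(w, pre_times, post_times):
        """Apply the all-pairs additive STDP rule to one synaptic weight."""
        for t_pre in pre_times:
            for t_post in post_times:
                dt = t_post - t_pre
                if dt > 0:
                    w += A_plus * np.exp(-dt / tau_plus)     # pre leads post: potentiate
                elif dt < 0:
                    w -= A_minus * np.exp(dt / tau_minus)    # post leads pre: depress
        return float(np.clip(w, w_min, w_max))

    # correlated (pre slightly leads post) vs. anti-correlated spike trains
    pre = np.arange(0.0, 500.0, 25.0)
    print(stdp_update(0.5, pre, pre + 5.0))   # weight grows
    print(stdp_update(0.5, pre, pre - 5.0))   # weight shrinks
    ```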

  11. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates.

    PubMed

    Brunel, N; Hakim, V

    1999-10-01

    We study analytically the dynamics of a network of sparsely connected inhibitory integrate-and-fire neurons in a regime where individual neurons emit spikes irregularly and at a low rate. In the limit when the number of neurons tends to infinity, the network exhibits a sharp transition between a stationary and an oscillatory global activity regime where neurons are weakly synchronized. The activity becomes oscillatory when the inhibitory feedback is strong enough. The period of the global oscillation is found to be mainly controlled by synaptic times but depends also on the characteristics of the external input. In large but finite networks, the analysis shows that global oscillations of finite coherence time generically exist both above and below the critical inhibition threshold. Their characteristics are determined as functions of systems parameters in these two different regions. The results are found to be in good agreement with numerical simulations.

  12. Linking Neurons to Network Function and Behavior by Two-Photon Holographic Optogenetics and Volumetric Imaging.

    PubMed

    Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig

    2017-05-17

    We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Bounds on the number of hidden neurons in three-layer binary neural networks.

    PubMed

    Zhang, Zhaozhi; Ma, Xiaomin; Yang, Yixian

    2003-09-01

    This paper investigates an important problem concerning the complexity of three-layer binary neural networks (BNNs) with one hidden layer. The neuron in the studied BNNs employs a hard limiter activation function with only integer weights and an integer threshold. The studies are focused on implementations of arbitrary Boolean functions which map from {0, 1}^n into {0, 1}. A deterministic algorithm called set covering algorithm (SCA) is proposed for the construction of a three-layer BNN to implement an arbitrary Boolean function. The SCA is based on a unit sphere covering (USC) of the Hamming space (HS) which is chosen in advance. It is proved that for the implementation of an arbitrary Boolean function of n variables (n ≥ 3) by using SCA, [3L/2] hidden neurons are necessary and sufficient, where L is the number of unit spheres contained in the chosen USC of the n-dimensional HS. It is shown that by using SCA, the number of hidden neurons required is much less than that by using a two-parallel hyperplane method. In order to indicate the potential ability of three-layer BNNs, a lower bound on the required number of hidden neurons which is derived by using the method of estimating the Vapnik-Chervonenkis (VC) dimension is also given.

  14. Identification of neuron-related genes for cell therapy of neurological disorders by network analysis.

    PubMed

    Su, Li-Ning; Song, Xiao-Qing; Wei, Hui-Ping; Yin, Hai-Feng

    Bone mesenchymal stem cells (BMSCs) differentiated into neurons have been widely proposed for use in cell therapy of many neurological disorders. It is therefore important to understand the molecular mechanisms underlying this differentiation. We screened differentially expressed genes between immature neural tissues and untreated BMSCs to identify the genes responsible for neuronal differentiation from BMSCs. GSE68243 gene microarray data of rat BMSCs and GSE18860 gene microarray data of rat neurons were received from the Gene Expression Omnibus database. Transcriptome Analysis Console software showed that 1248 genes were up-regulated and 1273 were down-regulated in neurons compared with BMSCs. Gene Ontology functional enrichment, protein-protein interaction networks, functional modules, and hub genes were analyzed using DAVID, STRING 10, BiNGO tool, and Network Analyzer software, revealing that nine hub genes, Nrcam, Sema3a, Mapk8, Dlg4, Slit1, Creb1, Ntrk2, Cntn2, and Pax6, may play a pivotal role in neuronal differentiation from BMSCs. Seven genes, Dcx, Nrcam, sema3a, Cntn2, Slit1, Ephb1, and Pax6, were shown to be hub nodes within the neuronal development network, while six genes, Fgf2, Tgfβ1, Vegfa, Serpine1, Il6, and Stat1, appeared to play an important role in suppressing neuronal differentiation. However, additional studies are required to confirm these results.

  15. Synaptic multistability and network synchronization induced by the neuron-glial interaction in the brain

    NASA Astrophysics Data System (ADS)

    Lazarevich, I. A.; Stasenko, S. V.; Kazantsev, V. B.

    2017-02-01

    The dynamics of a synaptic contact between neurons that forms a feedback loop through the interaction with glial cells of the brain surrounding the neurons is studied. It is shown that, depending on the character of the neuron-glial interaction, the dynamics of the signal transmission frequency in the synaptic contact can be bistable with two stable steady states or spiking with the regular generation of spikes with various amplitudes and durations. It is found that such a synaptic contact at the network level is responsible for the appearance of quasisynchronous network bursts.

  16. How noise affects the synchronization properties of recurrent networks of inhibitory neurons.

    PubMed

    Brunel, Nicolas; Hansel, David

    2006-05-01

    GABAergic interneurons play a major role in the emergence of various types of synchronous oscillatory patterns of activity in the central nervous system. Motivated by these experimental facts, modeling studies have investigated mechanisms for the emergence of coherent activity in networks of inhibitory neurons. However, most of these studies have focused either on the case in which the noise in the network is absent or weak, or on the opposite situation in which it is strong. Hence, a full picture of how noise affects the dynamics of such systems is still lacking. The aim of this letter is to provide a more comprehensive understanding of the mechanisms by which the asynchronous states in large, fully connected networks of inhibitory neurons are destabilized as a function of the noise level. Three types of single neuron models are considered: the leaky integrate-and-fire (LIF) model, the exponential integrate-and-fire (EIF) model, and conductance-based models involving sodium and potassium Hodgkin-Huxley (HH) currents. We show that in all models, the instabilities of the asynchronous state can be classified into two classes. The first one consists of clustering instabilities, which exist in a restricted range of noise. These instabilities lead to synchronous patterns in which the population of neurons is broken into clusters of synchronously firing neurons. The irregularity of the firing patterns of the neurons is weak. The second class of instabilities, termed oscillatory firing rate instabilities, exists at any value of noise. They lead to cluster states at low noise. As the noise is increased, the instability occurs at larger coupling, and the pattern of firing that emerges becomes more irregular. In the regime of high noise and strong coupling, these instabilities lead to stochastic oscillations in which neurons fire in an approximately Poisson way with a common instantaneous probability of firing that oscillates in time.

  17. Chimera states in a multilayer network of coupled and uncoupled neurons

    NASA Astrophysics Data System (ADS)

    Majhi, Soumen; Perc, Matjaž; Ghosh, Dibakar

    2017-07-01

    We study the emergence of chimera states in a multilayer neuronal network, where one layer is composed of coupled and the other layer of uncoupled neurons. Through the multilayer structure, the layer with coupled neurons acts as the medium by means of which neurons in the uncoupled layer share information in spite of the absence of physical connections among them. Neurons in the coupled layer are connected with electrical synapses, while across the two layers, neurons are connected through chemical synapses. In both layers, the dynamics of each neuron is described by the Hindmarsh-Rose square wave bursting dynamics. We show that the presence of two different types of connecting synapses within and between the two layers, together with the multilayer network structure, plays a key role in the emergence of between-layer synchronous chimera states and patterns of synchronous clusters. In particular, we find that these chimera states can emerge in the coupled layer regardless of the range of electrical synapses. Even in all-to-all and nearest-neighbor coupling within the coupled layer, we observe qualitatively identical between-layer chimera states. Moreover, we show that the role of information transmission delay between the two layers must not be neglected, and we obtain precise parameter bounds at which chimera states can be observed. The expansion of the chimera region and annihilation of cluster and fully coherent states in the parameter plane for increasing values of inter-layer chemical synaptic time delay are illustrated using effective range measurements. These results are discussed in the light of neuronal evolution, where the coexistence of coherent and incoherent dynamics during the developmental stage is particularly likely.

  18. Self-organized criticality occurs in non-conservative neuronal networks during Up states

    PubMed Central

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-01-01

    During sleep, under anesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between Up and Down states (UDS) characterized by distinct membrane potentials and spike rates [1, 2, 3, 4, 5]. Another phenomenon observed in preparations similar to those that exhibit UDS, such as anesthetized rats [6], brain slices and cultures devoid of sensory input [7], as well as awake monkey cortex [8] is self-organized criticality (SOC). This is characterized by activity “avalanches” whose size distributions obey a power law with critical exponent of about −3/2 and branching parameter near unity. Recent work has demonstrated SOC in conservative neuronal network models [9, 10]; however, critical behavior breaks down when biologically realistic non-conservatism is introduced [9]. Here we report robust SOC behavior in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels corresponding to Up and Down states, that the networks switch spontaneously between them, and that Up states are critical and Down states are subcritical. PMID:21804861
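
    A minimal sketch of the standard avalanche analysis underlying such claims: bin the population spike count, define an avalanche as a maximal run of non-empty bins bounded by empty bins, and estimate the power-law exponent of the size distribution with a discrete maximum-likelihood formula. The spike counts below are synthetic placeholders, not output of the model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def avalanche_sizes(binned_counts):
        """Avalanches = maximal runs of consecutive non-empty time bins;
        an avalanche's size is the total number of spikes in the run."""
        sizes, current = [], 0
        for count in binned_counts:
            if count > 0:
                current += count
            elif current > 0:
                sizes.append(current)
                current = 0
        if current > 0:
            sizes.append(current)
        return np.array(sizes)

    def power_law_exponent(sizes, s_min=1):
        """Discrete maximum-likelihood estimate (Clauset-style approximation)
        of alpha in P(s) ~ s^(-alpha), for sizes s >= s_min."""
        s = sizes[sizes >= s_min].astype(float)
        return 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))

    # placeholder data: heavy-tailed synthetic spike counts, not a real recording
    counts = rng.poisson(rng.pareto(1.5, size=20000))
    sizes = avalanche_sizes(counts)
    print("avalanches:", len(sizes), "estimated exponent:", round(power_law_exponent(sizes), 2))
    ```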

  19. Three-dimensional spatial modeling of spines along dendritic networks in human cortical pyramidal neurons

    PubMed Central

    Larrañaga, Pedro; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Bielza, Concha

    2017-01-01

    We modeled spine distribution along the dendritic networks of pyramidal neurons in both basal and apical dendrites. To do this, we applied network spatial analysis because spines can only lie on the dendritic shaft. We expanded the existing 2D computational techniques for spatial analysis along networks to perform a 3D network spatial analysis. We analyzed five detailed reconstructions of adult human pyramidal neurons of the temporal cortex with a total of more than 32,000 spines. We confirmed that there is a spatial variation in spine density that is dependent on the distance to the cell body in all dendrites. Considering the dendritic arborizations of each pyramidal cell as a group of instances of the same observation (the neuron), we used replicated point patterns together with network spatial analysis for the first time to search for significant differences in the spine distribution of basal dendrites between different cells and between all the basal and apical dendrites. To do this, we used a recent variant of Ripley’s K function defined to work along networks. The results showed that there were no significant differences in spine distribution along basal arbors of the same neuron and along basal arbors of different pyramidal neurons. This suggests that dendritic spine distribution in basal dendritic arbors adheres to common rules. However, we did find significant differences in spine distribution along basal versus apical networks. Therefore, not only do apical and basal dendritic arborizations have distinct morphologies but they also obey different rules of spine distribution. Specifically, the results suggested that spines are more clustered along apical than in basal dendrites. Collectively, the results further highlighted that synaptic input information processing is different between these two dendritic domains. PMID:28662210
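
    A simplified sketch of the kind of network K-function analysis described here, restricted to points on a single unbranched dendritic segment (so path distances reduce to one-dimensional distances) and without edge correction or replicated point patterns; the spine positions below are synthetic placeholders, not the reconstructed human data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def ripley_k_1d(positions, length, radii):
        """Naive K function on a single segment of given length:
        K(r) = (length / (n * (n - 1))) * #{ordered pairs closer than r},
        with distances measured along the segment and no edge correction."""
        x = np.sort(np.asarray(positions))
        n = len(x)
        d = np.abs(x[:, None] - x[None, :])          # pairwise path distances
        k = []
        for r in radii:
            pairs = np.count_nonzero(d <= r) - n     # drop the zero self-distances
            k.append(length * pairs / (n * (n - 1)))
        return np.array(k)

    length, radii = 100.0, np.linspace(1.0, 10.0, 10)
    uniform = rng.uniform(0, length, 300)                                    # CSR-like positions
    clustered = (rng.choice(10, 300) * 10.0 + rng.normal(0, 0.5, 300)) % length
    # Under complete spatial randomness K(r) is roughly 2r; clustering pushes K(r) above that.
    print(np.round(ripley_k_1d(uniform, length, radii) / (2 * radii), 2))
    print(np.round(ripley_k_1d(clustered, length, radii) / (2 * radii), 2))
    ```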

  20. Three-dimensional spatial modeling of spines along dendritic networks in human cortical pyramidal neurons.

    PubMed

    Anton-Sanchez, Laura; Larrañaga, Pedro; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Bielza, Concha

    2017-01-01

    We modeled spine distribution along the dendritic networks of pyramidal neurons in both basal and apical dendrites. To do this, we applied network spatial analysis because spines can only lie on the dendritic shaft. We expanded the existing 2D computational techniques for spatial analysis along networks to perform a 3D network spatial analysis. We analyzed five detailed reconstructions of adult human pyramidal neurons of the temporal cortex with a total of more than 32,000 spines. We confirmed that there is a spatial variation in spine density that is dependent on the distance to the cell body in all dendrites. Considering the dendritic arborizations of each pyramidal cell as a group of instances of the same observation (the neuron), we used replicated point patterns together with network spatial analysis for the first time to search for significant differences in the spine distribution of basal dendrites between different cells and between all the basal and apical dendrites. To do this, we used a recent variant of Ripley's K function defined to work along networks. The results showed that there were no significant differences in spine distribution along basal arbors of the same neuron and along basal arbors of different pyramidal neurons. This suggests that dendritic spine distribution in basal dendritic arbors adheres to common rules. However, we did find significant differences in spine distribution along basal versus apical networks. Therefore, not only do apical and basal dendritic arborizations have distinct morphologies but they also obey different rules of spine distribution. Specifically, the results suggested that spines are more clustered along apical than in basal dendrites. Collectively, the results further highlighted that synaptic input information processing is different between these two dendritic domains.

  1. Synapto-protective drugs evaluation in reconstructed neuronal network.

    PubMed

    Deleglise, Bérangère; Lassus, Benjamin; Soubeyre, Vaneyssa; Alleaume-Butaux, Aurélie; Hjorth, Johannes J; Vignes, Maéva; Schneider, Benoit; Brugg, Bernard; Viovy, Jean-Louis; Peyrin, Jean-Michel

    2013-01-01

    Chronic neurodegenerative syndromes such as Alzheimer's and Parkinson's diseases, or acute syndromes such as ischemic stroke or traumatic brain injuries are characterized by early synaptic collapse which precedes axonal and neuronal cell body degeneration and promotes early cognitive impairment in patients. Until now, neuroprotective strategies have failed to impede the progression of neurodegenerative syndromes. Drugs preventing the loss of cell body do not prevent the cognitive decline, probably because they lack synapto-protective effects. The absence of physiologically realistic neuronal network models which can be easily handled has hindered the development of synapto-protective drugs suitable for therapies. Here we describe a new microfluidic platform which makes it possible to study the consequences of axonal trauma of reconstructed oriented mouse neuronal networks. Each neuronal population and sub-compartment can be chemically addressed individually. The somatic, mid axon, presynaptic and postsynaptic effects of local pathological stresses or putative protective molecules can thus be evaluated with the help of this versatile "brain on chip" platform. We show that presynaptic loss is the earliest event observed following axotomy of cortical fibers, before any sign of axonal fragmentation or post-synaptic spine alteration. This platform can be used to screen and evaluate the synapto-protective potential of several drugs. For instance, NAD⁺ and the Rho-kinase inhibitor Y27632 can efficiently prevent synaptic disconnection, whereas the broad-spectrum caspase inhibitor zVAD-fmk and the stilbenoid resveratrol do not prevent presynaptic degeneration. Hence, this platform is a promising tool for fundamental research in the field of developmental and neurodegenerative neurosciences, and also offers the opportunity to set up pharmacological screening of axon-protective and synapto-protective drugs.

  2. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    PubMed

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3 mm × 3 mm microchip consumes 1.29 mW of power, making it promising for applications including neuromorphic modeling and neural prostheses.

  3. High-Degree Neurons Feed Cortical Computations

    PubMed Central

    Timme, Nicholas M.; Ito, Shinya; Shimono, Masanori; Yeh, Fang-Chin; Litke, Alan M.; Beggs, John M.

    2016-01-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron
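
    A minimal sketch of a plug-in transfer entropy estimate between two binary spike trains with one-step histories, the basic directed-interaction measure mentioned here; the study itself uses longer histories, dense array recordings, significance testing, and the partial information decomposition, none of which is reproduced, and the spike trains below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def transfer_entropy(x, y):
        """Plug-in estimate of TE(X -> Y) with one-step histories:
        TE = sum p(y_t+1, y_t, x_t) * log2[ p(y_t+1 | y_t, x_t) / p(y_t+1 | y_t) ]."""
        y_next, y_past, x_past = y[1:], y[:-1], x[:-1]
        te = 0.0
        for a in (0, 1):            # y_t+1
            for b in (0, 1):        # y_t
                for c in (0, 1):    # x_t
                    p_abc = np.mean((y_next == a) & (y_past == b) & (x_past == c))
                    p_bc = np.mean((y_past == b) & (x_past == c))
                    p_ab = np.mean((y_next == a) & (y_past == b))
                    p_b = np.mean(y_past == b)
                    if p_abc > 0 and p_bc > 0 and p_ab > 0 and p_b > 0:
                        te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
        return te

    # X drives Y with a one-step delay (plus noise); the reverse direction carries no signal
    x = (rng.random(20000) < 0.2).astype(int)
    noise = (rng.random(20000) < 0.05).astype(int)
    y = np.roll(x, 1) | noise
    print("TE(X->Y) =", round(transfer_entropy(x, y), 3), " TE(Y->X) =", round(transfer_entropy(y, x), 3))
    ```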

  4. Neuronal current detection with low-field magnetic resonance: simulations and methods.

    PubMed

    Cassará, Antonino Mario; Maraviglia, Bruno; Hartwig, Stefan; Trahms, Lutz; Burghoff, Martin

    2009-10-01

    The noninvasive detection of neuronal currents in active brain networks [or direct neuronal imaging (DNI)] by means of nuclear magnetic resonance (NMR) remains a scientific challenge. Many different attempts using NMR scanners with magnetic fields >1 T (high-field NMR) have been made in the past years to detect phase shifts or magnitude changes in the NMR signals. However, the many physiological (i.e., the contemporarily BOLD effect, the weakness of the neuronal-induced magnetic field, etc.) and technical limitations (e.g., the spatial resolution) in observing the weak signals have led to some contradicting results. In contrast, only a few attempts have been made using low-field NMR techniques. As such, this paper was aimed at reviewing two recent developments in this front. The detection schemes discussed in this manuscript, the resonant mechanism (RM) and the DC method, are specific to NMR instrumentations with main fields below the earth magnetic field (50 microT), while some even below a few microteslas (ULF-NMR). However, the experimental validation for both techniques, with differentiating sensitivity to the various neuronal activities at specific temporal and spatial resolutions, is still in progress and requires carefully designed magnetic field sensor technology. Additional care should be taken to ensure a stringent magnetic shield from the ambient magnetic field fluctuations. In this review, we discuss the characteristics and prospect of these two methods in detecting neuronal currents, along with the technical requirements on the instrumentation.

  5. Discontinuous Galerkin finite element method for solving population density functions of cortical pyramidal and thalamic neuronal populations.

    PubMed

    Huang, Chih-Hsu; Lin, Chou-Ching K; Ju, Ming-Shaung

    2015-02-01

    Compared with the Monte Carlo method, the population density method is efficient for modeling the collective dynamics of neuronal populations in the human brain. In this method, a population density function describes the probabilistic distribution of states of all neurons in the population, and it is governed by a hyperbolic partial differential equation. In the past, the problem was mainly solved using the finite difference method. In a previous study, a continuous Galerkin finite element method was found to perform better than the finite difference method for solving the hyperbolic partial differential equation; however, the population density function often has discontinuities, and both methods suffer from numerical stability problems. The goal of this study is to improve the numerical stability of the solution using a discontinuous Galerkin finite element method. To test the performance of the new approach, the interaction of a population of cortical pyramidal neurons and a population of thalamic neurons was simulated. The numerical results showed good agreement between the discontinuous Galerkin finite element and Monte Carlo methods. The convergence and accuracy of the solutions are excellent. The numerical stability problem could be resolved using the discontinuous Galerkin finite element method, which has the total-variation-diminishing property. This efficient approach will be employed to simulate the electroencephalogram and the dynamics of the thalamocortical network, which involves three populations, namely, thalamic reticular neurons, thalamocortical neurons and cortical pyramidal neurons. Copyright © 2014 Elsevier Ltd. All rights reserved.
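
    The total-variation-diminishing idea behind such schemes can be illustrated on a much simpler problem. The sketch below advances a 1D advection equation, the prototype hyperbolic equation in population density models, using a piecewise-constant (lowest-order) discontinuous Galerkin discretization, which reduces to an upwind flux. The equation, grid, and CFL number are assumptions and do not reproduce the paper's solver.

      import numpy as np

      # Lowest-order (P0) discontinuous Galerkin step with an upwind numerical flux for
      # d(rho)/dt + d(v*rho)/dx = 0 on a periodic domain; this scheme is TVD for CFL <= 1.
      def advect_upwind(rho, v, dx, dt):
          flux = v * rho                            # upwind flux for constant v > 0
          return rho - (dt / dx) * (flux - np.roll(flux, 1))

      nx, v = 200, 1.0
      dx = 1.0 / nx
      dt = 0.5 * dx / v                             # CFL number 0.5
      x = (np.arange(nx) + 0.5) * dx
      rho = np.exp(-((x - 0.3) ** 2) / 0.002)       # initial density bump
      rho /= rho.sum() * dx                         # normalize to unit total probability

      for _ in range(200):
          rho = advect_upwind(rho, v, dx, dt)

      print("total probability:", rho.sum() * dx)   # conserved up to round-off
      print("positivity preserved:", rho.min() >= 0.0)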

  6. Molecular changes in brain aging and Alzheimer’s disease are mirrored in experimentally silenced cortical neuron networks

    PubMed Central

    Gleichmann, Marc; Zhang, Yongqing; Wood, William H.; Becker, Kevin G.; Mughal, Mohamed R.; Pazin, Michael J.; van Praag, Henriette; Kobilo, Tali; Zonderman, Alan B.; Troncoso, Juan C.; Markesbery, William R.; Mattson, Mark P.

    2010-01-01

    Activity-dependent modulation of neuronal gene expression promotes neuronal survival and plasticity, and neuronal network activity is perturbed in aging and Alzheimer’s disease (AD). Here we show that cerebral cortical neurons respond to chronic suppression of excitability by downregulating the expression of genes and their encoded proteins involved in inhibitory transmission (GABAergic and somatostatin) and Ca2+ signaling; alterations in pathways involved in lipid metabolism and energy management are also features of silenced neuronal networks. A molecular fingerprint strikingly similar to that of diminished network activity occurs in the human brain during aging and in AD, and opposite changes occur in response to activation of N-methyl-D-aspartate (NMDA) and brain-derived neurotrophic factor (BDNF) receptors in cultured cortical neurons and in mice in response to an enriched environment or electroconvulsive shock. Our findings suggest that reduced inhibitory neurotransmission during aging and in AD may be the result of compensatory responses that, paradoxically, render the neurons vulnerable to Ca2+-mediated degeneration. PMID:20947216

  7. Emergence of small-world structure in networks of spiking neurons through STDP plasticity.

    PubMed

    Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas

    2011-01-01

    In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
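
    A minimal sketch of the graph-theoretic step described above, assuming an undirected graph, a recent networkx release, and an arbitrary weight threshold (the study itself thresholds a binary directed matrix at every simulation step):

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(1)
      n = 100
      weights = rng.random((n, n)) * (rng.random((n, n)) < 0.1)   # sparse random weight matrix
      np.fill_diagonal(weights, 0.0)

      threshold = 0.5                                             # assumed threshold on weights
      adjacency = np.maximum(weights, weights.T) > threshold      # symmetrized, undirected
      G = nx.from_numpy_array(adjacency.astype(int))
      G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

      C = nx.average_clustering(G)
      L = nx.average_shortest_path_length(G)

      # reference random graph with the same number of nodes and edges
      G_rand = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
      G_rand = G_rand.subgraph(max(nx.connected_components(G_rand), key=len)).copy()
      C_rand = nx.average_clustering(G_rand)
      L_rand = nx.average_shortest_path_length(G_rand)

      sigma = (C / C_rand) / (L / L_rand)           # small-world index; sigma > 1 suggests small-world
      print(f"C = {C:.3f}, L = {L:.2f}, small-world index = {sigma:.2f}")

    For the purely random matrix used here the index stays near 1, as expected; the STDP-shaped weight matrices of the study would be analyzed in exactly the same way.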

  8. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    PubMed

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
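
    A minimal pair-based sketch of the mirrored-STDP idea described above, in which the feedback rule is the temporally reversed feedforward window so that the combined change is symmetric in the spike-time difference. The amplitudes and time constant are assumptions, and the authors' full network implementation is not reproduced.

      import numpy as np

      # Pair-based STDP window: weight change for one spike pair with dt = t_post - t_pre.
      A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0      # assumed amplitudes (a.u.) and time constant (ms)

      def stdp_ff(dt):
          """Feedforward rule: potentiation when the presynaptic spike precedes the postsynaptic one."""
          return A_PLUS * np.exp(-dt / TAU) if dt > 0 else -A_MINUS * np.exp(dt / TAU)

      def stdp_fb(dt):
          """Feedback rule: the temporally reversed window (mirrored STDP)."""
          return stdp_ff(-dt)

      for dt in (-10.0, -2.0, 2.0, 10.0):
          combined = stdp_ff(dt) + stdp_fb(dt)      # symmetric (even) function of dt
          print(f"dt = {dt:+5.1f} ms  ff: {stdp_ff(dt):+.5f}  fb: {stdp_fb(dt):+.5f}  sum: {combined:+.5f}")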

  9. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons

    PubMed Central

    Burbank, Kendra S.

    2015-01-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field’s Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks. PMID:26633645

  10. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    PubMed

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which the synchronous spiking activity starts propagating through the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. One can therefore conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations in the spatial density of neurons, arising from the random homogeneous distribution typical of in vitro experiments, do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  11. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves

    NASA Astrophysics Data System (ADS)

    Paraskevov, A. V.; Zendrikov, D. K.

    2017-04-01

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which the synchronous spiking activity starts propagating through the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. One can therefore conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations in the spatial density of neurons, arising from the random homogeneous distribution typical of in vitro experiments, do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
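
    A minimal sketch of the distance-dependent wiring rule underlying both records above, generating a directed adjacency matrix with connection probability p(r) = p0 * exp(-r / lambda) for neurons placed homogeneously at random in a unit square. The values of p0, lambda, and the network size are assumptions, not the parameters of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n_neurons = 500
      positions = rng.random((n_neurons, 2))          # random homogeneous placement in a unit square

      p0, lam = 0.5, 0.1                              # assumed peak probability and length constant
      diff = positions[:, None, :] - positions[None, :, :]
      dist = np.sqrt((diff ** 2).sum(-1))             # pairwise distances

      p_connect = p0 * np.exp(-dist / lam)            # exponential decay with distance
      np.fill_diagonal(p_connect, 0.0)                # no self-connections
      adjacency = rng.random((n_neurons, n_neurons)) < p_connect   # directed adjacency matrix

      print("mean out-degree:", adjacency.sum(1).mean())
      print("mean connection distance:", dist[adjacency].mean())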

  12. Modeling spike-wave discharges by a complex network of neuronal oscillators.

    PubMed

    Medvedeva, Tatiana M; Sysoeva, Marina V; van Luijtelaar, Gilles; Sysoev, Ilya V

    2018-02-01

    The organization of neural networks and the mechanisms that generate the spike-wave discharges (SWDs) highly stereotypical of absence epilepsy are heavily debated. Here we describe a model that can reproduce both the characteristics of SWDs and the dynamics of coupling between brain regions, relying mainly on the properties of hierarchically organized networks of a large number of neuronal oscillators. We used a two-level mesoscale model. The first level consists of three structures: the nervus trigeminus serving as an input, the thalamus and the somatosensory cortex; the second level consists of groups of neighboring neurons belonging to one of the three modeled structures. The model reproduces the main features of the transition from normal to epileptiform activity and its spontaneous termination: an increase in the oscillation amplitude, the emergence of the main frequency and its higher harmonics, and the ability to generate trains of seizures. The model was stable with respect to variations in the structure of couplings and to scaling. Analysis of the interactions between model structures from their time series using the Granger causality method showed that the model reproduced the preictal coupling increase detected previously in experimental data. SWDs can be generated by changes in network organization. It is proposed that a specific pathological architecture of couplings in the brain is necessary to allow the transition from normal to epileptiform activity, in addition to the previously modeled and reported factors involving complex intrinsic and synaptic mechanisms. Copyright © 2017 Elsevier Ltd. All rights reserved.
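
    A minimal sketch of a pairwise Granger causality test of the kind referred to above, applied to two simulated autoregressive series in which the second drives the first; it uses the linear test in statsmodels, whereas the paper's analysis of coupling between model structures may differ in estimator and preprocessing.

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(3)
      n = 2000
      x = np.zeros(n)
      y = np.zeros(n)
      for t in range(1, n):
          x[t] = 0.6 * x[t - 1] + rng.normal()
          y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # y is driven by past x

      # statsmodels tests whether the second column Granger-causes the first column
      data = np.column_stack([y, x])                  # question: does x Granger-cause y?
      results = grangercausalitytests(data, maxlag=2)
      p_value = results[1][0]["ssr_ftest"][1]         # p-value of the F-test at lag 1
      print("p-value (x -> y, lag 1):", p_value)      # expected to be very small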

  13. Activity-dependent switch of GABAergic inhibition into glutamatergic excitation in astrocyte-neuron networks

    PubMed Central

    Perea, Gertrudis; Gómez, Ricardo; Mederos, Sara; Covelo, Ana; Ballesteros, Jesús J; Schlosser, Laura; Hernández-Vivanco, Alicia; Martín-Fernández, Mario; Quintana, Ruth; Rayan, Abdelrahman; Díez, Adolfo; Fuenzalida, Marco; Agarwal, Amit; Bergles, Dwight E; Bettler, Bernhard; Manahan-Vaughan, Denise; Martín, Eduardo D; Kirchhoff, Frank; Araque, Alfonso

    2016-01-01

    Interneurons are critical for proper neural network function and can activate Ca2+ signaling in astrocytes. However, the impact of interneuron-astrocyte signaling on neuronal network operation remains unknown. Using the simplest hippocampal astrocyte-neuron network, i.e., a GABAergic interneuron, a pyramidal neuron, a single CA3-CA1 glutamatergic synapse, and astrocytes, we found that interneuron-astrocyte signaling dynamically affected excitatory neurotransmission in an activity- and time-dependent manner, and determined the sign (inhibition vs potentiation) of the GABA-mediated effects. While synaptic inhibition was mediated by GABAA receptors, potentiation involved astrocyte GABAB receptors, astrocytic glutamate release, and presynaptic metabotropic glutamate receptors. Using conditional astrocyte-specific GABAB receptor (Gabbr1) knockout mice, we confirmed the glial source of the interneuron-induced potentiation, and demonstrated the involvement of astrocytes in hippocampal theta and gamma oscillations in vivo. Therefore, astrocytes decode interneuron activity and transform inhibitory into excitatory signals, contributing to the emergence of novel network properties resulting from the interneuron-astrocyte interplay. DOI: http://dx.doi.org/10.7554/eLife.20362.001 PMID:28012274

  14. Modeling somatic and dendritic spike mediated plasticity at the single neuron and network level.

    PubMed

    Bono, Jacopo; Clopath, Claudia

    2017-09-26

    Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show how memory retention during associative learning can be prolonged in networks of neurons by including dendrites. Synaptic plasticity is the neuronal mechanism underlying learning. Here the authors construct biophysical models of pyramidal neurons that reproduce observed plasticity gradients along the dendrite and show that dendritic-spike-dependent LTP, which is predominant in distal dendritic sections, can prolong memory retention.

  15. A portable microelectrode array recording system incorporating cultured neuronal networks for neurotoxin detection.

    PubMed

    Pancrazio, Joseph J; Gray, Samuel A; Shubin, Yura S; Kulagina, Nadezhda; Cuttino, David S; Shaffer, Kara M; Eisemann, Kevin; Curran, Anthony; Zim, Bret; Gross, Guenter W; O'Shaughnessy, Thomas J

    2003-10-01

    Cultured neuronal networks, which have the capacity to respond to a wide range of neuroactive compounds, have been suggested to be useful for screening both known analytes and unknown compounds for acute neuropharmacologic effects. Extracellular recording from cultured neuronal networks provides a means for extracting physiologically relevant activity, i.e. action potential firing, in a noninvasive manner conducive to long-term measurements. Previous work from our laboratory described prototype portable systems capable of high signal-to-noise extracellular recordings from cardiac myocytes. The present work describes a portable system tailored to monitoring neuronal extracellular potentials that readily incorporates standardized microelectrode arrays developed by and in use at the University of North Texas. This system utilizes low-noise amplifier and filter boards, a two-stage thermal control system with integrated fluidics, and a graphical user interface for data acquisition and control implemented on a personal computer. Wherever possible, off-the-shelf components have been utilized for system design and fabrication. During use with cultured neuronal networks, the system typically exhibits input-referred noise levels of only 4-6 microV RMS, such that extracellular potentials exceeding 40 microV can be readily resolved. A flow rate of up to 1 ml/min was achieved while the cell recording chamber temperature was maintained within a range of 36-37 degrees C. To demonstrate the capability of this system to resolve small extracellular potentials, pharmacological experiments with cultured neuronal networks have been performed using the ion channel blockers tetrodotoxin and tityustoxin. The implications of the experiments for neurotoxin detection are discussed.
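
    The noise and signal amplitudes quoted above suggest a simple threshold detector. The sketch below applies an assumed negative threshold of five standard deviations to a synthetic trace with 5 microV RMS noise and 60 microV spikes; the actual system's detection pipeline is not described in the record, and a real detector would also enforce a refractory period to avoid occasional double counting near threshold.

      import numpy as np

      rng = np.random.default_rng(4)
      fs = 25_000                                    # assumed sampling rate (Hz)
      t = np.arange(int(fs * 1.0)) / fs              # one second of data

      noise_rms_uv = 5.0                             # within the 4-6 microV RMS quoted above
      trace = rng.normal(scale=noise_rms_uv, size=t.size)

      # add a few negative-going ~60 microV, ~1 ms wide "spikes" at known times
      spike_times_s = np.array([0.10, 0.35, 0.62, 0.80])
      width = int(0.001 * fs)
      for ts in spike_times_s:
          idx = int(ts * fs)
          trace[idx:idx + width] -= 60.0 * np.hanning(width)

      threshold = -5.0 * np.std(trace)               # common heuristic: ~5x the noise SD
      crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))
      print("detected spike times (s):", crossings / fs)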

  16. Anti-correlated cortical networks arise from spontaneous neuronal dynamics at slow timescales.

    PubMed

    Kodama, Nathan X; Feng, Tianyi; Ullett, James J; Chiel, Hillel J; Sivakumar, Siddharth S; Galán, Roberto F

    2018-01-12

    In the highly interconnected architectures of the cerebral cortex, recurrent intracortical loops disproportionately outnumber thalamo-cortical inputs. These networks are also capable of generating neuronal activity without feedforward sensory drive. It is unknown, however, what spatiotemporal patterns may be solely attributed to intrinsic connections of the local cortical network. Using high-density microelectrode arrays, here we show that in the isolated, primary somatosensory cortex of mice, neuronal firing fluctuates on timescales from milliseconds to tens of seconds. Slower firing fluctuations reveal two spatially distinct neuronal ensembles, which correspond to superficial and deeper layers. These ensembles are anti-correlated: when one fires more, the other fires less and vice versa. This interplay is clearest at timescales of several seconds and is therefore consistent with shifts between active sensing and anticipatory behavioral states in mice.
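
    A minimal sketch of how anti-correlation between two ensembles can emerge only at slow timescales: Poisson spike counts from two populations driven in opposite directions by a slow latent signal are smoothed at increasing window widths before computing a Pearson correlation. The rates, bin size, and smoothing windows are assumptions and do not reproduce the study's analysis.

      import numpy as np

      rng = np.random.default_rng(5)
      dt, duration = 0.05, 600.0                     # 50 ms bins, 10 minutes
      n_bins = int(duration / dt)
      t = np.arange(n_bins) * dt

      latent = np.sin(2 * np.pi * 0.1 * t)           # slow (~0.1 Hz) latent drive
      rate_a = np.clip(5.0 + 3.0 * latent, 0, None)  # ensemble A is driven up...
      rate_b = np.clip(5.0 - 3.0 * latent, 0, None)  # ...while ensemble B is driven down
      counts_a = rng.poisson(rate_a * dt)
      counts_b = rng.poisson(rate_b * dt)

      def smooth(x, width_bins):
          kernel = np.ones(width_bins) / width_bins
          return np.convolve(x, kernel, mode="same")

      for width_s in (0.05, 1.0, 5.0):               # fast to slow timescales
          w = max(1, int(width_s / dt))
          r = np.corrcoef(smooth(counts_a, w), smooth(counts_b, w))[0, 1]
          print(f"smoothing {width_s:>4} s: correlation = {r:+.2f}")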

  17. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
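
    The quadratic part of such a cost function can be illustrated with a few lines of gradient descent. The sketch below reconstructs a smooth 1D "surface" from sparse noisy samples by minimizing a data term plus a first-difference smoothness penalty, with one variable per grid point (loosely, one "neuron" per node); it omits the discontinuity-preserving, nonconvex terms and the analog-circuit implementation that are the paper's actual subject.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 100
      true_surface = np.sin(np.linspace(0, 2 * np.pi, n))

      observed = rng.random(n) < 0.15                           # ~15% of grid points observed
      data = np.where(observed, true_surface + 0.05 * rng.normal(size=n), 0.0)

      lam, step = 2.0, 0.05                                     # smoothness weight and step size
      f = np.zeros(n)                                           # surface estimate

      # E(f) = sum_observed (f_i - d_i)^2 + lam * sum_i (f_{i+1} - f_i)^2
      for _ in range(3000):
          grad = 2.0 * observed * (f - data)
          diff = np.diff(f)                                     # f_{i+1} - f_i
          grad[:-1] += -2.0 * lam * diff
          grad[1:] += 2.0 * lam * diff
          f -= step * grad

      # the estimate interpolates smoothly between the sparse samples
      print("RMS reconstruction error:", np.sqrt(np.mean((f - true_surface) ** 2)))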

  18. Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks.

    PubMed

    Lombardi, F; Herrmann, H J; de Arcangelis, L

    2017-04-01

    The 1/f-like decay observed in the power spectrum of electro-physiological signals, along with scale-free statistics of the so-called neuronal avalanches, constitutes evidence of criticality in neuronal systems. Recent in vitro studies have shown that avalanche dynamics at criticality corresponds to some specific balance of excitation and inhibition, thus suggesting that this is a basic feature of the critical state of neuronal networks. In particular, a lack of inhibition significantly alters the temporal structure of the spontaneous avalanche activity and leads to an anomalous abundance of large avalanches. Here, we study the relationship between network inhibition and the scaling exponent β of the power spectral density (PSD) of avalanche activity in a neuronal network model inspired by Self-Organized Criticality. We find that this scaling exponent depends on the percentage of inhibitory synapses and tends to the value β = 1 for a percentage of about 30%. More specifically, β is close to 2, namely, Brownian noise, for purely excitatory networks and decreases towards values in the interval [1, 1.4] as the percentage of inhibitory synapses ranges between 20% and 30%, in agreement with experimental findings. These results indicate that the level of inhibition affects the frequency spectrum of resting brain activity and suggest the analysis of the PSD scaling behavior as a possible tool to study pathological conditions.

  19. Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks

    NASA Astrophysics Data System (ADS)

    Lombardi, F.; Herrmann, H. J.; de Arcangelis, L.

    2017-04-01

    The 1/f-like decay observed in the power spectrum of electro-physiological signals, along with scale-free statistics of the so-called neuronal avalanches, constitutes evidence of criticality in neuronal systems. Recent in vitro studies have shown that avalanche dynamics at criticality corresponds to some specific balance of excitation and inhibition, thus suggesting that this is a basic feature of the critical state of neuronal networks. In particular, a lack of inhibition significantly alters the temporal structure of the spontaneous avalanche activity and leads to an anomalous abundance of large avalanches. Here, we study the relationship between network inhibition and the scaling exponent β of the power spectral density (PSD) of avalanche activity in a neuronal network model inspired by Self-Organized Criticality. We find that this scaling exponent depends on the percentage of inhibitory synapses and tends to the value β = 1 for a percentage of about 30%. More specifically, β is close to 2, namely, Brownian noise, for purely excitatory networks and decreases towards values in the interval [1, 1.4] as the percentage of inhibitory synapses ranges between 20% and 30%, in agreement with experimental findings. These results indicate that the level of inhibition affects the frequency spectrum of resting brain activity and suggest the analysis of the PSD scaling behavior as a possible tool to study pathological conditions.
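
    A minimal sketch of how a scaling exponent β can be estimated in practice: a synthetic 1/f^β signal is generated, its Welch power spectrum computed, and a line fitted in log-log coordinates. The sampling rate, fitting band, and synthetic signal are assumptions; the papers above apply such a fit to avalanche activity from their network model.

      import numpy as np
      from scipy.signal import welch

      rng = np.random.default_rng(7)
      fs, n = 1000.0, 2 ** 18

      # synthesize a 1/f^beta signal (beta = 1) by shaping white noise in the frequency domain
      beta_true = 1.0
      freqs = np.fft.rfftfreq(n, d=1.0 / fs)
      spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
      spectrum[1:] /= freqs[1:] ** (beta_true / 2.0)    # amplitude ~ f^(-beta/2) => power ~ f^(-beta)
      spectrum[0] = 0.0
      signal = np.fft.irfft(spectrum, n=n)

      f, pxx = welch(signal, fs=fs, nperseg=4096)
      band = (f >= 1.0) & (f <= 100.0)                  # assumed fitting range
      slope, intercept = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
      print("estimated beta:", -slope)                  # should be close to 1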

  20. The influence of hubs in the structure of a neuronal network during an epileptic seizure

    NASA Astrophysics Data System (ADS)

    Rodrigues, Abner Cardoso; Cerdeira, Hilda A.; Machado, Birajara Soares

    2016-02-01

    In this work, we propose changes in the structure of a neuronal network with the intention of provoking strong synchronization to simulate episodes of epileptic seizure. Starting with a network of Izhikevich neurons, we slowly increase the number of connections in selected nodes in a controlled way, to produce (or not) hubs. We study how these structures alter the synchronization of the spike firing intervals, for individual neurons as well as for mean values, as a function of the concentration of connections for random and non-random (hub) distributions. We also analyze how the post-ictal signal varies for the different distributions. We conclude that a network with hubs is more appropriate to represent an epileptic state.
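
    For reference, the node model named above is the Izhikevich (2003) neuron. The sketch below simulates a single cell with the regular-spiking parameter set under constant input; the network construction and hub-selection procedure of the study are not reproduced.

      # Izhikevich model:  v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
      # with reset v <- c, u <- u + d whenever v >= 30 mV. Regular-spiking parameters below.
      a, b, c, d = 0.02, 0.2, -65.0, 8.0
      dt, t_max, I = 0.25, 1000.0, 10.0          # ms, ms, constant input current (a.u.)

      v, u = -65.0, b * -65.0
      spike_times = []
      for step in range(int(t_max / dt)):
          v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
          u += dt * a * (b * v - u)
          if v >= 30.0:                          # spike: record and reset
              spike_times.append(step * dt)
              v, u = c, u + d

      print("number of spikes in 1 s:", len(spike_times))   # tonic spiking for this parameter set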

  1. Neuronal network model of interictal and recurrent ictal activity

    NASA Astrophysics Data System (ADS)

    Lopes, M. A.; Lee, K.-E.; Goltsev, A. V.

    2017-12-01

    We propose a neuronal network model which undergoes a saddle node on an invariant circle bifurcation as the mechanism of the transition from the interictal to the ictal (seizure) state. In the vicinity of this transition, the model captures important dynamical features of both interictal and ictal states. We study the nature of interictal spikes and early warnings of the transition predicted by this model. We further demonstrate that recurrent seizures emerge due to the interaction between two networks.
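
    The canonical (normal-form) example of the bifurcation invoked above is the theta neuron, sketched below: for negative input a stable rest state exists, and repetitive firing appears as the input crosses zero through a saddle-node-on-invariant-circle bifurcation. This is only the textbook normal form, not the authors' network model.

      import numpy as np

      # Theta neuron: d(theta)/dt = 1 - cos(theta) + (1 + cos(theta)) * I.
      # For I < 0 there is a stable rest state (no firing); at I = 0 a SNIC bifurcation occurs,
      # and for I > 0 the phase rotates, producing one spike per passage through theta = pi.
      def spike_count(I, t_max=200.0, dt=0.01):
          theta, spikes = -np.pi / 2, 0
          for _ in range(int(t_max / dt)):
              theta += dt * (1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * I)
              if theta > np.pi:                 # one full rotation counts as one spike
                  theta -= 2.0 * np.pi
                  spikes += 1
          return spikes

      for I in (-0.05, -0.001, 0.001, 0.05):
          print(f"I = {I:+.3f}: {spike_count(I)} spikes")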

  2. Computational Modeling of Single Neuron Extracellular Electric Potentials and Network Local Field Potentials using LFPsim.

    PubMed

    Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam

    2016-01-01

    Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interactions of current sources and dipoles. Mathematical computation of LFPs allows the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point-source, line-source, and RC-filter-based approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a and N2b waves, and in vivo T-C waves in the cerebellum granular layer. LFPsim also includes a simulation of multi-electrode array LFPs in network populations, to aid computational inference between biophysical activity in neural networks and the corresponding multi-unit activity underlying extracellular and evoked LFP signals.
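
    A minimal sketch of the point-source approximation mentioned above, phi = sum_i I_i / (4 pi sigma r_i), summed over compartmental membrane currents; the conductivity, geometry, and currents are illustrative assumptions rather than LFPsim's implementation.

      import numpy as np

      SIGMA = 0.3           # extracellular conductivity in S/m, a commonly used value

      def point_source_potential(currents_nA, source_xyz_um, electrode_xyz_um):
          """Extracellular potential (mV) at one electrode from point current sources:
          phi = sum_i I_i / (4 * pi * sigma * r_i)."""
          r = np.linalg.norm(source_xyz_um - electrode_xyz_um, axis=1) * 1e-6   # um -> m
          I = currents_nA * 1e-9                                                # nA -> A
          return np.sum(I / (4.0 * np.pi * SIGMA * r)) * 1e3                    # V -> mV

      # toy example: three compartments with membrane currents that sum to zero (charge balance)
      sources = np.array([[0.0, 0.0, 0.0], [0.0, 50.0, 0.0], [0.0, 100.0, 0.0]])
      currents = np.array([-1.0, 0.4, 0.6])            # nA
      electrode = np.array([30.0, 0.0, 0.0])           # electrode 30 um from the soma
      print("phi =", point_source_potential(currents, sources, electrode), "mV")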

  3. From in silico astrocyte cell models to neuron-astrocyte network models: A review.

    PubMed

    Oschmann, Franziska; Berry, Hugues; Obermayer, Klaus; Lenk, Kerstin

    2018-01-01

    The idea that astrocytes may be active partners in synaptic information processing has recently emerged from abundant experimental reports. Because of their spatial proximity to neurons and their bidirectional communication with them, astrocytes are now considered as an important third element of the synapse. Astrocytes integrate and process synaptic information and by doing so generate cytosolic calcium signals that are believed to reflect neuronal transmitter release. Moreover, they regulate neuronal information transmission by releasing gliotransmitters into the synaptic cleft affecting both pre- and postsynaptic receptors. Concurrent with the first experimental reports of the astrocytic impact on neural network dynamics, computational models describing astrocytic functions have been developed. In this review, we give an overview over the published computational models of astrocytic functions, from single-cell dynamics to the tripartite synapse level and network models of astrocytes and neurons. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  5. Pulse propagation in discrete excitatory networks of integrate-and-fire neurons.

    PubMed

    Badel, Laurent; Tonnelier, Arnaud

    2004-07-01

    We study the propagation of solitary waves in a discrete excitatory network of integrate-and-fire neurons. We show the existence and the stability of a fast wave and a family of slow waves. Fast waves are similar to those already described in continuum networks. Stable slow waves have not been previously reported in purely excitatory networks and their propagation is particular to the discrete nature of the network. The robustness of our results is studied in the presence of noise.

  6. Neuronal networks with NMDARs and lateral inhibition implement winner-takes-all

    PubMed Central

    Shoemaker, Patrick A.

    2015-01-01

    A neural circuit that relies on the electrical properties of NMDA synaptic receptors is shown by numerical and theoretical analysis to be capable of realizing the winner-takes-all function, a powerful computational primitive that is often attributed to biological nervous systems. This biophysically-plausible model employs global lateral inhibition in a simple feedback arrangement. As its inputs increase, high-gain and then bi- or multi-stable equilibrium states may be assumed in which there is significant depolarization of a single neuron and hyperpolarization or very weak depolarization of other neurons in the network. The state of the winning neuron conveys analog information about its input. The winner-takes-all characteristic depends on the nonmonotonic current-voltage relation of NMDA receptor ion channels, as well as neural thresholding, and the gain and nature of the inhibitory feedback. Dynamical regimes vary with input strength. Fixed points may become unstable as the network enters a winner-takes-all regime, which can lead to entrained oscillations. Under some conditions, oscillatory behavior can be interpreted as winner-takes-all in nature. Stable winner-takes-all behavior is typically recovered as inputs increase further, but with still larger inputs, the winner-takes-all characteristic is ultimately lost. Network stability may be enhanced by biologically plausible mechanisms. PMID:25741276

  7. A Functionally Conserved Gene Regulatory Network Module Governing Olfactory Neuron Diversity.

    PubMed

    Li, Qingyun; Barish, Scott; Okuwa, Sumie; Maciejewski, Abigail; Brandt, Alicia T; Reinhold, Dominik; Jones, Corbin D; Volkan, Pelin Cayirlioglu

    2016-01-01

    Sensory neuron diversity is required for organisms to decipher complex environmental cues. In Drosophila, the olfactory environment is detected by 50 different olfactory receptor neuron (ORN) classes that are clustered in combinations within distinct sensilla subtypes. Each sensilla subtype houses stereotypically clustered 1-4 ORN identities that arise through asymmetric divisions from a single multipotent sensory organ precursor (SOP). How each class of SOPs acquires a unique differentiation potential that accounts for ORN diversity is unknown. Previously, we reported a critical component of SOP diversification program, Rotund (Rn), increases ORN diversity by generating novel developmental trajectories from existing precursors within each independent sensilla type lineages. Here, we show that Rn, along with BarH1/H2 (Bar), Bric-à-brac (Bab), Apterous (Ap) and Dachshund (Dac), constitutes a transcription factor (TF) network that patterns the developing olfactory tissue. This network was previously shown to pattern the segmentation of the leg, which suggests that this network is functionally conserved. In antennal imaginal discs, precursors with diverse ORN differentiation potentials are selected from concentric rings defined by unique combinations of these TFs along the proximodistal axis of the developing antennal disc. The combinatorial code that demarcates each precursor field is set up by cross-regulatory interactions among different factors within the network. Modifications of this network lead to predictable changes in the diversity of sensilla subtypes and ORN pools. In light of our data, we propose a molecular map that defines each unique SOP fate. Our results highlight the importance of the early prepatterning gene regulatory network as a modulator of SOP and terminally differentiated ORN diversity. Finally, our model illustrates how conserved developmental strategies are used to generate neuronal diversity.

  8. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons.

    PubMed

    Bernardi, Davide; Lindner, Benjamin

    2017-06-30

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  9. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons

    NASA Astrophysics Data System (ADS)

    Bernardi, Davide; Lindner, Benjamin

    2017-06-01

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  10. Time evolution of coherent structures in networks of Hindmarsh-Rose neurons

    NASA Astrophysics Data System (ADS)

    Mainieri, M. S.; Erichsen, R.; Brunnet, L. G.

    2005-08-01

    In the regime of partial synchronization, networks of diffusively coupled Hindmarsh-Rose neurons show coherent structures developing in a region of the phase space that is wider than for the corresponding single neuron. Such structures persist, without important changes, over several bursting periods. In this work, we study the time evolution of these structures and their dynamical stability under damage. This system may model the behavior of ensembles of neurons coupled through bidirectional gap junctions or, in a broader sense, it could also account for the molecular cascades involved in the formation of flash and short-term memory.
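
    For reference, the single-cell equations are sketched below with a standard bursting parameter set; the diffusive (gap-junction) coupling and the coherent-structure analysis of the paper are not reproduced.

      import numpy as np

      # Hindmarsh-Rose model:
      #   x' = y + 3x^2 - x^3 - z + I
      #   y' = 1 - 5x^2 - y
      #   z' = r (s (x - x_rest) - z)
      r_hr, s_hr, x_rest, I_ext = 0.006, 4.0, -1.6, 3.0   # standard bursting parameters
      dt, t_max = 0.01, 2000.0

      x, y, z = -1.6, -10.0, 2.0
      trace = []
      for _ in range(int(t_max / dt)):
          dx = y + 3.0 * x * x - x ** 3 - z + I_ext
          dy = 1.0 - 5.0 * x * x - y
          dz = r_hr * (s_hr * (x - x_rest) - z)
          x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
          trace.append(x)

      trace = np.array(trace)
      print("membrane variable range:", trace.min(), "to", trace.max())   # spike peaks near x ~ 2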

  11. Single neuron modeling and data assimilation in BNST neurons

    NASA Astrophysics Data System (ADS)

    Farsian, Reza

    Neurons, although tiny in size, are vastly complicated systems, which are responsible for the most basic yet essential functions of any nervous system. Even the simplest models of single neurons are usually high dimensional, nonlinear, and contain many parameters and states that are unobservable in a typical neurophysiological experiment. One of the most fundamental problems in experimental neurophysiology is the estimation of these parameters and states, since knowing their values is essential for identification, model construction, and forward prediction of biological neurons. Common methods of parameter and state estimation do not perform well for neural models due to their high dimensionality and nonlinearity. In this dissertation, two alternative approaches for parameter and state estimation of biological neurons are demonstrated: dynamical parameter estimation (DPE) and a Markov Chain Monte Carlo (MCMC) method. The first method uses elements of chaos control and synchronization theory for parameter and state estimation. MCMC is a statistical approach which uses a path integral formulation to evaluate a mean and an error bound for these unobserved parameters and states. These methods have been applied to biological neurons in the Bed Nucleus of the Stria Terminalis (BNST) of rats. States and parameters of the neurons were estimated, and their values were used to recreate a realistic model and successfully predict the behavior of the neurons. The knowledge of biological parameters can ultimately provide a better understanding of the internal dynamics of a neuron in order to build robust models of neuron networks.
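
    A minimal random-walk Metropolis sketch of the MCMC idea described above, estimating one parameter (a membrane-like time constant) of a toy exponential-decay model from noisy observations; the dissertation's path-integral formulation, the DPE method, and the BNST data are not reproduced.

      import numpy as np

      rng = np.random.default_rng(8)

      # Toy "model": an exponentially decaying voltage response v(t) = v0 * exp(-t / tau).
      t = np.linspace(0.0, 100.0, 200)
      tau_true, v0, noise_sd = 20.0, 1.0, 0.05
      observed = v0 * np.exp(-t / tau_true) + rng.normal(scale=noise_sd, size=t.size)

      def log_likelihood(tau):
          if tau <= 0:
              return -np.inf
          residual = observed - v0 * np.exp(-t / tau)
          return -0.5 * np.sum((residual / noise_sd) ** 2)

      # Random-walk Metropolis over tau with a flat (improper) prior on tau > 0.
      samples, tau_current = [], 5.0
      ll_current = log_likelihood(tau_current)
      for _ in range(20000):
          tau_prop = tau_current + rng.normal(scale=1.0)
          ll_prop = log_likelihood(tau_prop)
          if np.log(rng.random()) < ll_prop - ll_current:        # accept/reject step
              tau_current, ll_current = tau_prop, ll_prop
          samples.append(tau_current)

      samples = np.array(samples[5000:])                         # discard burn-in
      print("posterior mean tau:", samples.mean(), "+/-", samples.std())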

  12. Three-dimensional chimera patterns in networks of spiking neuron oscillators

    NASA Astrophysics Data System (ADS)

    Kasimatis, T.; Hizanidis, J.; Provata, A.

    2018-05-01

    We study the stable spatiotemporal patterns that arise in a three-dimensional (3D) network of neuron oscillators, whose dynamics is described by the leaky integrate-and-fire (LIF) model. More specifically, we investigate the form of the chimera states induced by a 3D coupling matrix with nonlocal topology. The observed patterns are in many cases direct generalizations of the corresponding two-dimensional (2D) patterns, e.g., spheres, layers, and cylinder grids. We also find cylindrical and "cross-layered" chimeras that do not have an equivalent in 2D systems. Quantitative measures are calculated, such as the ratio of synchronized and unsynchronized neurons as a function of the coupling range, the mean phase velocities, and the distribution of neurons in mean phase velocities. Based on these measures, the chimeras are categorized in two families. The first family of patterns is observed for weaker coupling and exhibits higher mean phase velocities for the unsynchronized areas of the network. The opposite holds for the second family, where the unsynchronized areas have lower mean phase velocities. The various measures demonstrate discontinuities, indicating criticality as the parameters cross from the first family of patterns to the second.

  13. Irregular synchronous activity in stochastically-coupled networks of integrate-and-fire neurons.

    PubMed

    Lin, J K; Pawelzik, K; Ernst, U; Sejnowski, T J

    1998-08-01

    We investigate the spatial and temporal aspects of firing patterns in a network of integrate-and-fire neurons arranged in a one-dimensional ring topology. The coupling is stochastic and shaped like a Mexican hat with local excitation and lateral inhibition. With perfect precision in the couplings, the attractors of activity in the network occur at every position in the ring. Inhomogeneities in the coupling break the translational invariance of localized attractors and lead to synchronization within highly active as well as weakly active clusters. The interspike interval variability is high, consistent with recent observations of spike time distributions in visual cortex. The robustness of our results is demonstrated with more realistic simulations on a network of McGregor neurons which model conductance changes and after-hyperpolarization potassium currents.

  14. Highly ordered large-scale neuronal networks of individual cells - toward single cell to 3D nanowire intracellular interfaces.

    PubMed

    Kwiat, Moria; Elnathan, Roey; Pevzner, Alexander; Peretz, Asher; Barak, Boaz; Peretz, Hagit; Ducobni, Tamir; Stein, Daniel; Mittelman, Leonid; Ashery, Uri; Patolsky, Fernando

    2012-07-25

    The use of artificial, prepatterned neuronal networks in vitro is a promising approach for studying the development and dynamics of small neural systems in order to understand the basic functionality of neurons and, later on, of the brain. The present work presents a high-fidelity and robust procedure for controlling neuronal growth on substrates such as silicon wafers and glass, enabling us to obtain mature and durable neural networks of individual cells at designed geometries. It offers several advantages compared to other related techniques that have been reported in recent years, mainly because of its high yield and reproducibility. The procedure is based on surface chemistry that allows the formation of functional, tailor-made neural architectures with micrometer-resolution partitioning that can promote or repel cell attachment. The main achievements of this work are the creation of large-scale neuronal networks at low density, down to individual cells, that develop intact, typical neurites and synapses without any supportive glial cells straight from the plating stage and with a relatively long survival time, up to 4 weeks. An important application of this method is its use on 3D nanopillar and 3D nanowire-device arrays, enabling not only the cell bodies but also their neurites to be positioned directly on electrical devices and to grow in registration with the recording elements underneath.

  15. Combined exposure to simulated microgravity and acute or chronic radiation reduces neuronal network integrity and cell survival

    NASA Astrophysics Data System (ADS)

    Benotmane, Rafi

    During orbital or interplanetary space flights, astronauts are exposed to cosmic radiation and microgravity. This study aimed at assessing the effect of these combined conditions on neuronal network density, cell morphology and survival, using well-connected mouse cortical neuron cultures. To this end, neurons were exposed to acute low and high doses of low-LET radiation (X-rays) or to a chronic low dose rate of high-LET neutron irradiation (Californium-252), under the simulated microgravity generated by the Random Positioning Machine (RPM, Dutch Space). High-content image analysis of cortical neurons positive for the neuronal marker βIII-tubulin revealed reduced neuronal network integrity and connectivity, and altered cell morphology, after exposure to acute/chronic radiation or to simulated microgravity. Additionally, in both conditions, a defect in DNA-repair efficiency was revealed by an increased number of γH2AX-positive foci, as well as an increased number of Annexin V-positive apoptotic neurons. Of interest, when combining both simulated space conditions, we noted a synergistic effect on neuronal network density, neuronal morphology, cell survival and DNA repair. Furthermore, these observations are in agreement with preliminary gene expression data revealing modulations in cytoskeletal and apoptosis-related genes after exposure to simulated microgravity. In conclusion, the observed in vitro changes in neuronal network integrity and cell survival induced by simulated space conditions provide mechanistic understanding for evaluating health risks and developing countermeasures to prevent neurological disorders in astronauts during long-term space travel. Acknowledgements: This work is supported partly by the EU-FP7 projects CEREBRAD (n° 295552)

  16. Accelerated intoxication of GABAergic synapses by botulinum neurotoxin A disinhibits stem cell-derived neuron networks prior to network silencing

    PubMed Central

    Beske, Phillip H.; Scheeler, Stephen M.; Adler, Michael; McNutt, Patrick M.

    2015-01-01

    Botulinum neurotoxins (BoNTs) are extremely potent toxins that specifically cleave SNARE proteins in peripheral synapses, preventing neurotransmitter release. Neuronal responses to BoNT intoxication are traditionally studied by quantifying SNARE protein cleavage in vitro or monitoring physiological paralysis in vivo. Consequently, the dynamic effects of intoxication on synaptic behaviors are not well-understood. We have reported that mouse embryonic stem cell-derived neurons (ESNs) are highly sensitive to BoNT based on molecular readouts of intoxication. Here we study the time-dependent changes in synapse- and network-level behaviors following addition of BoNT/A to spontaneously active networks of glutamatergic and GABAergic ESNs. Whole-cell patch-clamp recordings indicated that BoNT/A rapidly blocked synaptic neurotransmission, confirming that ESNs replicate the functional pathophysiology responsible for clinical botulism. Quantitation of spontaneous neurotransmission in pharmacologically isolated synapses revealed accelerated silencing of GABAergic synapses compared to glutamatergic synapses, which was consistent with the selective accumulation of cleaved SNAP-25 at GAD1+ pre-synaptic terminals at early timepoints. Different latencies of intoxication resulted in complex network responses to BoNT/A addition, involving rapid disinhibition of stochastic firing followed by network silencing. Synaptic activity was found to be highly sensitive to SNAP-25 cleavage, reflecting the functional consequences of the localized cleavage of the small subpopulation of SNAP-25 that is engaged in neurotransmitter release in the nerve terminal. Collectively these findings illustrate that the use of synaptic function assays in networked neuron cultures offers a novel and highly sensitive approach for mechanistic studies of toxin:neuron interactions and synaptic responses to BoNT. PMID:25954159

  17. Modeling the emergence of circadian rhythms in a clock neuron network.

    PubMed

    Diambra, Luis; Malta, Coraci P

    2012-01-01

    Circadian rhythms in pacemaker cells persist for weeks in constant darkness, while in other types of cells the molecular oscillations that underlie circadian rhythms damp rapidly under the same conditions. Although much progress has been made in understanding the biochemical and cellular basis of circadian rhythms, the mechanisms leading to damped or self-sustained oscillations remain largely unknown. There exist many mathematical models that reproduce the circadian rhythms in the case of a single cell of the Drosophila fly. However, not much is known about the mechanisms leading to coherent circadian oscillation in clock neuron networks. In this work we have implemented a model for a network of interacting clock neurons to describe the emergence (or damping) of circadian rhythms in Drosophila fly, in the absence of zeitgebers. Our model consists of an array of pacemakers that interact through the modulation of some parameters by a network feedback. The individual pacemakers are described by a well-known biochemical model for circadian oscillation, to which we have added degradation of PER protein by light and multiplicative noise. The network feedback is the PER protein level averaged over the whole network. In particular, we have investigated the effect of modulation of the parameters associated with (i) the control of net entrance of PER into the nucleus and (ii) the non-photic degradation of PER. Our results indicate that the modulation of PER entrance into the nucleus allows the synchronization of clock neurons, leading to coherent circadian oscillations under constant dark condition. On the other hand, the modulation of non-photic degradation cannot reset the phases of individual clocks subjected to intrinsic biochemical noise.

  18. Emergence and robustness of target waves in a neuronal network

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Jin, Wuyin; Ma, Jun

    2015-08-01

    Target waves in excitable media such as neuronal networks can regulate spatial distribution and orderliness as a continuous pacemaker. Three different schemes are used to develop stable target waves in the network, and the potential mechanism for the emergence of target waves in excitable media is investigated. For example, local pacing driven by external periodical forcing can generate a stable target wave in the excitable medium; furthermore, heterogeneity and local feedback under self-feedback coupling are also effective at generating continuous target waves. To discern the differences between these target waves, a statistical synchronization factor is defined using mean field theory, and artificial defects are introduced into the network to block the target wave, so that the robustness of these target waves can be assessed. However, the target waves developed from the above-mentioned schemes show different robustness to blocking by artificial defects. A regular network of Hindmarsh-Rose neurons is designed in a two-dimensional square array, target waves are induced in three different ways, and then artificial defects, which are associated with anatomical defects, are set in the network to detect the effect of defect blocking on the traveling waves. It is confirmed that the robustness of target waves to defect blocking depends on the intrinsic properties of the target waves, that is, on the way in which they were generated.
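
    The synchronization factor is not spelled out in this record; a commonly used mean-field definition, assumed here, is the variance of the population mean field divided by the average single-unit variance, as sketched below on synthetic data.

      import numpy as np

      def synchronization_factor(x):
          """x has shape (n_units, n_timepoints). Returns R in [0, 1]:
          R = Var_t(F) / mean_i Var_t(x_i), with mean field F(t) = (1/N) sum_i x_i(t)."""
          mean_field = x.mean(axis=0)
          return mean_field.var() / x.var(axis=1).mean()

      rng = np.random.default_rng(9)
      t = np.linspace(0, 10, 1000)
      common = np.sin(2 * np.pi * 1.0 * t)             # shared oscillation across units

      for noise in (0.1, 1.0, 5.0):                    # weakly to strongly desynchronized
          units = common + noise * rng.normal(size=(100, t.size))
          print(f"noise = {noise}: R = {synchronization_factor(units):.3f}")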

  19. Analyzing neuronal networks using discrete-time dynamics

    NASA Astrophysics Data System (ADS)

    Ahn, Sungwoo; Smith, Brian H.; Borisyuk, Alla; Terman, David

    2010-05-01

    We develop mathematical techniques for analyzing detailed Hodgkin-Huxley like models for excitatory-inhibitory neuronal networks. Our strategy for studying a given network is to first reduce it to a discrete-time dynamical system. The discrete model is considerably easier to analyze, both mathematically and computationally, and parameters in the discrete model correspond directly to parameters in the original system of differential equations. While these networks arise in many important applications, a primary focus of this paper is to better understand mechanisms that underlie temporally dynamic responses in early processing of olfactory sensory information. The models presented here exhibit several properties that have been described for olfactory codes in an insect’s Antennal Lobe. These include transient patterns of synchronization and decorrelation of sensory inputs. By reducing the model to a discrete system, we are able to systematically study how properties of the dynamics, including the complex structure of the transients and attractors, depend on factors related to connectivity and the intrinsic and synaptic properties of cells within the network.

  20. Channel noise-induced temporal coherence transitions and synchronization transitions in adaptive neuronal networks with time delay

    NASA Astrophysics Data System (ADS)

    Gong, Yubing; Xie, Huijuan

    2017-09-01

    Using spike-timing-dependent plasticity (STDP), we study the effect of channel noise on temporal coherence and synchronization of adaptive scale-free Hodgkin-Huxley neuronal networks with time delay. It is found that the spiking regularity and spatial synchronization of the neurons intermittently increase and decrease as channel noise intensity is varied, exhibiting transitions of temporal coherence and synchronization. Moreover, this phenomenon depends on time delay, STDP, and network average degree. As time delay increases, the phenomenon is weakened, however, there are optimal STDP and network average degree by which the phenomenon becomes strongest. These results show that channel noise can intermittently enhance the temporal coherence and synchronization of the delayed adaptive neuronal networks. These findings provide a new insight into channel noise for the information processing and transmission in neural systems.

  1. Integrated workflows for spiking neuronal network simulations

    PubMed Central

    Antolík, Ján; Davison, Andrew P.

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages. PMID

  2. Integrated workflows for spiking neuronal network simulations.

    PubMed

    Antolík, Ján; Davison, Andrew P

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages.

  3. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFEs) are used to model the activity of neurons in the brain; they are derived starting from the single-neuron 'integrate-and-fire' model. For numerical studies the neural continuum is spatially discretized and the governing equations are written as a system of ordinary differential equations. In this article a recurrent neural network approach is used to solve this system of ODEs: the standard finite-difference numerical method is combined with a Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model, and a Hopfield neural network is then designed to minimize the energy function representing the NFEs. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers and give it a speed advantage over traditional methods.
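
    The paper's Hopfield architecture is not reproduced here; the following Python sketch only illustrates the general idea under simplifying assumptions: the spatially discretized neural field du_i/dt = -u_i + sum_j W_ij f(u_j) is advanced by an implicit finite-difference step, and each step is obtained by minimizing a residual energy function (plain gradient descent stands in for the Hopfield-network dynamics). All parameter values are made up for illustration.

      import numpy as np

      def f(u):                               # sigmoidal firing-rate function
          return 1.0 / (1.0 + np.exp(-u))

      def df(u):                              # its derivative
          s = f(u)
          return s * (1.0 - s)

      def residual(u, u_prev, W, dt):
          """Residual of an implicit-Euler step of du/dt = -u + W f(u)."""
          return u - u_prev - dt * (-u + W @ f(u))

      def hopfield_style_step(u_prev, W, dt, lr=0.05, iters=2000):
          """Advance one time step by minimizing E(u) = 0.5 * ||residual(u)||^2
          with gradient descent, a plain stand-in for the Hopfield minimization."""
          u = u_prev.copy()
          for _ in range(iters):
              r = residual(u, u_prev, W, dt)
              grad = (1.0 + dt) * r - dt * df(u) * (W.T @ r)
              u -= lr * grad
          return u

      # Toy example: 50 spatial nodes with a Gaussian lateral-connectivity kernel.
      n = 50
      x = np.linspace(-1.0, 1.0, n)
      W = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1) * (2.0 / n)
      u = np.zeros(n)
      u[n // 2] = 1.0                          # localized initial activity
      for _ in range(20):
          u = hopfield_style_step(u, W, dt=0.1)
      print(u.round(3))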

  4. Modularity Induced Gating and Delays in Neuronal Networks

    PubMed Central

    Shein-Idelson, Mark; Cohen, Gilad; Hanein, Yael

    2016-01-01

    Neural networks, despite their highly interconnected nature, exhibit distinctly localized and gated activation. Modularity, a distinctive feature of neural networks, has recently been proposed as an important parameter determining the manner in which networks support activity propagation. Here we use an engineered biological model, consisting of rat cortical neurons, to study the role of modular topology in gating the activity between cell populations. We show that pairs of connected modules support conditional propagation (transmitting stronger bursts with higher probability), long delays and propagation asymmetry. Moreover, large modular networks manifest diverse patterns of both local and global activation. Blocking inhibition decreased activity diversity and replaced it with highly consistent transmission patterns. By independently controlling modularity and disinhibition, experimentally and in a model, we propose that modular topology is an important parameter affecting activation localization and is instrumental for population-level gating by disinhibition. PMID:27104350

  5. PyNN: A Common Interface for Neuronal Network Simulators.

    PubMed

    Davison, Andrew P; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre

    2008-01-01

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
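
    As a minimal illustration of the simulator-agnostic style described above (exact cell-model and parameter names can differ between PyNN versions, so treat the details as assumptions), a script written against one backend can be redirected to another simply by changing the import.

      # Minimal sketch of a PyNN script; swapping the import (e.g. to
      # "import pyNN.neuron as sim") selects a different backend simulator.
      import pyNN.nest as sim

      sim.setup(timestep=0.1)                      # ms

      # A small excitatory population driven by Poisson noise.
      exc = sim.Population(100, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))
      noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))

      sim.Projection(noise, exc, sim.OneToOneConnector(),
                     synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))

      exc.record("spikes")
      sim.run(1000.0)                              # ms

      data = exc.get_data().segments[0]
      print("total spikes:", sum(len(st) for st in data.spiketrains))
      sim.end()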

  6. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Network (QNN) models have attracted great attention because they introduce a new neural computing paradigm based on quantum entanglement. However, the existing QNN models are mainly based on real-valued quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deeper quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQN and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time-series inputs, and the Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is applied to time-series prediction. Two application studies are presented: chaotic time-series prediction and electronic remaining-useful-life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons

    PubMed Central

    Xiao, Dongsheng; Vanni, Matthieu P; Mitelut, Catalin C; Chan, Allen W; LeDue, Jeffrey M; Xie, Yicheng; Chen, Andrew CN; Swindale, Nicholas V; Murphy, Timothy H

    2017-01-01

    Understanding the basis of brain function requires knowledge of cortical operations over wide spatial scales, but also within the context of single neurons. In vivo, wide-field GCaMP imaging and sub-cortical/cortical cellular electrophysiology were used in mice to investigate relationships between spontaneous single neuron spiking and mesoscopic cortical activity. We make use of a rich set of cortical activity motifs that are present in spontaneous activity in anesthetized and awake animals. A mesoscale spike-triggered averaging procedure, employing genetically targeted indicators of neuronal activity, allowed the identification of motifs that are preferentially linked to individual spiking neurons. Thalamic neurons predicted and reported specific cycles of wide-scale cortical inhibition/excitation. In contrast, spike-triggered maps derived from single cortical neurons yielded spatio-temporal maps expected for regional cortical consensus function. This approach can define network relationships between any point source of neuronal spiking and mesoscale cortical maps. DOI: http://dx.doi.org/10.7554/eLife.19976.001 PMID:28160463
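
    The imaging pipeline used in the study is specialized, but the core spike-triggered averaging step is straightforward; the sketch below (array shapes, window length and variable names are illustrative assumptions) averages imaging frames around each spike time.

      import numpy as np

      def spike_triggered_average(frames, frame_times, spike_times, window=(-0.1, 0.3)):
          """Average the imaging frames surrounding each spike.

          frames      : array of shape (n_frames, height, width)
          frame_times : array of shape (n_frames,), in seconds
          spike_times : 1-D array of spike times, in seconds
          window      : (start, end) of the averaging window relative to each spike
          Returns an array of shape (n_window_frames, height, width).
          """
          dt = np.median(np.diff(frame_times))
          pre = int(round(-window[0] / dt))
          post = int(round(window[1] / dt))
          snippets = []
          for t in spike_times:
              i = np.searchsorted(frame_times, t)
              if i - pre >= 0 and i + post < len(frames):
                  snippets.append(frames[i - pre:i + post])
          return np.mean(snippets, axis=0)

      # Toy usage with random data standing in for wide-field imaging frames.
      rng = np.random.default_rng(0)
      frames = rng.standard_normal((1000, 32, 32))
      frame_times = np.arange(1000) * 0.01          # 100 Hz imaging
      spikes = rng.uniform(1.0, 9.0, size=200)
      sta = spike_triggered_average(frames, frame_times, spikes)
      print(sta.shape)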

  8. Multi-level characterization of balanced inhibitory-excitatory cortical neuron network derived from human pluripotent stem cells.

    PubMed

    Nadadhur, Aishwarya G; Emperador Melero, Javier; Meijer, Marieke; Schut, Desiree; Jacobs, Gerbren; Li, Ka Wan; Hjorth, J J Johannes; Meredith, Rhiannon M; Toonen, Ruud F; Van Kesteren, Ronald E; Smit, August B; Verhage, Matthijs; Heine, Vivi M

    2017-01-01

    Generation of neuronal cultures from induced pluripotent stem cells (hiPSCs) serves the study of human brain disorders. However, we lack neuronal networks with balanced excitatory-inhibitory activity that are suitable for single-cell analysis. We generated low-density networks of hPSC-derived GABAergic and glutamatergic cortical neurons, using two different co-culture models with astrocytes. Using confocal microscopy, electrophysiological recordings, calcium imaging and mRNA analysis, we show that these cultures have balanced excitatory-inhibitory synaptic identities. These simple and robust protocols offer the opportunity for single-cell to multi-level analysis of patient hiPSC-derived cortical excitatory-inhibitory networks, thereby creating advanced tools to study disease mechanisms underlying neurodevelopmental disorders.

  9. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic the input-output relationships of biological neural networks and of nonlinear ordinary differential equation model networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires lower computational resources.

  10. Local and global synchronization transitions induced by time delays in small-world neuronal networks with chemical synapses.

    PubMed

    Yu, Haitao; Wang, Jiang; Du, Jiwei; Deng, Bin; Wei, Xile

    2015-02-01

    Effects of time delay on the local and global synchronization in small-world neuronal networks with chemical synapses are investigated in this paper. Numerical results show that, for both excitatory and inhibitory coupling types, the information transmission delay can always induce synchronization transitions of spiking neurons in small-world networks. In particular, regions of in-phase and out-of-phase synchronization of connected neurons emerge intermittently as the synaptic delay increases. For excitatory coupling, all transitions to spiking synchronization occur approximately at integer multiples of the firing period of individual neurons, while for inhibitory coupling these transitions appear at odd multiples of half the firing period of the neurons. More importantly, the local synchronization transition can be more pronounced than the global one, depending on the type of coupling synapse. For excitatory synapses, the local in-phase synchronization observed for some values of the delay also occurs at the global scale, while for inhibitory ones this synchronization, observed at the local scale, disappears at the global scale. Furthermore, the small-world structure can also affect the phase synchronization of neuronal networks. It is demonstrated that increasing the rewiring probability can always improve the global synchronization of neuronal activity, but has little effect on the local synchronization of neighboring neurons.

  11. Models and simulation of 3D neuronal dendritic trees using Bayesian networks.

    PubMed

    López-Cruz, Pedro L; Bielza, Concha; Larrañaga, Pedro; Benavides-Piccione, Ruth; DeFelipe, Javier

    2011-12-01

    Neuron morphology is crucial for neuronal connectivity and brain information processing. Computational models are important tools for studying dendritic morphology and its role in brain function. We applied a class of probabilistic graphical models called Bayesian networks to generate virtual dendrites from layer III pyramidal neurons from three different regions of the neocortex of the mouse. A set of 41 morphological variables was measured from the 3D reconstructions of real dendrites, and their probability distributions were used in a machine learning algorithm to induce the model from the data. A simulation algorithm is also proposed to obtain new dendrites by sampling values from the Bayesian networks. The main advantage of this approach is that it takes into account and automatically locates the relationships between variables in the data instead of using predefined dependencies. Therefore, the methodology can be applied to any neuronal class while at the same time exploiting class-specific properties. Also, a Bayesian network was defined for each part of the dendrite, allowing the relationships to change in the different sections and to model heterogeneous developmental factors or spatial influences. Several univariate statistical tests and a novel multivariate test based on Kullback-Leibler divergence estimation confirmed that virtual dendrites were similar to real ones. The analyses of the models showed relationships that conform to current neuroanatomical knowledge and support model correctness. At the same time, studying the relationships in the models can help to identify new interactions between variables related to dendritic morphology.
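
    As a toy illustration of the sampling step only (a hand-specified three-variable network, not the authors' model learned from 41 variables), ancestral sampling draws each variable conditioned on its parents in the Bayesian network.

      import numpy as np

      rng = np.random.default_rng(1)

      def sample_virtual_dendrite():
          """Ancestral sampling from a toy Bayesian network over three
          morphological variables (illustrative, not the learned model)."""
          # Root node: total dendritic length (arbitrary units).
          length = rng.normal(loc=1000.0, scale=150.0)
          # Number of branches depends on total length.
          n_branches = rng.poisson(lam=max(length, 0.0) / 100.0)
          # Mean segment length depends on both parents.
          seg_length = rng.normal(loc=length / max(n_branches, 1), scale=5.0)
          return {"length": length, "n_branches": n_branches,
                  "segment_length": seg_length}

      virtual = [sample_virtual_dendrite() for _ in range(5)]
      for dendrite in virtual:
          print({k: round(v, 1) for k, v in dendrite.items()})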

  12. Simple and Inexpensive Paper-Based Astrocyte Co-culture to Improve Survival of Low-Density Neuronal Networks

    PubMed Central

    Aebersold, Mathias J.; Thompson-Steckel, Greta; Joutang, Adriane; Schneider, Moritz; Burchert, Conrad; Forró, Csaba; Weydert, Serge; Han, Hana; Vörös, János

    2018-01-01

    Bottom-up neuroscience aims to engineer well-defined networks of neurons to investigate the functions of the brain. By reducing the complexity of the brain to achievable target questions, such in vitro bioassays better control experimental variables and can serve as a versatile tool for fundamental and pharmacological research. Astrocytes are a cell type critical to neuronal function, and the addition of astrocytes to neuron cultures can improve the quality of in vitro assays. Here, we present cellulose as an astrocyte culture substrate. Astrocytes cultured on the cellulose fiber matrix thrived and formed a dense 3D network. We devised a novel co-culture platform by suspending the easy-to-handle astrocytic paper cultures above neuronal networks of low densities typically needed for bottom-up neuroscience. There was significant improvement in neuronal viability after 5 days in vitro at densities ranging from 50,000 cells/cm2 down to isolated cells at 1,000 cells/cm2. Cultures exhibited spontaneous spiking even at the very low densities, with a significantly greater spike frequency per cell compared to control mono-cultures. Applying the co-culture platform to an engineered network of neurons on a patterned substrate resulted in significantly improved viability and almost doubled the density of live cells. Lastly, the shape of the cellulose substrate can easily be customized to a wide range of culture vessels, making the platform versatile for different applications that will further enable research in bottom-up neuroscience and drug development. PMID:29535595

  13. SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    De Angelis, Francesco

    2017-06-01

    Michele Dipalo, Valeria Caprettini, Andrea Barbaglia, Laura Lovato and Francesco De Angelis (Istituto Italiano di Tecnologia, Genova). Biological systems are analysed mainly by optical, chemical or electrical methods. Normally each of these techniques provides only partial information about the environment, while combined investigations could reveal new phenomena occurring in complex systems such as in-vitro neuronal networks. Aiming at the merging of optical and electrical investigations of biological samples, we introduced three-dimensional plasmonic nanoantennas on CMOS-based electrical sensors [1]. The overall device is then capable of enhanced Raman analysis of cultured cells combined with electrical recording of neuronal activity. The Raman measurements show a much higher sensitivity when performed on the tip of the nanoantenna with respect to the flat substrate [2]; this effect is a combination of the high plasmonic field enhancement and of the tight adhesion of cells on the nanoantenna tip. Furthermore, when plasmonic opto-poration is exploited [3], the 3D nanoelectrodes are able to penetrate through the cell membrane, thus accessing the intracellular environment. Our latest results (unpublished) show that the technique is completely non-invasive and solves many problems related to state-of-the-art intracellular recording approaches on large neuronal networks. This research received funding from ERC-IDEAS Program: "Neuro-Plasmonics" [Grant n. 616213]. References: [1] M. Dipalo, G. C. Messina, H. Amin, R. La Rocca, V. Shalabaeva, A. Simi, A. Maccione, P. Zilio, L. Berdondini, F. De Angelis, Nanoscale 2015, 7, 3703. [2] R. La Rocca, G. C. Messina, M. Dipalo, V. Shalabaeva, F. De Angelis, Small 2015, 11, 4632. [3] G. C. Messina et al., Spatially, Temporally, and Quantitatively Controlled Delivery of

  14. Intrinsically active and pacemaker neurons in pluripotent stem cell-derived neuronal populations.

    PubMed

    Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig

    2014-03-11

    Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks.

  15. Intrinsically Active and Pacemaker Neurons in Pluripotent Stem Cell-Derived Neuronal Populations

    PubMed Central

    Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig

    2014-01-01

    Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks. PMID:24672755

  16. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of the MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.

  17. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
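
    The mean-field derivation itself is not reproduced here; the sketch below only simulates the kind of full network such models summarize: an all-to-all, pulse-coupled population of Izhikevich-type adaptive neurons with one heterogeneous parameter, from which the population firing rate (the quantity a mean-field model predicts) is measured. Parameter values and the crude coupling term are illustrative assumptions.

      import numpy as np

      # Heterogeneous all-to-all network of Izhikevich-type adaptive neurons
      # (a member of the two-dimensional integrate-and-fire class discussed above).
      rng = np.random.default_rng(2)
      N, T, dt = 200, 1000.0, 0.1                 # neurons, total time (ms), step (ms)
      a, c, d = 0.02, -65.0, 8.0
      b = rng.normal(0.2, 0.02, size=N)           # heterogeneous parameter
      I_ext = 10.0                                 # constant external drive
      J = 0.1                                      # crude instantaneous pulse coupling

      v = np.full(N, -65.0)
      u = b * v
      rates = []
      for step in range(int(T / dt)):
          fired = v >= 30.0
          v[fired] = c                             # reset
          u[fired] += d                            # spike-triggered adaptation
          I_syn = J * fired.sum()
          dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I_ext + I_syn
          du = a * (b * v - u)
          v += dt * dv
          u += dt * du
          rates.append(fired.mean() / (dt * 1e-3))  # population rate in Hz
      print("mean population rate (Hz):", round(float(np.mean(rates)), 1))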

  18. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons

    PubMed Central

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons. PMID:24416013

  19. Midbrain dopamine neurons in Parkinson's disease exhibit a dysregulated miRNA and target-gene network.

    PubMed

    Briggs, Christine E; Wang, Yulei; Kong, Benjamin; Woo, Tsung-Ung W; Iyer, Lakshmanan K; Sonntag, Kai C

    2015-08-27

    The degeneration of substantia nigra (SN) dopamine (DA) neurons in sporadic Parkinson's disease (PD) is characterized by disturbed gene expression networks. Micro(mi)RNAs are post-transcriptional regulators of gene expression and we recently provided evidence that these molecules may play a functional role in the pathogenesis of PD. Here, we document a comprehensive analysis of miRNAs in SN DA neurons and PD, including sex differences. Our data show that miRNAs are dysregulated in disease-affected neurons and differentially expressed between male and female samples with a trend of more up-regulated miRNAs in males and more down-regulated miRNAs in females. Unbiased Ingenuity Pathway Analysis (IPA) revealed a network of miRNA/target-gene associations that is consistent with dysfunctional gene and signaling pathways in PD pathology. Our study provides evidence for a general association of miRNAs with the cellular function and identity of SN DA neurons, and with deregulated gene expression networks and signaling pathways related to PD pathogenesis that may be sex-specific. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. GABA-A receptor antagonists increase firing, bursting and synchrony of spontaneous activity in neuronal networks grown on microelectrode arrays: a step towards chemical "fingerprinting"

    EPA Science Inventory

    Assessment of effects on spontaneous network activity in neurons grown on MEAs is a proposed method to screen chemicals for potential neurotoxicity. In addition, differential effects on network activity (chemical "fingerprints") could be used to classify chemical modes of action....

  1. Cholinergic and nitrergic neuronal networks in the goldfish telencephalon.

    PubMed

    Giraldez-Perez, Rosa M; Gaytan, Susana P; Pasaro, Rosario

    2013-01-01

    The general organization of cholinergic and nitrergic elements in the central nervous system seems to be highly conserved among vertebrates, with the involvement of these neurotransmitter systems now well established in sensory, motor and cognitive processing. The goldfish is a widely used animal model in neuroanatomical, neurophysiological, and behavioral research. The purpose of this study was to examine pallial and subpallial cholinoceptive, cholinergic and nitrergic populations in the goldfish telencephalon by means of histochemical and immunohistochemical techniques in order to identify neurons containing acetylcholinesterase (AChE), choline acetyltransferase (ChAT), NADPH-diaphorase (NADPHd), and neuronal nitric oxide synthase (nNOS), and to relate their distribution to their putative functional significance. Regions containing AChE-labeled neurons represented terminal fields of cholinergic inputs as well as a widespread distribution of AChE-related enzymes; these regions also usually contained NADPHd-labeled neurons and often contained small numbers of nNOS-positive cells. However, the ventral subdivisions of the medial and lateral parts of the dorsal telencephalic area, and the ventral and lateral parts of the ventral telencephalic area, were devoid of nNOS-labeled cells. ChAT-positive neurons were found only in the lateral part of the ventral telencephalic area. ChAT- and nNOS-positive fibers exhibited a radial orientation, and were seen as thin axons with en-passant boutons. The distribution of these elements could help to elucidate the role of cholinergic and nitrergic neuronal networks in the goldfish telencephalon.

  2. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations.

    PubMed

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.
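
    The NEST implementation is described in the paper; the sketch below only illustrates the waveform-relaxation idea in isolation, for two leaky units coupled by a gap-junction current: within one communication interval each unit is integrated against the other's waveform from the previous sweep, and sweeps are repeated until the waveforms stop changing. All constants are made up for illustration.

      import numpy as np

      # Waveform relaxation for two leaky units coupled by a gap junction:
      # I_gap = g * (V_other - V_self).
      dt, steps = 0.1, 100            # ms, one communication interval of 10 ms
      tau, g = 20.0, 0.5
      I_drive = np.array([1.5, 0.0])  # only unit 0 receives external drive

      def integrate(v0, other_waveform, drive):
          """Integrate one unit over the interval using the other unit's
          waveform from the previous relaxation sweep."""
          v = np.empty(steps + 1)
          v[0] = v0
          for k in range(steps):
              i_gap = g * (other_waveform[k] - v[k])
              v[k + 1] = v[k] + dt * (-v[k] / tau + drive + i_gap)
          return v

      v_start = np.array([0.0, 0.0])
      waves = [np.full(steps + 1, v_start[0]), np.full(steps + 1, v_start[1])]
      for sweep in range(20):                       # relaxation sweeps
          new = [integrate(v_start[0], waves[1], I_drive[0]),
                 integrate(v_start[1], waves[0], I_drive[1])]
          change = max(float(np.max(np.abs(new[i] - waves[i]))) for i in range(2))
          waves = new
          if change < 1e-10:
              break
      print("converged after", sweep + 1, "sweeps; final voltages:",
            round(float(waves[0][-1]), 3), round(float(waves[1][-1]), 3))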

  3. Artificial neuron-glia networks learning approach based on cooperative coevolution.

    PubMed

    Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B

    2015-06-01

    Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their superior performance relative to the traditional approach, i.e. without artificial astrocytes, has already been demonstrated on classification problems. However, the learning algorithms developed so far depend strongly on a set of glial parameters that are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be done to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem-dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process and makes it possible to test any reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than the previous ANGN approach and results competitive with ANN approaches.

  4. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  5. DELTAMETHRIN AND ESFENVALERATE INHIBIT SPONTANEOUS NETWORK ACTIVITY IN RAT CORTICAL NEURONS IN VITRO.

    EPA Science Inventory

    Understanding pyrethroid actions on neuronal networks will help to establish a mode of action for these compounds, which is needed for cumulative risk decisions under the Food Quality Protection Act of 1996. However, pyrethroid effects on spontaneous activity in networks of inter...

  6. The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model.

    PubMed

    Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena

    2015-01-01

    We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape the large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena ranging from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, a property that might affect the network's capacity to exhibit different dynamical regimes.
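
    As a purely numerical companion to the analytical treatment (parameter values are made up, and networkx is assumed to be available), one can scatter somata at random, connect every pair within an assumed effective radius, and compute the same graph measures.

      import networkx as nx

      # Place 200 "somata" uniformly at random in the unit square and connect
      # every pair closer than an assumed effective radius, then compute the
      # graph measures discussed above.
      G = nx.random_geometric_graph(200, radius=0.15, seed=3)

      clustering = nx.average_clustering(G)
      giant = G.subgraph(max(nx.connected_components(G), key=len))
      path_length = nx.average_shortest_path_length(giant)

      # Small-world coefficient relative to a random graph of equal density.
      R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=3)
      R_giant = R.subgraph(max(nx.connected_components(R), key=len))
      sigma = (clustering / nx.average_clustering(R)) / \
              (path_length / nx.average_shortest_path_length(R_giant))

      print(f"clustering={clustering:.3f}  path length={path_length:.3f}  "
            f"small-world coefficient={sigma:.2f}")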

  7. Widespread receptivity to neuropeptide PDF throughout the neuronal circadian clock network of Drosophila revealed by real-time cyclic AMP imaging.

    PubMed

    Shafer, Orie T; Kim, Dong Jo; Dunbar-Yaffe, Richard; Nikolaev, Viacheslav O; Lohse, Martin J; Taghert, Paul H

    2008-04-24

    The neuropeptide PDF is released by sixteen clock neurons in Drosophila and helps maintain circadian activity rhythms by coordinating a network of approximately 150 neuronal clocks. Whether PDF acts directly on elements of this neural network remains unknown. We address this question by adapting Epac1-camps, a genetically encoded cAMP FRET sensor, for use in the living brain. We find that a subset of the PDF-expressing neurons respond to PDF with long-lasting cAMP increases and confirm that such responses require the PDF receptor. In contrast, an unrelated Drosophila neuropeptide, DH31, stimulates large cAMP increases in all PDF-expressing clock neurons. Thus, the network of approximately 150 clock neurons displays widespread, though not uniform, PDF receptivity. This work introduces a sensitive means of measuring cAMP changes in a living brain with subcellular resolution. Specifically, it experimentally confirms the longstanding hypothesis that PDF is a direct modulator of most neurons in the Drosophila clock network.

  8. Channel Noise-Enhanced Synchronization Transitions Induced by Time Delay in Adaptive Neuronal Networks with Spike-Timing-Dependent Plasticity

    NASA Astrophysics Data System (ADS)

    Xie, Huijuan; Gong, Yubing; Wang, Baoying

    In this paper, we numerically study the effect of channel noise on synchronization transitions induced by time delay in adaptive scale-free Hodgkin-Huxley neuronal networks with spike-timing-dependent plasticity (STDP). It is found that synchronization transitions by time delay vary as channel noise intensity is changed and become most pronounced when channel noise intensity is optimal. This phenomenon depends on STDP and network average degree, and it can be either enhanced or suppressed as network average degree increases depending on channel noise intensity. These results show that there are optimal channel noise and network average degree that can enhance the synchronization transitions by time delay in the adaptive neuronal networks. These findings could be helpful for better understanding of the regulation effect of channel noise on synchronization of neuronal networks. They could find potential implications for information transmission in neural systems.

  9. Development of coherent neuronal activity patterns in mammalian cortical networks: common principles and local heterogeneity.

    PubMed

    Egorov, Alexei V; Draguhn, Andreas

    2013-01-01

    Many mammals are born in a very immature state and develop their rich repertoire of behavioral and cognitive functions postnatally. This development goes in parallel with changes in the anatomical and functional organization of cortical structures which are involved in most complex activities. The emerging spatiotemporal activity patterns in multi-neuronal cortical networks may indeed form a direct neuronal correlate of systemic functions like perception, sensorimotor integration, decision making or memory formation. During recent years, several studies--mostly in rodents--have shed light on the ontogenesis of such highly organized patterns of network activity. While each local network has its own peculiar properties, some general rules can be derived. We therefore review and compare data from the developing hippocampus, neocortex and--as an intermediate region--entorhinal cortex. All cortices seem to follow a characteristic sequence starting with uncorrelated activity in uncoupled single neurons where transient activity seems to have mostly trophic effects. In rodents, before and shortly after birth, cortical networks develop weakly coordinated multineuronal discharges which have been termed synchronous plateau assemblies (SPAs). While these patterns rely mostly on electrical coupling by gap junctions, the subsequent increase in number and maturation of chemical synapses leads to the generation of large-scale coherent discharges. These patterns have been termed giant depolarizing potentials (GDPs) for predominantly GABA-induced events or early network oscillations (ENOs) for mostly glutamatergic bursts, respectively. During the third to fourth postnatal week, cortical areas reach their final activity patterns with distinct network oscillations and highly specific neuronal discharge sequences which support adult behavior. While some of the mechanisms underlying maturation of network activity have been elucidated, much work remains to be done in order to fully

  10. Low Dose Isoflurane Exerts Opposing Effects on Neuronal Network Excitability in Neocortex and Hippocampus

    PubMed Central

    Ranft, Andreas; von Meyer, Ludwig; Zieglgänsberger, Walter; Kochs, Eberhard; Dodt, Hans-Ulrich

    2012-01-01

    The anesthetic excitement phase occurring during induction of anesthesia with volatile anesthetics is a well-known phenomenon in clinical practice. However, the physiological mechanisms underlying anesthetic-induced excitation are still unclear. Here we provide evidence from in vitro experiments performed on rat brain slices that the general anesthetic isoflurane at a concentration of about 0.1 mM can enhance neuronal network excitability in the hippocampus, while simultaneously reducing it in the neocortex. In contrast, isoflurane tissue concentrations above 0.3 mM expectedly caused a pronounced reduction in both brain regions. Neuronal network excitability was assessed by combining simultaneous multisite stimulation via a multielectrode array with recording intrinsic optical signals as a measure of neuronal population activity. PMID:22723999

  11. The Influence of Neuronal Density and Maturation on Network Activity of Hippocampal Cell Cultures: A Methodological Study

    PubMed Central

    Menegon, Andrea; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2013-01-01

    It is known that cell density influences the maturation process of in vitro neuronal networks. Neuronal cultures plated at different cell densities differ in the number of synapses per neuron and thus in single-neuron synaptic transmission, which results in density-dependent neuronal network activity. Although many authors have provided detailed information about the effects of cell density on neuronal culture activity, a dedicated report on the influence of density and age on hippocampal culture activity has not yet been published. Therefore, this work aims at providing reference data to researchers setting up an experimental study on hippocampal neuronal cultures, helping in planning and decoding the experiments. In this work, we analysed the effects of both neuronal density and culture age on functional attributes of maturing hippocampal cultures. We characterized the electrophysiological activity of neuronal cultures seeded at three different cell densities, recording their spontaneous electrical activity over maturation by means of MicroElectrode Arrays (MEAs). We gathered data from 86 independent hippocampal cultures to achieve solid statistical results, considering the high culture-to-culture variability. Network activity was evaluated in terms of simple spiking, burst and network burst features. We observed that electrical descriptors were characterized by a functional peak during maturation, followed by a stable phase (for sparse and medium-density cultures) or by a decreasing phase (for high-density neuronal cultures). Moreover, 900 cells/mm2 cultures showed characteristics suitable for long-lasting experiments (e.g. chronic effects of drug treatments) while 1800 cells/mm2 cultures should be preferred for experiments that require intense electrical activity (e.g. to evaluate the effect of inhibitory molecules). Finally, cell cultures at 3600 cells/mm2 are more appropriate for experiments in which time saving is relevant (e.g. drug screenings). These results are

  12. The influence of neuronal density and maturation on network activity of hippocampal cell cultures: a methodological study.

    PubMed

    Biffi, Emilia; Regalia, Giulia; Menegon, Andrea; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2013-01-01

    It is known that cell density influences the maturation process of in vitro neuronal networks. Neuronal cultures plated at different cell densities differ in the number of synapses per neuron and thus in single-neuron synaptic transmission, which results in density-dependent neuronal network activity. Although many authors have provided detailed information about the effects of cell density on neuronal culture activity, a dedicated report on the influence of density and age on hippocampal culture activity has not yet been published. Therefore, this work aims at providing reference data to researchers setting up an experimental study on hippocampal neuronal cultures, helping in planning and decoding the experiments. In this work, we analysed the effects of both neuronal density and culture age on functional attributes of maturing hippocampal cultures. We characterized the electrophysiological activity of neuronal cultures seeded at three different cell densities, recording their spontaneous electrical activity over maturation by means of MicroElectrode Arrays (MEAs). We gathered data from 86 independent hippocampal cultures to achieve solid statistical results, considering the high culture-to-culture variability. Network activity was evaluated in terms of simple spiking, burst and network burst features. We observed that electrical descriptors were characterized by a functional peak during maturation, followed by a stable phase (for sparse and medium-density cultures) or by a decreasing phase (for high-density neuronal cultures). Moreover, 900 cells/mm2 cultures showed characteristics suitable for long-lasting experiments (e.g. chronic effects of drug treatments) while 1800 cells/mm2 cultures should be preferred for experiments that require intense electrical activity (e.g. to evaluate the effect of inhibitory molecules). Finally, cell cultures at 3600 cells/mm2 are more appropriate for experiments in which time saving is relevant (e.g. drug screenings). These results

  13. Collective stochastic coherence in recurrent neuronal networks

    NASA Astrophysics Data System (ADS)

    Sancristóbal, Belén; Rebollo, Beatriz; Boada, Pol; Sanchez-Vives, Maria V.; Garcia-Ojalvo, Jordi

    2016-09-01

    Recurrent networks of dynamic elements frequently exhibit emergent collective oscillations, which can show substantial regularity even when the individual elements are considerably noisy. How noise-induced dynamics at the local level coexists with regular oscillations at the global level is still unclear. Here we show that a combination of stochastic recurrence-based initiation with deterministic refractoriness in an excitable network can reconcile these two features, leading to maximum collective coherence for an intermediate noise level. We report this behaviour in the slow oscillation regime exhibited by a cerebral cortex network under dynamical conditions resembling slow-wave sleep and anaesthesia. Computational analysis of a biologically realistic network model reveals that an intermediate level of background noise leads to quasi-regular dynamics. We verify this prediction experimentally in cortical slices subject to varying amounts of extracellular potassium, which modulates neuronal excitability and thus synaptic noise. The model also predicts that this effectively regular state should exhibit noise-induced memory of the spatial propagation profile of the collective oscillations, which is also verified experimentally. Taken together, these results allow us to construe the high regularity observed experimentally in the brain as an instance of collective stochastic coherence.

  14. PyNN: A Common Interface for Neuronal Network Simulators

    PubMed Central

    Davison, Andrew P.; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre

    2008-01-01

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. PMID:19194529

  15. Cultured networks of excitatory projection neurons and inhibitory interneurons for studying human cortical neurotoxicity

    PubMed Central

    Xu, Jin-Chong; Fan, Jing; Wang, Xueqing; Eacker, Stephen M.; Kam, Tae-In; Chen, Li; Yin, Xiling; Zhu, Juehua; Chi, Zhikai; Jiang, Haisong; Chen, Rong; Dawson, Ted M.; Dawson, Valina L.

    2017-01-01

    Translating neuroprotective treatments from discovery in cell and animal models to the clinic has proven challenging. To reduce the gap between basic studies of neurotoxicity and neuroprotection and clinically relevant therapies, we developed a human cortical neuron culture system from human embryonic stem cells (ESCs) or induced pluripotent stem cells (iPSCs) that generated both excitatory and inhibitory neuronal networks resembling the composition of the human cortex. This methodology used timed administration of retinoic acid (RA) to FOXG1 neural precursor cells, leading to differentiation of neuronal populations representative of the six cortical layers with both excitatory and inhibitory neuronal networks that were functional and homeostatically stable. In human cortical neuron cultures, excitotoxicity or ischemia due to oxygen and glucose deprivation led to cell death that was dependent on N-methyl-D-aspartate (NMDA) receptors, nitric oxide (NO), and poly(ADP-ribose) polymerase (PARP)-dependent signaling, a cell death pathway designated parthanatos to distinguish it from apoptosis, necroptosis and other forms of cell death. Neuronal cell death was attenuated by PARP inhibitors that are currently in clinical trials for cancer treatment. This culture system provides a new platform for the study of human cortical neurotoxicity and suggests that PARP inhibitors may be useful for ameliorating excitotoxic and ischemic cell death in human neurons. PMID:27053772

  16. A simple method for characterizing passive and active neuronal properties: application to striatal neurons.

    PubMed

    Lepora, Nathan F; Blomeley, Craig P; Hoyland, Darren; Bracci, Enrico; Overton, Paul G; Gurney, Kevin

    2011-11-01

    The study of active and passive neuronal dynamics usually relies on a sophisticated array of electrophysiological, staining and pharmacological techniques. We describe here a simple complementary method that recovers many findings of these more complex methods but relies only on a basic patch-clamp recording approach. Somatic short and long current pulses were applied in vitro to striatal medium spiny (MS) and fast spiking (FS) neurons from juvenile rats. The passive dynamics were quantified by fitting two-compartment models to the short current pulse data. Lumped conductances for the active dynamics were then found by compensating for this fitted passive dynamics within the current-voltage relationship from the long current pulse data. These estimated passive and active properties were consistent with previous, more complex estimations of the neuron properties, supporting the approach. Relationships within the MS and FS neuron types were also evident, including a gradation of MS neuron properties consistent with recent findings about D1 and D2 dopamine receptor expression. Application of the method to simulated neuron data supported the hypothesis that it gives reasonable estimates of membrane properties and gross morphology. Therefore, detailed information about the biophysics can be gained from this simple approach, which is useful for both classification of neuron type and biophysical modelling. Furthermore, because these methods rely upon no manipulations to the cell other than patch clamping, they are ideally suited to in vivo electrophysiology. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
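
    The full procedure fits a two-compartment circuit model; as a simplified, hypothetical illustration of its first ingredient, note that the voltage decay of a two-compartment cell after a brief current pulse is a sum of two exponentials, which can be fitted with scipy (the synthetic data below stand in for a real recording).

      import numpy as np
      from scipy.optimize import curve_fit

      def double_exp(t, a1, tau1, a2, tau2):
          """Sum-of-two-exponentials decay expected from a two-compartment cell."""
          return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      # Synthetic "recording": decay after a short somatic current pulse.
      t = np.arange(0.0, 100.0, 0.1)                    # ms
      rng = np.random.default_rng(4)
      v = double_exp(t, a1=4.0, tau1=25.0, a2=2.0, tau2=3.0)
      v += rng.normal(0.0, 0.05, size=t.size)           # recording noise

      p0 = (3.0, 20.0, 1.0, 5.0)                        # initial guess
      params, _ = curve_fit(double_exp, t, v, p0=p0)
      a1, tau1, a2, tau2 = params
      print(f"slow tau ~ {max(tau1, tau2):.1f} ms, fast tau ~ {min(tau1, tau2):.1f} ms")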

  17. A Simple Deep Learning Method for Neuronal Spike Sorting

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Wu, Haifeng; Zeng, Yu

    2017-10-01

    Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology, recent multi-electrode technologies can record the spiking activity of thousands of neurons simultaneously. Spike sorting at this scale increases the computational burden of conventional sorting algorithms. In this paper, we focus on reducing this complexity and introduce a deep learning algorithm, the principal component analysis network (PCANet), for spike sorting. The method starts from a conventional signal model and establishes a Toeplitz matrix. From the column vectors of this matrix we train a PCANet, from which eigenvector features of the spikes are extracted. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we choose two groups of simulated data from publicly available databases and compare the introduced method with conventional methods. The results indicate that the introduced method has lower complexity while achieving the same sorting errors as the conventional methods.
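
    PCANet stacks PCA filter banks with hashing and pooling; the sketch below is deliberately simpler (a single PCA stage feeding an SVM, on synthetic waveforms), but it shows the same extract-features-then-classify structure described above.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      # Synthetic spike waveforms from two "units" (noisy templates), standing in
      # for detected spikes from an extracellular recording.
      rng = np.random.default_rng(5)
      t = np.linspace(0.0, 1.0, 48)
      templates = [np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2),
                   0.8 * np.exp(-((t - 0.35) / 0.07) ** 2) - 0.6 * np.exp(-((t - 0.6) / 0.1) ** 2)]
      X = np.vstack([templates[k] + rng.normal(0.0, 0.05, size=(300, t.size))
                     for k in (0, 1)])
      y = np.repeat([0, 1], 300)

      # Feature extraction (single PCA stage, a simplification of PCANet)
      # followed by SVM classification.
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      pca = PCA(n_components=3).fit(X_tr)
      clf = SVC(kernel="rbf").fit(pca.transform(X_tr), y_tr)
      print("sorting accuracy:", clf.score(pca.transform(X_te), y_te))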

  18. Synchronization stability and pattern selection in a memristive neuronal network.

    PubMed

    Wang, Chunni; Lv, Mi; Alsaedi, Ahmed; Ma, Jun

    2017-11-01

    Spatial pattern formation and selection depend on the intrinsic self-organization and cooperation between nodes in spatiotemporal systems. Based on a memory neuron model, a regular network with electromagnetic induction is proposed to investigate the synchronization and pattern selection. In our model, the memristor is used to bridge the coupling between the magnetic flux and the membrane potential, and the induction current results from the time-varying electromagnetic field contributed by the exchange of ion currents and the distribution of charged ions. The statistical factor of synchronization predicts the transition of synchronization and pattern stability. The bifurcation analysis of the sampled time series for the membrane potential reveals the mode transition in electrical activity and pattern selection. A formation mechanism is outlined to account for the emergence of target waves. Although an external stimulus is imposed on each neuron uniformly, the diversity in the magnetic flux and the induction current leads to emergence of target waves in the studied network.

  19. Synchronization stability and pattern selection in a memristive neuronal network

    NASA Astrophysics Data System (ADS)

    Wang, Chunni; Lv, Mi; Alsaedi, Ahmed; Ma, Jun

    2017-11-01

    Spatial pattern formation and selection depend on the intrinsic self-organization and cooperation between nodes in spatiotemporal systems. Based on a memory neuron model, a regular network with electromagnetic induction is proposed to investigate the synchronization and pattern selection. In our model, the memristor is used to bridge the coupling between the magnetic flux and the membrane potential, and the induction current results from the time-varying electromagnetic field contributed by the exchange of ion currents and the distribution of charged ions. The statistical factor of synchronization predicts the transition of synchronization and pattern stability. The bifurcation analysis of the sampled time series for the membrane potential reveals the mode transition in electrical activity and pattern selection. A formation mechanism is outlined to account for the emergence of target waves. Although an external stimulus is imposed on each neuron uniformly, the diversity in the magnetic flux and the induction current leads to emergence of target waves in the studied network.

  20. The transfer and transformation of collective network information in gene-matched networks.

    PubMed

    Kitsukawa, Takashi; Yagi, Takeshi

    2015-10-09

    Networks, such as the human society network, social and professional networks, and biological system networks, contain vast amounts of information. Information signals in networks are distributed over nodes and transmitted through intricately wired links, making the transfer and transformation of such information difficult to follow. Here we introduce a novel method for describing network information and its transfer using a model network, the Gene-matched network (GMN), in which nodes (neurons) possess attributes (genes). In the GMN, nodes are connected according to their expression of common genes. Because neurons have multiple genes, the GMN is cluster-rich. We show that, in the GMN, information transfer and transformation were controlled systematically, according to the activity level of the network. Furthermore, information transfer and transformation could be traced numerically with a vector using genes expressed in the activated neurons, the active-gene array, which was used to assess the relative activity among overlapping neuronal groups. Interestingly, this coding style closely resembles the cell-assembly neural coding theory. The method introduced here could be applied to many real-world networks, since many systems, including human society and various biological systems, can be represented as a network of this type.
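
    A minimal sketch of how a gene-matched network of this kind could be assembled, assuming nodes carry random gene sets and an edge is drawn whenever two nodes share at least one gene; the "active-gene array" at the end is an illustrative tally over an arbitrary activated group, not the authors' exact procedure.

    ```python
    import itertools
    import random
    import networkx as nx

    random.seed(0)
    GENES = [f"g{i}" for i in range(20)]

    # Each "neuron" expresses a random subset of genes (node attributes).
    G = nx.Graph()
    for node in range(60):
        G.add_node(node, genes=frozenset(random.sample(GENES, 4)))

    # Gene-matched connectivity: link two neurons if they share at least one gene.
    for a, b in itertools.combinations(G.nodes, 2):
        if G.nodes[a]["genes"] & G.nodes[b]["genes"]:
            G.add_edge(a, b)

    # An "active-gene array": for a set of co-active neurons, count how often each
    # gene is expressed -- a vector summarizing the activated group.
    active = set(random.sample(list(G.nodes), 10))
    active_gene_array = {g: sum(g in G.nodes[n]["genes"] for n in active) for g in GENES}
    print(sorted(active_gene_array.items(), key=lambda kv: -kv[1])[:5])
    ```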

  1. Plasticity of Neuron-Glial Transmission: Equipping Glia for Long-Term Integration of Network Activity.

    PubMed

    Croft, Wayne; Dobson, Katharine L; Bellamy, Tomas C

    2015-01-01

    The capacity of synaptic networks to express activity-dependent changes in strength and connectivity is essential for learning and memory processes. In recent years, glial cells (most notably astrocytes) have been recognized as active participants in the modulation of synaptic transmission and synaptic plasticity, implicating these electrically nonexcitable cells in information processing in the brain. While the concept of bidirectional communication between neurons and glia and the mechanisms by which gliotransmission can modulate neuronal function are well established, less attention has been focussed on the computational potential of neuron-glial transmission itself. In particular, whether neuron-glial transmission is itself subject to activity-dependent plasticity and what the computational properties of such plasticity might be has not been explored in detail. In this review, we summarize current examples of plasticity in neuron-glial transmission, in many brain regions and neurotransmitter pathways. We argue that induction of glial plasticity typically requires repetitive neuronal firing over long time periods (minutes-hours) rather than the short-lived, stereotyped trigger typical of canonical long-term potentiation. We speculate that this equips glia with a mechanism for monitoring average firing rates in the synaptic network, which is suited to the longer term roles proposed for astrocytes in neurophysiology.

  2. Population activity structure of excitatory and inhibitory neurons

    PubMed Central

    Doiron, Brent

    2017-01-01

    Many studies use population analysis approaches, such as dimensionality reduction, to characterize the activity of large groups of neurons. To date, these methods have treated each neuron equally, without taking into account whether neurons are excitatory or inhibitory. We studied population activity structure as a function of neuron type by applying factor analysis to spontaneous activity from spiking networks with balanced excitation and inhibition. Throughout the study, we characterized population activity structure by measuring its dimensionality and the percentage of overall activity variance that is shared among neurons. First, by sampling only excitatory or only inhibitory neurons, we found that the activity structures of these two populations in balanced networks are measurably different. We also found that the population activity structure is dependent on the ratio of excitatory to inhibitory neurons sampled. Finally we classified neurons from extracellular recordings in the primary visual cortex of anesthetized macaques as putative excitatory or inhibitory using waveform classification, and found similarities with the neuron type-specific population activity structure of a balanced network with excitatory clustering. These results imply that knowledge of neuron type is important, and allows for stronger statistical tests, when interpreting population activity structure. PMID:28817581
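
    A minimal sketch of the population-level measurements described here (percent shared variance and dimensionality from factor analysis), applied to synthetic spike counts; the specific metric definitions below are common conventions and may differ in detail from those used in the study.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(2)
    n_neurons, n_trials, n_latent = 50, 400, 3

    # Synthetic spike counts with low-dimensional shared structure plus private noise.
    loadings = rng.standard_normal((n_neurons, n_latent))
    latents = rng.standard_normal((n_trials, n_latent))
    counts = latents @ loadings.T + rng.standard_normal((n_trials, n_neurons))

    fa = FactorAnalysis(n_components=10).fit(counts)
    L = fa.components_.T                      # (n_neurons, n_factors) loading matrix
    shared = (L ** 2).sum(axis=1)             # shared variance per neuron
    percent_shared = shared / (shared + fa.noise_variance_)

    # Dimensionality: number of factors capturing 95% of the shared variance.
    eigvals = np.linalg.eigvalsh(L @ L.T)[::-1]
    dim = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1)
    print("mean % shared variance:", percent_shared.mean(), " dimensionality:", dim)
    ```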

  3. Nanoparticles Induce Changes of the Electrical Activity of Neuronal Networks on Microelectrode Array Neurochips

    PubMed Central

    Gramowski, Alexandra; Flossdorf, Juliane; Bhattacharya, Kunal; Jonas, Ludwig; Lantow, Margareta; Rahman, Qamar; Schiffmann, Dietmar; Weiss, Dieter G.; Dopp, Elke

    2010-01-01

    Background: Nanomaterials are extensively used in industry and daily life, but little is known about possible health effects. Intensified research regarding the toxicity of nanomaterials is urgently needed. Several studies have demonstrated that nanoparticles (NPs; diameter < 100 nm) can be transported to the central nervous system; however, interference of NPs with the electrical activity of neurons has not yet been shown. Objectives/methods: We investigated the acute electrophysiological effects of carbon black (CB), hematite (Fe2O3), and titanium dioxide (TiO2) NPs in primary murine cortical networks on microelectrode array (MEA) neurochips. Uptake of NPs was studied by transmission electron microscopy (TEM), and intracellular formation of reactive oxygen species (ROS) was studied by flow cytometry. Results: The multiparametric assessment of electrical activity changes caused by the NPs revealed an NP-specific and concentration-dependent inhibition of the firing patterns. The number of action potentials and the frequency of their patterns (spike and burst rates) showed a significant particle-dependent decrease and significant differences in potency. Further, we detected the uptake of CB, Fe2O3, and TiO2 into glial cells and neurons by TEM. Additionally, 24 hr exposure to TiO2 NPs caused intracellular formation of ROS in neuronal and glial cells, whereas exposure to CB and Fe2O3 NPs up to a concentration of 10 μg/cm2 did not induce significant changes in free radical levels. Conclusion: NPs at low particle concentrations are able to exhibit a neurotoxic effect by disturbing the electrical activity of neuronal networks, but the underlying mechanisms depend on the particle type. PMID:20457553

  4. Autaptic pacemaker mediated propagation of weak rhythmic activity across small-world neuronal networks

    NASA Astrophysics Data System (ADS)

    Yilmaz, Ergin; Baysal, Veli; Ozer, Mahmut; Perc, Matjaž

    2016-02-01

    We study the effects of an autapse, which is mathematically described as a self-feedback loop, on the propagation of weak, localized pacemaker activity across a Newman-Watts small-world network consisting of stochastic Hodgkin-Huxley neurons. We consider that only the pacemaker neuron, which is stimulated by a subthreshold periodic signal, has an electrical autapse that is characterized by a coupling strength and a delay time. We focus on the impact of the coupling strength, the network structure, the properties of the weak periodic stimulus, and the properties of the autapse on the transmission of localized pacemaker activity. Obtained results indicate the existence of optimal channel noise intensity for the propagation of the localized rhythm. Under optimal conditions, the autapse can significantly improve the propagation of pacemaker activity, but only for a specific range of the autaptic coupling strength. Moreover, the autaptic delay time has to be equal to the intrinsic oscillation period of the Hodgkin-Huxley neuron or its integer multiples. We analyze the inter-spike interval histogram and show that the autapse enhances or suppresses the propagation of the localized rhythm by increasing or decreasing the phase locking between the spiking of the pacemaker neuron and the weak periodic signal. In particular, when the autaptic delay time is equal to the intrinsic period of oscillations an optimal phase locking takes place, resulting in a dominant time scale of the spiking activity. We also investigate the effects of the network structure and the coupling strength on the propagation of pacemaker activity. We find that there exist an optimal coupling strength and an optimal network structure that together warrant an optimal propagation of the localized rhythm.
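
    A minimal sketch of the network topology described (a Newman-Watts small-world graph with an electrical autapse, i.e. a self-loop, on the pacemaker node), using networkx; the stochastic Hodgkin-Huxley dynamics and the delayed autaptic current are not simulated here.

    ```python
    import networkx as nx

    N, K, P = 60, 4, 0.1          # nodes, nearest neighbours, shortcut-adding probability

    # Newman-Watts small-world graph: ring lattice plus randomly added shortcuts
    # (no edges are removed, unlike the Watts-Strogatz rewiring construction).
    g = nx.newman_watts_strogatz_graph(N, K, P, seed=0)

    pacemaker = 0
    g.add_edge(pacemaker, pacemaker)   # electrical autapse = self-feedback loop
    # In a full simulation the autapse would feed the pacemaker's own membrane
    # potential back with a coupling strength and a delay; here we only mark it.
    print(g.number_of_edges(), "edges; pacemaker has self-loop:",
          g.has_edge(pacemaker, pacemaker))
    ```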

  5. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Michiels van Kessenich, L.; de Arcangelis, L.; Herrmann, H. J.

    2016-08-01

    Neuronal avalanches measured in vitro and in vivo in different cortical networks consistently exhibit power law behaviour for the size and duration distributions with exponents typical for a mean field self-organized branching process. These exponents are also recovered in neuronal network simulations implementing various neuronal dynamics on different network topologies. They can therefore be considered a very robust feature of spontaneous neuronal activity. Interestingly, this scaling behaviour is also observed on regular lattices in finite dimensions, which raises the question about the origin of the mean field behavior observed experimentally. In this study we provide an answer to this open question by investigating the effect of activity dependent plasticity in combination with the neuronal refractory time in a neuronal network. Results show that the refractory time hinders backward avalanches forcing a directed propagation. Hebbian plastic adaptation plays the role of sculpting these directed avalanche patterns into the topology of the network slowly changing it into a branched structure where loops are marginal.

  6. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches.

    PubMed

    Michiels van Kessenich, L; de Arcangelis, L; Herrmann, H J

    2016-08-18

    Neuronal avalanches measured in vitro and in vivo in different cortical networks consistently exhibit power law behaviour for the size and duration distributions with exponents typical for a mean field self-organized branching process. These exponents are also recovered in neuronal network simulations implementing various neuronal dynamics on different network topologies. They can therefore be considered a very robust feature of spontaneous neuronal activity. Interestingly, this scaling behaviour is also observed on regular lattices in finite dimensions, which raises the question about the origin of the mean field behavior observed experimentally. In this study we provide an answer to this open question by investigating the effect of activity dependent plasticity in combination with the neuronal refractory time in a neuronal network. Results show that the refractory time hinders backward avalanches forcing a directed propagation. Hebbian plastic adaptation plays the role of sculpting these directed avalanche patterns into the topology of the network slowly changing it into a branched structure where loops are marginal.
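
    Avalanche sizes and durations of the kind analysed here are typically extracted from binned network spike counts, with an avalanche defined as a run of consecutive non-empty bins; a minimal sketch follows (the placeholder activity is random, so no power law is expected from it).

    ```python
    import numpy as np

    def avalanches(binned_counts):
        """binned_counts: 1-D array of network spike counts per time bin.
        Returns (sizes, durations) of avalanches, i.e. runs of non-empty bins."""
        sizes, durations = [], []
        size = dur = 0
        for c in binned_counts:
            if c > 0:
                size += c
                dur += 1
            elif dur > 0:
                sizes.append(size); durations.append(dur)
                size = dur = 0
        if dur > 0:
            sizes.append(size); durations.append(dur)
        return np.array(sizes), np.array(durations)

    rng = np.random.default_rng(3)
    counts = rng.poisson(0.7, size=100_000)      # placeholder activity
    s, d = avalanches(counts)
    print("n avalanches:", len(s), " mean size:", s.mean(), " mean duration:", d.mean())
    ```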

  7. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503

  8. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    PubMed

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
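
    For context, the conventional NEF route that the analytical decoders are compared against obtains decoders from a regularized least-squares problem over the neurons' tuning curves; the sketch below shows that baseline computation on made-up rectified-linear tuning curves, not the paper's closed-form scale-invariant decoders.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, S = 200, 500                       # neurons, evaluation points
    x = np.linspace(-1, 1, S)

    # Heterogeneous rectified-linear "tuning curves" standing in for type-I rates.
    gains = rng.uniform(0.5, 2.0, size=N)
    biases = rng.uniform(-1.0, 1.0, size=N)
    encoders = rng.choice([-1.0, 1.0], size=N)
    A = np.maximum(0.0, gains[:, None] * (encoders[:, None] * x[None, :]) + biases[:, None])

    # Conventional NEF-style decoders: regularized least squares, A^T d ~= f(x).
    target = x                                      # decode the identity function
    reg = 0.1 * A.max()
    d = np.linalg.solve(A @ A.T + (reg ** 2) * S * np.eye(N), A @ target)

    xhat = A.T @ d
    print("RMS decoding error:", np.sqrt(np.mean((xhat - target) ** 2)))
    ```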

  9. Role of Ongoing, Intrinsic Activity of Neuronal Populations for Quantitative Neuroimaging of Functional Magnetic Resonance Imaging–Based Networks

    PubMed Central

    Herman, Peter; Sanganahalli, Basavaraju G.; Coman, Daniel; Blumenfeld, Hal; Rothman, Douglas L.

    2011-01-01

    A primary objective in neuroscience is to determine how neuronal populations process information within networks. In humans and animal models, functional magnetic resonance imaging (fMRI) is gaining increasing popularity for network mapping. Although neuroimaging with fMRI—conducted with or without tasks—is actively discovering new brain networks, current fMRI data analysis schemes disregard the importance of the total neuronal activity in a region. In task fMRI experiments, the baseline is differenced away to disclose areas of small evoked changes in the blood oxygenation level-dependent (BOLD) signal. In resting-state fMRI experiments, the spotlight is on regions revealed by correlations of tiny fluctuations in the baseline (or spontaneous) BOLD signal. Interpretation of fMRI-based networks is obscured further because the BOLD signal indirectly reflects neuronal activity and difference/correlation maps are thresholded. Since the small changes of BOLD signal typically observed in cognitive fMRI experiments represent a minimal fraction of the total energy/activity in a given area, the relevance of fMRI-based networks is uncertain, because the majority of neuronal energy/activity is ignored. Thus, an alternative for quantitative neuroimaging of fMRI-based networks is a perspective in which the activity of a neuronal population is accounted for by the demanded oxidative energy (CMRO2). In this article, we argue that network mapping can be improved by including information about both the baseline neuronal energy/activity and the small differences/fluctuations of the BOLD signal. Thus, total energy/activity information can be obtained through use of calibrated fMRI to quantify differences of ΔCMRO2 and through resting-state positron emission tomography/magnetic resonance spectroscopy measurements for average CMRO2. PMID:22433047

  10. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    PubMed Central

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology. PMID:26441628

  11. Synaptic dynamics and neuronal network connectivity are reflected in the distribution of times in Up states.

    PubMed

    Dao Duc, Khanh; Parutto, Pierre; Chen, Xiaowei; Epsztein, Jérôme; Konnerth, Arthur; Holcman, David

    2015-01-01

    The dynamics of neuronal networks connected by synaptic dynamics can sustain long periods of depolarization that can last for hundreds of milliseconds, such as Up states recorded during sleep or anesthesia. Yet the underlying mechanism driving these periods remains unclear. We show here within a mean-field model that the residence time of the neuronal membrane potential in cortical Up states does not follow a Poissonian law, but presents several peaks. Furthermore, the present modeling approach allows extracting some information about the neuronal network connectivity from the time distribution histogram. Based on a synaptic-depression model, we find that these peaks, which can be observed in histograms of patch-clamp recordings, are not artifacts of electrophysiological measurements, but rather are an inherent property of the network dynamics. Analysis of the equations reveals a stable focus located close to the unstable limit cycle, delimiting a region that defines the Up state. The model further shows that the peaks observed in the Up state time distribution are due to winding around the focus before escaping from the basin of attraction. Finally, we use in vivo recordings of intracellular membrane potential and recover, from the peak distribution, some information about the network connectivity. We conclude that it is possible to recover the network connectivity from the distribution of times that the neuronal membrane voltage spends in Up states.

  12. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    PubMed

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
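
    To convey the flavour of the formalism, the sketch below composes "connection sets" as predicates over index pairs with union and difference operators; this is an illustration only and deliberately does not reproduce the API of the released csa Python package.

    ```python
    # Illustrative composition of "connection sets" as predicates over (i, j) pairs.
    # This mimics the spirit of the connection-set algebra; it is NOT the API of
    # the released `csa` Python package.
    import random

    random.seed(0)

    def one_to_one(i, j):
        return i == j

    def random_set(p):
        return lambda i, j: random.random() < p

    def union(*sets):
        return lambda i, j: any(s(i, j) for s in sets)

    def minus(a, b):
        return lambda i, j: a(i, j) and not b(i, j)

    # "10% random connectivity, plus one-to-one, but with no self-connections."
    cs = minus(union(random_set(0.1), one_to_one), lambda i, j: i == j)

    n = 20
    connections = [(i, j) for i in range(n) for j in range(n) if cs(i, j)]
    print(len(connections), "connections realized for", n, "x", n, "nodes")
    ```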

  13. Alterations of cortical GABA neurons and network oscillations in schizophrenia.

    PubMed

    Gonzalez-Burgos, Guillermo; Hashimoto, Takanori; Lewis, David A

    2010-08-01

    The hypothesis that alterations of cortical inhibitory gamma-aminobutyric acid (GABA) neurons are a central element in the pathology of schizophrenia has emerged from a series of postmortem studies. How such abnormalities may contribute to the clinical features of schizophrenia has been substantially informed by a convergence with basic neuroscience studies revealing complex details of GABA neuron function in the healthy brain. Importantly, activity of the parvalbumin-containing class of GABA neurons has been linked to the production of cortical network oscillations. Furthermore, growing knowledge supports the concept that gamma band oscillations (30-80 Hz) are an essential mechanism for cortical information transmission and processing. Herein we review recent studies further indicating that inhibition from parvalbumin-positive GABA neurons is necessary to produce gamma oscillations in cortical circuits; provide an update on postmortem studies documenting that deficits in the expression of glutamic acid decarboxylase67, which accounts for most GABA synthesis in the cortex, are widely observed in schizophrenia; and describe studies using novel, noninvasive approaches directly assessing potential relations between alterations in GABA, oscillations, and cognitive function in schizophrenia.

  14. Topologically invariant macroscopic statistics in balanced networks of conductance-based integrate-and-fire neurons.

    PubMed

    Yger, Pierre; El Boustani, Sami; Destexhe, Alain; Frégnac, Yves

    2011-10-01

    The relationship between the dynamics of neural networks and their patterns of connectivity is far from clear, despite its importance for understanding functional properties. Here, we have studied sparsely-connected networks of conductance-based integrate-and-fire (IF) neurons with balanced excitatory and inhibitory connections and with finite axonal propagation speed. We focused on the genesis of states with highly irregular spiking activity and synchronous firing patterns at low rates, called slow Synchronous Irregular (SI) states. In such balanced networks, we examined the "macroscopic" properties of the spiking activity, such as ensemble correlations and mean firing rates, for different intracortical connectivity profiles ranging from randomly connected networks to networks with Gaussian-distributed local connectivity. We systematically computed the distance-dependent correlations at the extracellular (spiking) and intracellular (membrane potential) levels between randomly assigned pairs of neurons. The main finding is that such properties, when they are averaged at a macroscopic scale, are invariant with respect to the different connectivity patterns, provided the excitatory-inhibitory balance is the same. In particular, the same correlation structure holds for different connectivity profiles. In addition, we examined the response of such networks to external input, and found that the correlation landscape can be modulated by the mean level of synchrony imposed by the external drive. This modulation was found again to be independent of the external connectivity profile. We conclude that first and second-order "mean-field" statistics of such networks do not depend on the details of the connectivity at a microscopic scale. This study is an encouraging step toward a mean-field description of topological neuronal networks.

  15. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    PubMed Central

    Onesto, Valentina; Cosentino, Carlo; Di Fabrizio, Enzo; Cesarelli, Mario; Amato, Francesco; Gentile, Francesco

    2016-01-01

    Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many neurons in a grid may modify and perhaps improve the information quality, in contrast to a few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chips. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant, reproducing the early stage of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the post-transmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect. PMID:27403421
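
    A minimal sketch of the network-construction step described (neurons at random planar positions wired along the edges of a Delaunay triangulation, which keeps the mean number of synapses per neuron roughly constant as density changes); the integrate-and-fire dynamics and the information-theoretic analysis are omitted.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(5)
    pts = rng.uniform(0, 1, size=(200, 2))          # neuron positions in the plane

    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:                   # each triangle contributes 3 edges
        for a, b in [(0, 1), (1, 2), (0, 2)]:
            i, j = sorted((simplex[a], simplex[b]))
            edges.add((i, j))

    mean_degree = 2 * len(edges) / len(pts)
    print("edges:", len(edges), " mean synapses per neuron ~", round(mean_degree, 2))
    # For a planar Delaunay graph the mean degree approaches 6 as N grows,
    # i.e. it stays approximately constant irrespective of the cell density.
    ```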

  16. Multistability, local pattern formation, and global collective firing in a small-world network of nonleaky integrate-and-fire neurons.

    PubMed

    Rothkegel, Alexander; Lehnertz, Klaus

    2009-03-01

    We investigate numerically the collective dynamical behavior of pulse-coupled nonleaky integrate-and-fire neurons that are arranged on a two-dimensional small-world network. To ensure ongoing activity, we impose a probability for spontaneous firing for each neuron. We study network dynamics evolving from different sets of initial conditions in dependence on coupling strength and rewiring probability. Besides a homogeneous equilibrium state for low coupling strength, we observe different local patterns, including cyclic waves, spiral waves, and turbulent-like patterns, which, depending on network parameters, interfere with the global collective firing of the neurons. We attribute the various network dynamics to distinct regimes in the parameter space. For the same network parameters, different network dynamics can be observed depending on the set of initial conditions only. Such multistable behavior and the interplay between local pattern formation and global collective firing may be attributable to the spatiotemporal dynamics of biological networks.

  17. Midline thalamic neurons are differentially engaged during hippocampus network oscillations.

    PubMed

    Lara-Vásquez, Ariel; Espinosa, Nelson; Durán, Ernesto; Stockle, Marcelo; Fuentealba, Pablo

    2016-07-14

    The midline thalamus is reciprocally connected with the medial temporal lobe, where neural circuitry essential for spatial navigation and memory formation resides. Yet, little information is available on the dynamic relationship between activity patterns in the midline thalamus and medial temporal lobe. Here, we report on the functional heterogeneity of anatomically-identified thalamic neurons and the differential modulation of their activity with respect to dorsal hippocampal rhythms in the anesthetized mouse. Midline thalamic neurons expressing the calcium-binding protein calretinin, irrespective of their selective co-expression of calbindin, discharged at overall low levels, did not increase their activity during hippocampal theta oscillations, and their firing rates were inhibited during hippocampal sharp wave-ripples. Conversely, thalamic neurons lacking calretinin discharged at higher rates, increased their activity during hippocampal theta waves, but remained unaffected during sharp wave-ripples. Our results indicate that the midline thalamic system comprises at least two different classes of thalamic projection neuron, which can be partly defined by their differential engagement by hippocampal pathways during specific network oscillations that accompany distinct behavioral contexts. Thus, different midline thalamic neuronal populations might be selectively recruited to support distinct stages of memory processing, consistent with the thalamus being pivotal in the dialogue of cortical circuits.

  18. DeepNeuron: an open deep learning toolbox for neuron tracing.

    PubMed

    Zhou, Zhi; Kuo, Hsien-Chi; Peng, Hanchuan; Long, Fuhui

    2018-06-06

    Reconstructing three-dimensional (3D) morphology of neurons is essential for understanding brain structures and functions. Over the past decades, a number of neuron tracing tools, including manual, semiautomatic, and fully automatic approaches, have been developed to extract and analyze 3D neuronal structures. Nevertheless, most of them were developed by coding certain rules to extract and connect structural components of a neuron, showing limited performance on complicated neuron morphology. Recently, deep learning has outperformed many other machine learning methods in a wide range of image analysis and computer vision tasks. Here we developed a new open-source toolbox, DeepNeuron, which uses deep learning networks to learn features and rules from data and trace neuron morphology in light microscopy images. DeepNeuron provides a family of modules to solve basic yet challenging problems in neuron tracing. These problems include, but are not limited to: (1) detecting neuron signal under different image conditions, (2) connecting neuronal signals into tree(s), (3) pruning and refining tree morphology, (4) quantifying the quality of morphology, and (5) classifying dendrites and axons in real time. We have tested DeepNeuron using light microscopy images, including bright-field and confocal images of human and mouse brain, on which DeepNeuron demonstrates robustness and accuracy in neuron tracing.

  19. Dopamine Attenuates Ketamine-Induced Neuronal Apoptosis in the Developing Rat Retina Independent of Early Synchronized Spontaneous Network Activity.

    PubMed

    Dong, Jing; Gao, Lingqi; Han, Junde; Zhang, Junjie; Zheng, Jijian

    2017-07-01

    Deprivation of spontaneous rhythmic electrical activity in early development by anesthesia administration, among other interventions, induces neuronal apoptosis. However, it is unclear whether enhancement of neuronal electrical activity attenuates neuronal apoptosis in either normal development or after anesthesia exposure. The present study investigated the effects of dopamine, an enhancer of spontaneous rhythmic electrical activity, on ketamine-induced neuronal apoptosis in the developing rat retina. TUNEL and immunohistochemical assays indicated that ketamine time- and dose-dependently aggravated physiological and ketamine-induced apoptosis and inhibited early-synchronized spontaneous network activity. Dopamine administration reversed ketamine-induced neuronal apoptosis, but did not reverse the inhibitory effects of ketamine on early synchronized spontaneous network activity despite enhancing it in controls. Blockade of D1, D2, and A2A receptors and inhibition of cAMP/PKA signaling partially antagonized the protective effect of dopamine against ketamine-induced apoptosis. Together, these data indicate that dopamine attenuates ketamine-induced neuronal apoptosis in the developing rat retina by activating the D1, D2, and A2A receptors, and upregulating cAMP/PKA signaling, rather than through modulation of early synchronized spontaneous network activity.

  20. Network control principles predict neuron function in the Caenorhabditis elegans connectome

    PubMed Central

    Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László

    2017-01-01

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social and technological networks1–3. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode C. elegans4–6, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires twelve neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation7–13, as well as one previously uncharacterised neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed, with single-cell ablations of DD04 or DD05, but not DD02 or DD03, specifically affecting posterior body movements. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterised connectomes. PMID:29045391

  1. Network control principles predict neuron function in the Caenorhabditis elegans connectome

    NASA Astrophysics Data System (ADS)

    Yan, Gang; Vértes, Petra E.; Towlson, Emma K.; Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László

    2017-10-01

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.

  2. Network control principles predict neuron function in the Caenorhabditis elegans connectome.

    PubMed

    Yan, Gang; Vértes, Petra E; Towlson, Emma K; Chew, Yee Lian; Walker, Denise S; Schafer, William R; Barabási, Albert-László

    2017-10-26

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.
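
    The control framework applied to the connectome builds on structural controllability, in which a minimum driver-node set is obtained from a maximum matching of the directed network; a minimal sketch on a toy directed graph follows (the toy edges are placeholders, not connectome data).

    ```python
    import networkx as nx

    # Toy directed network standing in for a connectome wiring diagram.
    edges = [(0, 1), (1, 2), (2, 3), (3, 1), (0, 4), (4, 5), (5, 3)]
    G = nx.DiGraph(edges)

    # Structural controllability: build the bipartite graph of out-copies vs in-copies,
    # find a maximum matching; nodes whose in-copy is unmatched are driver nodes.
    B = nx.Graph()
    out_nodes = {("out", u) for u in G}
    in_nodes = {("in", v) for v in G}
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from(in_nodes, bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges)

    matching = nx.algorithms.bipartite.hopcroft_karp_matching(B, top_nodes=out_nodes)
    matched_targets = {v for (side, v) in matching if side == "in"}
    drivers = set(G) - matched_targets
    if not drivers:
        drivers = {next(iter(G))}   # perfect matching: a single driver node suffices
    print("driver nodes:", sorted(drivers))
    ```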

  3. Signal propagation and logic gating in networks of integrate-and-fire neurons.

    PubMed

    Vogels, Tim P; Abbott, L F

    2005-11-16

    Transmission of signals within the brain is essential for cognitive function, but it is not clear how neural circuits support reliable and accurate signal propagation over a sufficiently large dynamic range. Two modes of propagation have been studied: synfire chains, in which synchronous activity travels through feedforward layers of a neuronal network, and the propagation of fluctuations in firing rate across these layers. In both cases, a sufficient amount of noise, which was added to previous models from an external source, had to be included to support stable propagation. Sparse, randomly connected networks of spiking model neurons can generate chaotic patterns of activity. We investigate whether this activity, which is a more realistic noise source, is sufficient to allow for signal transmission. We find that, for rate-coded signals but not for synfire chains, such networks support robust and accurate signal reproduction through up to six layers if appropriate adjustments are made in synaptic strengths. We investigate the factors affecting transmission and show that multiple signals can propagate simultaneously along different pathways. Using this feature, we show how different types of logic gates can arise within the architecture of the random network through the strengthening of specific synapses.

  4. Selection of Multiarmed Spiral Waves in a Regular Network of Neurons

    PubMed Central

    Hu, Bolin; Ma, Jun; Tang, Jun

    2013-01-01

    Formation and selection of multiarmed spiral waves due to spontaneous symmetry breaking are investigated in a regular network of Hodgkin-Huxley neurons by changing the excitability and imposing spatial forcing currents on the neurons in the network. The arm number of the multiarmed spiral wave depends on the distribution of spatial forcing currents and the excitability diversity in the network, and the selection criterion for supporting multiarmed spiral waves is discussed. A broken spiral segment is measured by a short polygonal line connected by three adjacent points (controlled nodes), and a double-spiral wave can be developed from the spiral segment. A multiarmed spiral wave is formed when a group of double-spiral waves rotate in the same direction in the network. In the numerical studies, a group of controlled nodes is selected and spatial forcing currents are imposed on these nodes; our results show that an l-arm stable spiral wave (l = 2, 3, 4, ..., 8) can be induced to occupy the network completely. It is also confirmed that low excitability is critical for inducing multiarmed spiral waves, while high excitability is important for propagating the multiarmed spiral wave outward so that a distinct multiarmed spiral wave can occupy the network completely. Our results confirm that symmetry breaking of a target wave in the media accounts for the emergence of multiarmed spiral waves, which can be developed from a group of single-armed spiral waves under appropriate conditions; thus the potential formation mechanism of multiarmed spiral waves in the media is explained. PMID:23935966

  5. Endogenous cholinergic tone modulates spontaneous network level neuronal activity in primary cortical cultures grown on multi-electrode arrays.

    PubMed

    Hammond, Mark W; Xydas, Dimitris; Downes, Julia H; Bucci, Giovanna; Becerra, Victor; Warwick, Kevin; Constanti, Andrew; Nasuto, Slawomir J; Whalley, Benjamin J

    2013-03-26

    Cortical cultures grown long-term on multi-electrode arrays (MEAs) are frequently and extensively used as models of cortical networks in studies of neuronal firing activity, neuropharmacology, toxicology and mechanisms underlying synaptic plasticity. However, in contrast to the predominantly asynchronous neuronal firing activity exhibited by intact cortex, electrophysiological activity of mature cortical cultures is dominated by spontaneous epileptiform-like global burst events which hinders their effective use in network-level studies, particularly for neurally-controlled animat ('artificial animal') applications. Thus, the identification of culture features that can be exploited to produce neuronal activity more representative of that seen in vivo could increase the utility and relevance of studies that employ these preparations. Acetylcholine has a recognised neuromodulatory role affecting excitability, rhythmicity, plasticity and information flow in vivo although its endogenous production by cortical cultures and subsequent functional influence upon neuronal excitability remains unknown. Consequently, using MEA electrophysiological recording supported by immunohistochemical and RT-qPCR methods, we demonstrate for the first time, the presence of intrinsic cholinergic neurons and significant, endogenous cholinergic tone in cortical cultures with a characterisation of the muscarinic and nicotinic components that underlie modulation of spontaneous neuronal activity. We found that tonic muscarinic ACh receptor (mAChR) activation affects global excitability and burst event regularity in a culture age-dependent manner whilst, in contrast, tonic nicotinic ACh receptor (nAChR) activation can modulate burst duration and the proportion of spikes occurring within bursts in a spatio-temporal fashion. We suggest that the presence of significant endogenous cholinergic tone in cortical cultures and the comparability of its modulatory effects to those seen in intact brain

  6. Synaptic and intrinsic activation of GABAergic neurons in the cardiorespiratory brainstem network.

    PubMed

    Frank, Julie G; Mendelowitz, David

    2012-01-01

    GABAergic pathways in the brainstem play an essential role in respiratory rhythmogenesis and interactions between the respiratory and cardiovascular neuronal control networks. However, little is known about the identity and function of these GABAergic inhibitory neurons and what determines their activity. In this study we have identified a population of GABAergic neurons in the ventrolateral medulla that receive increased excitatory post-synaptic potentials during inspiration, but also have spontaneous firing in the absence of synaptic input. Using transgenic mice that express GFP under the control of the Gad1 (GAD67) gene promoter, we determined that this population of GABAergic neurons is in close apposition to cardioinhibitory parasympathetic cardiac neurons in the nucleus ambiguus (NA). These neurons fire in synchronization with inspiratory activity. Although they receive excitatory glutamatergic synaptic inputs during inspiration, this excitatory neurotransmission was not altered by blocking nicotinic receptors, and many of these GABAergic neurons continue to fire after synaptic blockade. The spontaneous firing in these GABAergic neurons was not altered by the voltage-gated calcium channel blocker cadmium chloride that blocks both neurotransmission to these neurons and voltage-gated Ca(2+) currents, but spontaneous firing was diminished by riluzole, demonstrating a role of persistent sodium channels in the spontaneous firing in these cardiorespiratory GABAergic neurons that possess a pacemaker phenotype. The spontaneously firing GABAergic neurons identified in this study that increase their activity during inspiration would support respiratory rhythm generation if they acted primarily to inhibit post-inspiratory neurons and thereby release inspiration neurons to increase their activity. This population of inspiratory-modulated GABAergic neurons could also play a role in inhibiting neurons that are most active during expiration and provide a framework for

  7. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    PubMed

    Jovanović, Stojan; Rotter, Stefan

    2016-06-01

    The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking, activity in a neuronal network. Using recent results from the theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology (random networks of Erdős-Rényi type and networks with highly interconnected hubs), we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  8. Beyond blow-up in excitatory integrate and fire neuronal networks: Refractory period and spontaneous activity.

    PubMed

    Cáceres, María J; Perthame, Benoît

    2014-06-07

    The Network Noisy Leaky Integrate and Fire equation is among the simplest models allowing for a self-consistent description of neural networks and gives a rule to determine the probability of finding a neuron at the potential v. However, its mathematical structure is still poorly understood and, concerning its solutions, very few results are available. Among them, a recent result shows blow-up in finite time for fully excitatory networks. The intuitive explanation is that each firing neuron induces a discharge of the others; this increases the activity and consequently the discharge rate of the full network. In order to better understand the details of the phenomenon and show that the equation is more complex and fruitful than expected, we analyze the model further. We extend the finite time blow-up result to the case when neurons, after firing, enter a refractory state for a given period of time. We also show that spontaneous activity may occur when, additionally, randomness is included on the firing potential VF in regimes where blow-up occurs for a fixed value of VF.
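
    For orientation, the Network Noisy Leaky Integrate and Fire model is usually written as a nonlinear Fokker-Planck equation for the density p(v, t) of neurons at membrane potential v, with the firing rate N(t) feeding back into the drift and the reset term. A commonly cited form of the base model is sketched below (without the refractory compartment added in this paper; b > 0 corresponds to the fully excitatory case):

    ```latex
    \begin{aligned}
    &\partial_t p(v,t) + \partial_v\!\big[(-v + b\,N(t))\,p(v,t)\big] - a\,\partial_{vv} p(v,t)
      = \delta(v - V_R)\,N(t), \qquad v \le V_F,\\
    &N(t) = -\,a\,\partial_v p(V_F,t) \ge 0, \qquad p(V_F,t)=0, \qquad \int_{-\infty}^{V_F} p(v,t)\,dv = 1.
    \end{aligned}
    ```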

  9. Mean-field theory of a plastic network of integrate-and-fire neurons.

    PubMed

    Chen, Chun-Chung; Jasnow, David

    2010-01-01

    We consider a noise-driven network of integrate-and-fire neurons. The network evolves as a result of the activities of the neurons following spike-timing-dependent plasticity rules. We apply a self-consistent mean-field theory to the system to obtain the mean activity level for the system as a function of the mean synaptic weight, which predicts a first-order transition and hysteresis between a noise-dominated regime and a regime of persistent neural activity. Assuming Poisson firing statistics for the neurons, the plasticity dynamics of a synapse under the influence of the mean-field environment can be mapped to the dynamics of an asymmetric random walk in synaptic-weight space. Using a master equation for small steps, we predict a narrow distribution of synaptic weights that scales with the square root of the plasticity rate for the stationary state of the system, given plausible physiological parameter values describing neural transmission and plasticity. The dependence of the distribution on the synaptic weight of the mean-field environment allows us to determine the mean synaptic weight self-consistently. The effects of fluctuations in the total synaptic conductance and in the plasticity step sizes are also considered. Such fluctuations result in a smoothing of the first-order transition for a low number of afferent synapses per neuron and a broadening of the synaptic-weight distribution, respectively.
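
    A minimal sketch of the random-walk picture described here: each synaptic weight takes small up/down steps whose probabilities depend weakly on the current weight (a mean-reverting stand-in for the mean-field environment, which is an assumption of this sketch). The width of the stationary weight distribution then scales with the square root of the step size, i.e. of the plasticity rate.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def stationary_weights(step, k=0.5, w_target=0.5, n_syn=2000, n_steps=40000):
        """Asymmetric random walk of synaptic weights with a weight-dependent
        up-probability (mean-reverting drift toward w_target)."""
        w = np.full(n_syn, w_target)
        for _ in range(n_steps):
            p_up = np.clip(0.5 - k * (w - w_target), 0.0, 1.0)
            up = rng.random(n_syn) < p_up
            w += np.where(up, step, -step)
        return w

    for step in (0.01, 0.04):
        print(f"step {step}: std of weights = {stationary_weights(step).std():.3f}")
    # Quadrupling the step size roughly doubles the width of the stationary
    # distribution, i.e. the width scales with the square root of the rate.
    ```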

  10. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs.

    PubMed

    Ledoux, Erwan; Brunel, Nicolas

    2011-01-01

    We investigate the dynamics of recurrent networks of excitatory (E) and inhibitory (I) neurons in the presence of time-dependent inputs. The dynamics is characterized by the network dynamical transfer function, i.e., how the population firing rate is modulated by sinusoidal inputs at arbitrary frequencies. Two types of networks are studied and compared: (i) a Wilson-Cowan type firing rate model; and (ii) a fully connected network of leaky integrate-and-fire (LIF) neurons, in a strong noise regime. We first characterize the region of stability of the "asynchronous state" (a state in which population activity is constant in time when external inputs are constant) in the space of parameters characterizing the connectivity of the network. We then systematically characterize the qualitative behaviors of the dynamical transfer function, as a function of the connectivity. We find that the transfer function can be either low-pass, or with a single or double resonance, depending on the connection strengths and synaptic time constants. Resonances appear when the system is close to Hopf bifurcations, that can be induced by two separate mechanisms: the I-I connectivity and the E-I connectivity. Double resonances can appear when excitatory delays are larger than inhibitory delays, due to the fact that two distinct instabilities exist with a finite gap between the corresponding frequencies. In networks of LIF neurons, changes in external inputs and external noise are shown to be able to change qualitatively the network transfer function. Firing rate models are shown to exhibit the same diversity of transfer functions as the LIF network, provided delays are present. They can also exhibit input-dependent changes of the transfer function, provided a suitable static non-linearity is incorporated.
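
    A minimal sketch of a dynamical transfer function for a two-population (E/I) rate model of the Wilson-Cowan type: the linearized equations are solved at each stimulation frequency and the gain of the excitatory rate is recorded. The couplings and time constants below are arbitrary choices tuned to show a resonance, not parameters from the study.

    ```python
    import numpy as np

    # Linearized Wilson-Cowan-type rate model:
    #   tau_E dr_E/dt = -r_E + w_EE r_E - w_EI r_I + I(t)
    #   tau_I dr_I/dt = -r_I + w_IE r_E - w_II r_I
    tau_E, tau_I = 0.010, 0.010                   # seconds
    w_EE, w_EI, w_IE, w_II = 2.5, 2.0, 2.0, 0.6   # illustrative couplings

    freqs = np.logspace(0, 3, 400)                # 1 Hz .. 1 kHz
    gain = []
    for f in freqs:
        w = 2j * np.pi * f
        # (1 + i*omega*tau) r = W r + input  ->  solve the 2x2 system for r_E / I
        M = np.array([[1 + w * tau_E - w_EE, w_EI],
                      [-w_IE, 1 + w * tau_I + w_II]])
        r = np.linalg.solve(M, np.array([1.0, 0.0]))     # unit-amplitude drive to E
        gain.append(abs(r[0]))
    gain = np.array(gain)
    print(f"peak gain {gain.max():.2f} at {freqs[gain.argmax()]:.1f} Hz "
          f"(gain at 1 Hz: {gain[0]:.2f})")
    ```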

  11. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Spiral Wave in Small-World Networks of Hodgkin-Huxley Neurons

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Yang, Li-Jian; Wu, Ying; Zhang, Cai-Rong

    2010-09-01

    The effects of small-world connections and noise on the formation and transition of spiral waves in networks of Hodgkin-Huxley neurons are investigated in detail. Some interesting results are found in our numerical studies. i) Quiescent neurons are activated to propagate electrical signals to others by generating and developing a spiral wave from a spiral seed in a small area. ii) A statistical factor is defined to describe the collective properties and the phase transition induced by the topology of the networks and by noise. iii) A stable rotating spiral wave can be generated and remains robust when the rewiring probability is below a certain threshold; otherwise, a spiral wave cannot be developed from the spiral seed, and breakup occurs for a stable rotating spiral wave. iv) Gaussian white noise is introduced on the membrane of the neurons to study the noise-induced phase transition of spiral waves in small-world networks of neurons. It is confirmed that Gaussian white noise plays an active role in supporting and developing spiral waves in the networks of neurons, and the appearance of a smaller factor of synchronization indicates a higher possibility of inducing a spiral wave.

  12. Exercise-induced neuronal plasticity in central autonomic networks: role in cardiovascular control.

    PubMed

    Michelini, Lisete C; Stern, Javier E

    2009-09-01

    It is now well established that brain plasticity is an inherent property not only of the developing but also of the adult brain. Numerous beneficial effects of exercise, including improved memory, cognitive function and neuroprotection, have been shown to involve an important neuroplastic component. However, whether major adaptive cardiovascular adjustments during exercise, needed to ensure proper blood perfusion of peripheral tissues, also require brain neuroplasticity, is presently unknown. This review will critically evaluate current knowledge on proposed mechanisms that are likely to underlie the continuous resetting of baroreflex control of heart rate during/after exercise and following exercise training. Accumulating evidence indicates that not only somatosensory afferents (conveyed by skeletal muscle receptors, baroreceptors and/or cardiopulmonary receptors) but also projections arising from central command neurons (in particular, peptidergic hypothalamic pre-autonomic neurons) converge into the nucleus tractus solitarii (NTS) in the dorsal brainstem, to co-ordinate complex cardiovascular adaptations during dynamic exercise. This review focuses in particular on a reciprocally interconnected network between the NTS and the hypothalamic paraventricular nucleus (PVN), which is proposed to act as a pivotal anatomical and functional substrate underlying integrative feedforward and feedback cardiovascular adjustments during exercise. Recent findings supporting neuroplastic adaptive changes within the NTS-PVN reciprocal network (e.g. remodelling of afferent inputs, structural and functional neuronal plasticity and changes in neurotransmitter content) will be discussed within the context of their role as important underlying cellular mechanisms supporting the tonic activation and improved efficacy of these central pathways in response to circulatory demand at rest and during exercise, both in sedentary and in trained individuals. We hope this review will stimulate

  13. Distribution of Orientation Selectivity in Recurrent Networks of Spiking Neurons with Different Random Topologies

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for the emergence of such broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance-dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704
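
    The linear, firing-rate-based theory referred to above can be illustrated by a generic steady-state calculation, r = (I - W)^(-1) b, followed by a standard orientation selectivity index. The connectivity statistics and tuning model below are illustrative placeholders, not the authors' network:

      import numpy as np

      rng = np.random.default_rng(1)
      N, n_ori = 200, 8
      thetas = np.linspace(0, np.pi, n_ori, endpoint=False)
      pref = rng.uniform(0, np.pi, N)                                 # random preferred orientations

      b = 1.0 + 0.2 * np.cos(2 * (pref[:, None] - thetas[None, :]))   # (N, n_ori) weakly tuned feedforward drive
      W = rng.normal(0.0, 0.8 / np.sqrt(N), (N, N))                   # random recurrent weights, spectral radius < 1

      r = np.linalg.solve(np.eye(N) - W, b)                           # linear steady state r = b + W r
      r = np.clip(r, 0.0, None)                                       # rates cannot be negative

      # Orientation selectivity index from the circular mean over stimulus orientations
      z = (r * np.exp(2j * thetas)).sum(axis=1) / (r.sum(axis=1) + 1e-12)
      print("mean orientation selectivity index:", round(float(np.abs(z).mean()), 3))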

  14. Multi-channels coupling-induced pattern transition in a tri-layer neuronal network

    NASA Astrophysics Data System (ADS)

    Wu, Fuqiang; Wang, Ya; Ma, Jun; Jin, Wuyin; Hobiny, Aatef

    2018-03-01

    Neurons in the nervous system show complex electrical behaviors due to complex connection types and diversity in excitability. A tri-layer network is constructed to investigate signal propagation and pattern formation by selecting different coupling channels between layers. Each layer is set to a different state, and the local kinetics is described by the Hindmarsh-Rose neuron model. By changing the number of coupling channels between layers and the state of the first layer, the collective behaviors of each layer and the synchronization pattern of the network are investigated. A statistical factor of synchronization is calculated for each layer. It is found that the quiescent state in the second layer can be excited and the disordered state in the third layer is suppressed when the first layer is controlled by a pacemaker, and the developed state depends on the number of coupling channels. Furthermore, collapse in the first layer can cause breakdown of the other layers in the network; the mechanism is that the disordered state in the third layer is enhanced when signals sampled from the collapsed layer impose a continuous disturbance on the next layer.
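
    For reference, the Hindmarsh-Rose local kinetics named above can be integrated with a simple Euler scheme; the parameter values below are common textbook choices for a bursting regime and are not taken from this record:

      import numpy as np

      def hindmarsh_rose(I=3.0, T=1000.0, dt=0.01,
                         a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x_rest=-1.6):
          """Forward-Euler integration of a single Hindmarsh-Rose neuron: x is the membrane
          variable, y the fast recovery variable, z the slow adaptation current."""
          n = int(T / dt)
          x, y, z = -1.6, -10.0, 2.0
          trace = np.empty(n)
          for i in range(n):
              dx = y - a * x**3 + b * x**2 - z + I
              dy = c - d * x**2 - y
              dz = r * (s * (x - x_rest) - z)
              x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
              trace[i] = x
          return trace

      v = hindmarsh_rose()
      print("membrane variable range:", round(float(v.min()), 2), "to", round(float(v.max()), 2))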

  15. Functional Evaluation of Biological Neurotoxins in Networked Cultures of Stem Cell-derived Central Nervous System Neurons

    PubMed Central

    Hubbard, Kyle; Beske, Phillip; Lyman, Megan; McNutt, Patrick

    2015-01-01

    Therapeutic and mechanistic studies of the presynaptically targeted clostridial neurotoxins (CNTs) have been limited by the need for a scalable, cell-based model that produces functioning synapses and undergoes physiological responses to intoxication. Here we describe a simple and robust method to efficiently differentiate murine embryonic stem cells (ESCs) into defined lineages of synaptically active, networked neurons. Following an 8 day differentiation protocol, mouse embryonic stem cell-derived neurons (ESNs) rapidly express and compartmentalize neurotypic proteins, form neuronal morphologies and develop intrinsic electrical responses. By 18 days after differentiation (DIV 18), ESNs exhibit active glutamatergic and γ-aminobutyric acid (GABA)ergic synapses and emergent network behaviors characterized by an excitatory:inhibitory balance. To determine whether intoxication with CNTs functionally antagonizes synaptic neurotransmission, thereby replicating the in vivo pathophysiology that is responsible for clinical manifestations of botulism or tetanus, whole-cell patch clamp electrophysiology was used to quantify spontaneous miniature excitatory post-synaptic currents (mEPSCs) in ESNs exposed to tetanus neurotoxin (TeNT) or botulinum neurotoxin (BoNT) serotypes /A-/G. In all cases, ESNs exhibited near-complete loss of synaptic activity within 20 hr. Intoxicated neurons remained viable, as demonstrated by unchanged resting membrane potentials and intrinsic electrical responses. To further characterize the sensitivity of this approach, dose-dependent effects of intoxication on synaptic activity were measured 20 hr after addition of BoNT/A. Intoxication with 0.005 pM BoNT/A resulted in a significant decrement in mEPSCs, with a median inhibitory concentration (IC50) of 0.013 pM. Comparisons of median doses indicate that functional measurements of synaptic inhibition are faster, more specific and more sensitive than SNARE cleavage assays or the mouse lethality assay

  16. A low-density culture method of cerebellar granule neurons with paracrine support applicable for the study of neuronal morphogenesis.

    PubMed

    Kubota, Kenta; Seno, Takeshi; Konishi, Yoshiyuki

    2013-11-20

    Cerebellar granule neuron cultures have been used to study the molecular mechanisms underlying neuronal functions, including neuronal morphogenesis. However, a limitation of this system is the difficulty of analyzing isolated neurons, because they must be maintained at a high density. Therefore, in the present study, we aimed to develop a simple and cost-effective method for culturing low-density cerebellar granule neurons. Cerebellar granule cells at two different densities (low and high) were co-cultivated so that the low-density culture was supported by paracrine signals from the high-density culture. This method enabled morphological analysis of isolated cerebellar granule neurons without astrocytic feeder cultures or supplements such as B27. Using this method, we investigated the function of a polarity factor. Studies using hippocampal neurons suggested that glycogen synthase kinase-3 (GSK-3) is an essential regulator of neuronal polarity, and that inhibition of GSK-3 results in the formation of multiple axons. Pharmacological inhibitors of GSK-3 (6-bromoindirubin-3'-oxime and lithium chloride) did not cause the formation of multiple axons in cerebellar granule neurons but significantly reduced their length. Consistent results were obtained by introducing a kinase-dead form of GSK-3 beta (K85A). These results indicate that GSK-3 is not directly involved in the control of neuronal polarity in cerebellar granule neurons. Overall, this study provides a simple method for culturing low-density cerebellar granule neurons and insights into the neuron-type-dependent function of GSK-3 in neuronal morphogenesis. © 2013 Elsevier B.V. All rights reserved.

  17. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID

    PubMed Central

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-01-01

    To address network congestion caused by the large number of data transmissions at the wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm optimization and single-neuron PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of the neuron was used to adjust the weights online and thereby tune the proportional, integral and derivative parameters of the PID controller. Finally, standard particle swarm optimization was used to optimize the initial values of the proportional, integral and derivative parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized the queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
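
    The single-neuron PID idea described above can be sketched generically as an incremental PID controller whose three input weights are adapted online by a supervised Hebbian rule. The class below is an illustrative reconstruction: the gain, learning rates and initial weights are guesses, and the particle swarm initialization step of PNPID is omitted:

      class SingleNeuronPID:
          """Generic single-neuron adaptive PID (incremental form). The three inputs are the
          P, I and D error terms; their weights are adapted online by a supervised Hebbian
          rule and normalized before use. In the paper's scheme, particle swarm optimization
          would supply the initial gains and learning rates; fixed guesses are used here."""
          def __init__(self, K=0.3, etas=(0.4, 0.35, 0.4)):
              self.K = K                      # overall neuron gain
              self.etas = etas                # learning rates for the three weights
              self.w = [0.3, 0.3, 0.3]        # initial weights (assumed, not PSO-optimized)
              self.e1 = self.e2 = 0.0         # errors at the two previous steps
              self.u = 0.0                    # accumulated control output

          def step(self, e):
              x = [e - self.e1, e, e - 2.0 * self.e1 + self.e2]      # P, I, D terms
              self.w = [w + eta * e * self.u * xi                    # supervised Hebbian update
                        for w, eta, xi in zip(self.w, self.etas, x)]
              wsum = sum(abs(w) for w in self.w) or 1.0
              self.u += self.K * sum((w / wsum) * xi for w, xi in zip(self.w, x))
              self.e2, self.e1 = self.e1, e
              return self.u

      # Example: control outputs for a decaying error sequence (queue length minus setpoint)
      ctrl = SingleNeuronPID()
      print([round(ctrl.step(e), 2) for e in (30.0, 18.0, 9.0, 4.0, 1.0)])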

  18. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID.

    PubMed

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-04-19

    To address network congestion caused by the large number of data transmissions at the wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm optimization and single-neuron PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of the neuron was used to adjust the weights online and thereby tune the proportional, integral and derivative parameters of the PID controller. Finally, standard particle swarm optimization was used to optimize the initial values of the proportional, integral and derivative parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized the queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS.

  19. Joint statistics of strongly correlated neurons via dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
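
    A minimal version of the setup described above, two leaky integrate-and-fire neurons driven by partly shared Gaussian input, together with a crude spike-count cross-correlation, could look as follows (all parameter values are illustrative and the paper's non-perturbative theory is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(2)
      dt, T = 1e-4, 20.0                      # 0.1 ms time step, 20 s of simulated time
      n = int(T / dt)
      tau, v_th, v_reset = 20e-3, 1.0, 0.0    # membrane time constant (s), threshold, reset
      mu, sigma, c = 1.2, 0.2, 0.7            # mean drive, noise amplitude, shared-input fraction

      shared = rng.normal(size=n)
      spikes = np.zeros((2, n), dtype=bool)
      for j in range(2):
          xi = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.normal(size=n)
          v = 0.0
          for i in range(n):
              v += dt * (mu - v) / tau + sigma * np.sqrt(dt) * xi[i]
              if v >= v_th:
                  spikes[j, i], v = True, v_reset

      # Cross-correlation of 5 ms spike counts, normalized roughly as a Pearson coefficient per lag
      counts = spikes.reshape(2, -1, 50).sum(axis=2).astype(float)
      a, b = counts[0] - counts[0].mean(), counts[1] - counts[1].mean()
      L = len(a)
      cc = []
      for lag in range(-20, 21):
          x = a[max(0, -lag): L - max(0, lag)]
          y = b[max(0, lag): L - max(0, -lag)]
          cc.append(float(np.dot(x, y)) / (len(x) * a.std() * b.std()))
      print("peak count correlation:", round(max(cc), 3))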

  20. Local excitation-inhibition ratio for synfire chain propagation in feed-forward neuronal networks

    NASA Astrophysics Data System (ADS)

    Guo, Xinmeng; Yu, Haitao; Wang, Jiang; Liu, Jing; Cao, Yibin; Deng, Bin

    2017-09-01

    A leading hypothesis holds that spiking activity propagates along neuronal sub-populations which are connected in a feed-forward manner, and that the propagation efficiency is affected by the dynamics of the sub-populations. In this paper, how the interaction between local excitation and inhibition affects synfire chain propagation in feed-forward networks (FFNs) is investigated. The simulation results show that there is an appropriate excitation-inhibition (EI) ratio that maximizes the performance of synfire chain propagation. The optimal EI ratio can significantly enhance the selectivity of the FFN to synchronous signals, thereby increasing its stability against background noise. Moreover, the effect of network topology on synfire chain propagation is also investigated. It is found that synfire chain propagation can be maximized by an optimal interlayer linking probability. We also find that external noise is detrimental to synchrony propagation by inducing spiking jitter. The results presented in this paper may provide insights into the effects of network dynamics on neuronal computations.

  1. Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons.

    PubMed

    Rubinov, Mikail; Sporns, Olaf; Thivierge, Jean-Philippe; Breakspear, Michael

    2011-06-01

    Self-organized criticality refers to the spontaneous emergence of self-similar dynamics in complex systems poised between order and randomness. The presence of self-organized critical dynamics in the brain is theoretically appealing and is supported by recent neurophysiological studies. Despite this, the neurobiological determinants of these dynamics have not been previously sought. Here, we systematically examined the influence of such determinants in hierarchically modular networks of leaky integrate-and-fire neurons with spike-timing-dependent synaptic plasticity and axonal conduction delays. We characterized emergent dynamics in our networks by distributions of active neuronal ensemble modules (neuronal avalanches) and rigorously assessed these distributions for power-law scaling. We found that spike-timing-dependent synaptic plasticity enabled a rapid phase transition from random subcritical dynamics to ordered supercritical dynamics. Importantly, modular connectivity and low wiring cost broadened this transition, and enabled a regime indicative of self-organized criticality. The regime only occurred when modular connectivity, low wiring cost and synaptic plasticity were simultaneously present, and the regime was most evident when between-module connection density scaled as a power-law. The regime was robust to variations in other neurobiologically relevant parameters and favored systems with low external drive and strong internal interactions. Increases in system size and connectivity facilitated internal interactions, permitting reductions in external drive and facilitating convergence of postsynaptic-response magnitude and synaptic-plasticity learning rate parameter values towards neurobiologically realistic levels. We hence infer a novel association between self-organized critical neuronal dynamics and several neurobiologically realistic features of structural connectivity. The central role of these features in our model may reflect their importance for
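
    Neuronal avalanches of the kind analyzed above are commonly extracted as maximal runs of consecutive time bins that contain at least one spike. The sketch below shows this extraction together with a crude log-log slope estimate; the toy Poisson raster only makes the example runnable and will not show genuine power-law scaling, and the authors' assessment relied on rigorous statistical tests rather than a simple fit:

      import numpy as np

      def avalanche_sizes(raster):
          """raster: boolean array (neurons, time_bins). An avalanche is a maximal run of
          consecutive bins with at least one spike; its size is the total spike count."""
          active = raster.sum(axis=0)               # spikes per bin
          sizes, current = [], 0
          for a in active:
              if a > 0:
                  current += a
              elif current > 0:
                  sizes.append(current)
                  current = 0
          if current > 0:
              sizes.append(current)
          return np.array(sizes)

      # Toy raster; in practice this would come from the simulated network
      rng = np.random.default_rng(3)
      raster = rng.random((200, 100000)) < 0.002
      sizes = avalanche_sizes(raster)

      # Crude slope of the empirical size distribution on log-log axes
      vals, counts = np.unique(sizes, return_counts=True)
      slope = np.polyfit(np.log(vals), np.log(counts / counts.sum()), 1)[0]
      print("estimated log-log slope:", round(slope, 2))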

  2. Collective dynamics in heterogeneous networks of neuronal cellular automata

    NASA Astrophysics Data System (ADS)

    Manchanda, Kaustubh; Bose, Amitabha; Ramaswamy, Ramakrishna

    2017-12-01

    We examine the collective dynamics of heterogeneous random networks of model neuronal cellular automata. Each automaton has b active states, a single silent state and r - b - 1 refractory states, and can show 'spiking' or 'bursting' behavior, depending on the value of b. We show that phase transitions that occur in the dynamical activity can be related to phase transitions in the structure of Erdős-Rényi graphs as a function of edge probability. Different forms of heterogeneity allow distinct structural phase transitions to become relevant. We also show that the dynamics on the network can be described by a semi-annealed process and, as a result, can be related to the Boolean Lyapunov exponent.
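
    A minimal automaton of the kind described above (b active states, one silent state and r - b - 1 refractory states on an Erdős-Rényi graph) might be implemented as follows; the activation rule used here, a silent cell fires when any neighbour is active, is an illustrative choice rather than the paper's exact update rule:

      import numpy as np

      rng = np.random.default_rng(4)
      N, p, r, b = 500, 0.02, 6, 2                 # neurons, edge probability, total states, active states
      A = np.triu((rng.random((N, N)) < p).astype(int), 1)
      A = A + A.T                                  # undirected Erdos-Renyi adjacency, no self-loops

      # State coding: 0 = silent, 1..b = active, b+1..r-1 = refractory
      state = (rng.random(N) < 0.05).astype(int)   # a few seed neurons start in the first active state

      for t in range(200):
          active = ((state >= 1) & (state <= b)).astype(int)
          drive = A @ active                       # number of active neighbours of each cell
          new = state.copy()
          new[(state == 0) & (drive > 0)] = 1      # silent cells fire if any neighbour is active
          new[state >= 1] = (state[state >= 1] + 1) % r   # march through active and refractory states
          state = new
      print("fraction of active cells:", round(((state >= 1) & (state <= b)).mean(), 3))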

  3. Cell Assembly Dynamics of Sparsely-Connected Inhibitory Networks: A Simple Model for the Collective Activity of Striatal Projection Neurons.

    PubMed

    Angulo-Garcia, David; Berke, Joshua D; Torcini, Alessandro

    2016-02-01

    Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices. In particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell assembly switching, that effectively discriminates similar inputs. Our results support the proposal that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.

  4. Neurons with hysteresis form a network that can learn without any changes in synaptic connection strengths

    NASA Astrophysics Data System (ADS)

    Hoffmann, Geoffrey W.; Benson, Maurice W.

    1986-08-01

    A neural network concept derived from an analogy between the immune system and the central nervous system is outlined. The theory is based on a neuron that is slightly more complicated than the conventional McCulloch-Pitts type of neuron, in that it exhibits hysteresis at the single-cell level. This added complication is compensated by the fact that a network of such neurons is able to learn without the necessity for any changes in synaptic connection strengths. The learning occurs as a natural consequence of interactions between the network and its environment, with environmental stimuli moving the system around in an N-dimensional phase space, until a point in phase space is reached such that the system's responses are appropriate for dealing with the stimuli. Due to the hysteresis associated with each neuron, the system tends to stay in the region of phase space where it is located. The theory includes a role for sleep in learning.
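
    The single-cell hysteresis invoked above can be illustrated by a simple bistable unit whose output switches on only above an upper threshold and off only below a lower threshold; the threshold values below are arbitrary illustrations:

      class HysteresisNeuron:
          """Bistable unit: output switches to +1 above theta_high, to -1 below theta_low,
          and otherwise keeps its previous state (the hysteresis loop)."""
          def __init__(self, theta_low=-0.5, theta_high=0.5, state=-1):
              self.theta_low, self.theta_high, self.state = theta_low, theta_high, state

          def update(self, net_input):
              if net_input > self.theta_high:
                  self.state = 1
              elif net_input < self.theta_low:
                  self.state = -1
              return self.state            # inputs between the thresholds leave the state unchanged

      # A transient positive input switches the unit on, and it stays on after the input is removed
      n = HysteresisNeuron()
      print([n.update(x) for x in (0.0, 0.8, 0.0, -0.2, -0.8, 0.0)])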

  5. A novel enteric neuron-glia coculture system reveals the role of glia in neuronal development.

    PubMed

    Le Berre-Scoul, Catherine; Chevalier, Julien; Oleynikova, Elena; Cossais, François; Talon, Sophie; Neunlist, Michel; Boudin, Hélène

    2017-01-15

    Unlike astrocytes in the brain, the potential role of enteric glial cells (EGCs) in the formation of the enteric neuronal circuit is currently unknown. To examine the role of EGCs in the formation of the neuronal network, we developed a novel neuron-enriched culture model from embryonic rat intestine grown in indirect coculture with EGCs. We found that EGCs shape axonal complexity and synapse density in enteric neurons, through purinergic- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor regulating neuronal network maturation. In the nervous system, the formation of neuronal circuitry results from a complex and coordinated action of intrinsic and extrinsic factors. In the CNS, extrinsic mediators derived from astrocytes have been shown to play a key role in neuronal maturation, including dendritic shaping, axon guidance and synaptogenesis. In the enteric nervous system (ENS), the potential role of enteric glial cells (EGCs) in the maturation of developing enteric neuronal circuit is currently unknown. A major obstacle in addressing this question is the difficulty in obtaining a valuable experimental model in which enteric neurons could be isolated and maintained without EGCs. We adapted a cell culture method previously developed for CNS neurons to establish a neuron-enriched primary culture from embryonic rat intestine which was cultured in indirect coculture with EGCs. We demonstrated that enteric neurons grown in such conditions showed several structural, phenotypic and functional hallmarks of proper development and maturation. However, when neurons were grown without EGCs, the complexity of the axonal arbour and the density of synapses were markedly reduced, suggesting that glial-derived factors contribute strongly to the formation of the neuronal circuitry. We found that these effects played by EGCs were

  6. Spike-timing-dependent plasticity enhanced synchronization transitions induced by autapses in adaptive Newman-Watts neuronal networks.

    PubMed

    Gong, Yubing; Wang, Baoying; Xie, Huijuan

    2016-12-01

    In this paper, we numerically study the effect of spike-timing-dependent plasticity (STDP) on synchronization transitions induced by autaptic activity in adaptive Newman-Watts Hodgkin-Huxley neuron networks. It is found that synchronization transitions induced by autaptic delay vary with the adjusting rate Ap of STDP and become strongest at a certain Ap value, and the Ap value increases when network randomness or network size increases. It is also found that the synchronization transitions induced by autaptic delay become strongest at a certain network randomness and network size, and these values increase and the related synchronization transitions are enhanced when Ap increases. These results show that there is an optimal STDP that can enhance the synchronization transitions induced by autaptic delay in the adaptive neuronal networks. These findings provide a new insight into the roles of STDP and autapses for information transmission in neural systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
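
    The STDP rule itself is not given in this record; a standard pair-based exponential window, with the potentiation amplitude playing the role of the adjusting rate Ap, is sketched below as an assumption about its general form:

      import numpy as np

      def stdp_dw(dt_ms, A_p=0.01, A_d=0.0105, tau_p=20.0, tau_d=20.0):
          """Pair-based STDP window: potentiation when the presynaptic spike precedes the
          postsynaptic one (dt = t_post - t_pre >= 0), exponential depression otherwise."""
          dt_ms = np.asarray(dt_ms, dtype=float)
          return np.where(dt_ms >= 0,
                          A_p * np.exp(-dt_ms / tau_p),
                          -A_d * np.exp(dt_ms / tau_d))

      # Weight changes for a few pre/post spike-time differences (in ms)
      print(stdp_dw([-40.0, -10.0, 0.0, 10.0, 40.0]))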

  7. Multiple Spatial Coherence Resonances and Spatial Patterns in a Noise-Driven Heterogeneous Neuronal Network

    NASA Astrophysics Data System (ADS)

    Li, Yu-Ye; Ding, Xue-Li

    2014-12-01

    Heterogeneity of the neurons and noise are inevitable in real neuronal networks. In this paper, Gaussian-white-noise-induced spatial patterns, including spiral waves and multiple spatial coherence resonances, are studied in a network composed of Morris-Lecar neurons with heterogeneity characterized by parameter diversity. The relationship between the resonances and the transitions between ordered spiral waves and disordered spatial patterns is established. When parameter diversity is introduced, the maxima of the multiple resonances first increase and then decrease as the diversity strength increases, which implies that the degree of coherence induced by noise is enhanced at an intermediate diversity strength. The synchronization degree of the spatial patterns, including ordered spiral waves and disordered patterns, is found to be very low. The results suggest that the nervous system can profit from both heterogeneity and noise, and that the multiple spatial coherence resonances are achieved via the emergence of spiral waves instead of synchronization patterns.

  8. A method for validating Rent's rule for technological and biological networks.

    PubMed

    Alcalde Cuesta, Fernando; González Sequeiros, Pablo; Lozano Rojo, Álvaro

    2017-07-14

    Rent's rule is an empirical power law introduced in an effort to describe and optimize the wiring complexity of computer logic graphs. It is known that brain and neuronal networks also obey Rent's rule, which is consistent with the idea that wiring costs play a fundamental role in brain evolution and development. Here we propose a method to validate this power law for a certain range of network partitions. This method is based on the bifurcation phenomenon that appears when the network is subjected to random alterations preserving its degree distribution. It has been tested on a set of VLSI circuits and real networks, including biological and technological ones. We also analyzed the effect of different types of random alterations on the Rentian scaling in order to test the influence of the degree distribution. Some network architectures are quite sensitive to these randomization procedures, showing significant increases in the values of their Rent exponents.
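
    Rent's rule relates the number of external connections T of a partition to the number of nodes g it contains, T = t g^p, where p is the Rent exponent. The sketch below fits p from random node subsets of a toy graph; actual validations, presumably including the method above, use partitions chosen to minimize the boundary cut, so this only illustrates the fitting step:

      import numpy as np

      rng = np.random.default_rng(5)
      N, p_edge = 400, 0.03
      A = np.triu((rng.random((N, N)) < p_edge).astype(int), 1)
      A = A + A.T                                   # undirected toy network

      sizes, terminals = [], []
      for g in (8, 16, 32, 64, 128):                # partition ("module") sizes
          for _ in range(30):
              nodes = rng.choice(N, size=g, replace=False)
              mask = np.zeros(N, dtype=bool); mask[nodes] = True
              T = A[np.ix_(mask, ~mask)].sum()      # edges crossing the partition boundary
              if T > 0:
                  sizes.append(g); terminals.append(T)

      # Rent's rule T = t * g**p  =>  log T = log t + p * log g
      p_rent = np.polyfit(np.log(sizes), np.log(terminals), 1)[0]
      print("estimated Rent exponent:", round(p_rent, 2))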

  9. Network activity influences the subthreshold and spiking visual responses of pyramidal neurons in the three-layer turtle cortex.

    PubMed

    Wright, Nathaniel C; Wessel, Ralf

    2017-10-01

    A primary goal of systems neuroscience is to understand cortical function, typically by studying spontaneous and stimulus-modulated cortical activity. Mounting evidence suggests a strong and complex relationship exists between the ongoing and stimulus-modulated cortical state. To date, most work in this area has been based on spiking in populations of neurons. While advantageous in many respects, this approach is limited in scope: it records the activity of a minority of neurons and gives no direct indication of the underlying subthreshold dynamics. Membrane potential recordings can fill these gaps in our understanding, but stable recordings are difficult to obtain in vivo. Here, we recorded subthreshold cortical visual responses in the ex vivo turtle eye-attached whole brain preparation, which is ideally suited for such a study. We found that, in the absence of visual stimulation, the network was "synchronous"; neurons displayed network-mediated transitions between hyperpolarized (Down) and depolarized (Up) membrane potential states. The prevalence of these slow-wave transitions varied across turtles and recording sessions. Visual stimulation evoked similar Up states, which were on average larger and less reliable when the ongoing state was more synchronous. Responses were muted when immediately preceded by large, spontaneous Up states. Evoked spiking was sparse, highly variable across trials, and mediated by concerted synaptic inputs that were, in general, only very weakly correlated with inputs to nearby neurons. Together, these results highlight the multiplexed influence of the cortical network on the spontaneous and sensory-evoked activity of individual cortical neurons. NEW & NOTEWORTHY Most studies of cortical activity focus on spikes. Subthreshold membrane potential recordings can provide complementary insight, but stable recordings are difficult to obtain in vivo. Here, we recorded the membrane potentials of cortical neurons during ongoing and visually

  10. Human neural stem cell-derived cultures in three-dimensional substrates form spontaneously functional neuronal networks.

    PubMed

    Smith, Imogen; Silveirinha, Vasco; Stein, Jason L; de la Torre-Ubieta, Luis; Farrimond, Jonathan A; Williamson, Elizabeth M; Whalley, Benjamin J

    2017-04-01

    Differentiated human neural stem cells were cultured in an inert three-dimensional (3D) scaffold and, unlike two-dimensional (2D) but otherwise comparable monolayer cultures, formed spontaneously active, functional neuronal networks that responded reproducibly and predictably to conventional pharmacological treatments to reveal functional, glutamatergic synapses. Immunocytochemical and electron microscopy analysis revealed a neuronal and glial population, where markers of neuronal maturity were observed in the former. Oligonucleotide microarray analysis revealed substantial differences in gene expression conferred by culturing in a 3D vs a 2D environment. Notable and numerous differences were seen in genes coding for neuronal function, the extracellular matrix and cytoskeleton. In addition to producing functional networks, differentiated human neural stem cells grown in inert scaffolds offer several significant advantages over conventional 2D monolayers. These advantages include cost savings and improved physiological relevance, which make them better suited for use in the pharmacological and toxicological assays required for development of stem cell-based treatments and the reduction of animal use in medical research. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Stability and chaos of Rulkov map-based neuron network with electrical synapse

    NASA Astrophysics Data System (ADS)

    Wang, Caixia; Cao, Hongjun

    2015-02-01

    In this paper, the stability and chaos of a simple system consisting of two identical Rulkov map-based neurons with a bidirectional electrical synapse are investigated in detail. On the one hand, as a function of the control parameters and the electrical coupling strengths, the conditions for stability of the fixed points of this system are obtained by qualitative analysis. On the other hand, chaos in the sense of Marotto is proved in a rigorous mathematical way. These results could be useful for building up large-scale neuronal networks with specific dynamics and rich biophysical phenomena.
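
    The Rulkov map is a two-variable discrete-time neuron model; a common chaotic parameterization with diffusive (electrical) coupling on the fast variable is sketched below. The specific form and parameter values are standard ones from the literature and are assumptions here, not details of this record:

      import numpy as np

      def coupled_rulkov(alpha=4.3, mu=0.001, sigma=0.3, eps=0.05, steps=20000):
          """Two identical chaotic Rulkov map neurons, x fast and y slow, with bidirectional
          diffusive (electrical) coupling added to the fast variable."""
          x = np.array([-1.0, -1.2])
          y = np.array([-3.0, -3.1])
          xs = np.empty((steps, 2))
          for n in range(steps):
              coupling = eps * (x[::-1] - x)        # eps * (x_other - x_self) for each neuron
              x, y = alpha / (1.0 + x**2) + y + coupling, y - mu * (x + 1.0) + mu * sigma
              xs[n] = x
          return xs

      xs = coupled_rulkov()
      print("max |x1 - x2| over the last 1000 steps:",
            round(float(np.abs(xs[-1000:, 0] - xs[-1000:, 1]).max()), 3))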

  12. Functional characterization of GABAA receptor-mediated modulation of cortical neuron network activity in microelectrode array recordings.

    PubMed

    Bader, Benjamin M; Steder, Anne; Klein, Anders Bue; Frølund, Bente; Schroeder, Olaf H U; Jensen, Anders A

    2017-01-01

    The numerous γ-aminobutyric acid type A receptor (GABAAR) subtypes are differentially expressed and mediate distinct functions at neuronal level. In this study we have investigated GABAAR-mediated modulation of the spontaneous activity patterns of primary neuronal networks from murine frontal cortex by characterizing the effects induced by a wide selection of pharmacological tools at a plethora of activity parameters in microelectrode array (MEA) recordings. The basic characteristics of the primary cortical neurons used in the recordings were studied in some detail, and the expression levels of various GABAAR subunits were investigated by western blotting and RT-qPCR. In the MEA recordings, the pan-GABAAR agonist muscimol and the GABABR agonist baclofen were observed to mediate phenotypically distinct changes in cortical network activity. Selective augmentation of αβγ GABAAR signaling by diazepam and of δ-containing GABAAR (δ-GABAAR) signaling by DS1 produced pronounced changes in the majority of the activity parameters, both drugs mediating similar patterns of activity changes as muscimol. The apparent importance of δ-GABAAR signaling for network activity was largely corroborated by the effects induced by the functionally selective δ-GABAAR agonists THIP and Thio-THIP, whereas the δ-GABAAR selective potentiator DS2 only mediated modest effects on network activity, even when co-applied with low THIP concentrations. Interestingly, diazepam exhibited dramatically right-shifted concentration-response relationships at many of the activity parameters when co-applied with a trace concentration of DS1 compared to when applied alone. In contrast, the potencies and efficacies displayed by DS1 at the networks were not substantially altered by the concomitant presence of diazepam. In conclusion, the holistic nature of the information extractable from the MEA recordings offers interesting insights into the contributions of various GABAAR subtypes/subgroups to cortical

  13. Improved system identification using artificial neural networks and analysis of individual differences in responses of an identified neuron.

    PubMed

    Costalago Meruelo, Alicia; Simpson, David M; Veres, Sandor M; Newland, Philip L

    2016-03-01

    Mathematical modelling is used routinely to understand the coding properties and dynamics of responses of neurons and neural networks. Here we analyse the effectiveness of Artificial Neural Networks (ANNs) as a modelling tool for motor neuron responses. We used ANNs to model the synaptic responses of an identified motor neuron, the fast extensor motor neuron of the desert locust, in response to displacement of a sensory organ, the femoral chordotonal organ, which monitors movements of the tibia relative to the femur of the leg. The aim of the study was threefold: first, to determine the potential value of ANNs as tools to model and investigate neural networks; second, to understand the generalisation properties of ANNs across individuals and to different input signals; and third, to understand individual differences in responses of an identified neuron. A metaheuristic algorithm was developed to design the ANN architectures. The performance of the models generated by the ANNs was compared with that of previous mathematical models of the same neuron. The results suggest that ANNs are significantly better than LNL and Wiener models in predicting specific neural responses to Gaussian white noise, but not significantly different when tested with sinusoidal inputs. They are also able to predict responses of the same neuron in different individuals irrespective of which animal was used to develop the model, although notable differences between some individuals were evident. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Extending the Cortical Grasping Network: Pre-supplementary Motor Neuron Activity During Vision and Grasping of Objects

    PubMed Central

    Lanzilotto, Marco; Livi, Alessandro; Maranesi, Monica; Gerbella, Marzio; Barz, Falk; Ruther, Patrick; Fogassi, Leonardo; Rizzolatti, Giacomo; Bonini, Luca

    2016-01-01

    Grasping relies on a network of parieto-frontal areas lying on the dorsolateral and dorsomedial parts of the hemispheres. However, the initiation and sequencing of voluntary actions also requires the contribution of mesial premotor regions, particularly the pre-supplementary motor area F6. We recorded 233 F6 neurons from 2 monkeys with chronic linear multishank neural probes during reaching–grasping visuomotor tasks. We showed that F6 neurons play a role in the control of forelimb movements and some of them (26%) exhibit visual and/or motor specificity for the target object. Interestingly, area F6 neurons form 2 functionally distinct populations, showing either visually-triggered or movement-related bursts of activity, in contrast to the sustained visual-to-motor activity displayed by ventral premotor area F5 neurons recorded in the same animals and with the same task during previous studies. These findings suggest that F6 plays a role in object grasping and extend existing models of the cortical grasping network. PMID:27733538

  15. Effect of acute stretch injury on action potential and network activity of rat neocortical neurons in culture.

    PubMed

    Magou, George C; Pfister, Bryan J; Berlin, Joshua R

    2015-10-22

    The basis for acute seizures following traumatic brain injury (TBI) remains unclear. Animal models of TBI have revealed acute hyperexcitability in cortical neurons that could underlie seizure activity, but studying the initiating events causing hyperexcitability is difficult in these models. In vitro models of stretch injury with cultured cortical neurons, a surrogate for TBI, allow facile investigation of cellular changes after injury but they have only demonstrated post-injury hypoexcitability. The goal of this study was to determine if neuronal hyperexcitability could be triggered by in vitro stretch injury. Controlled uniaxial stretch injury was delivered to a spatially delimited region of a spontaneously active network of cultured rat cortical neurons, yielding a region of stretch-injured neurons and adjacent regions of non-stretched neurons that did not directly experience stretch injury. Spontaneous electrical activity was measured in non-stretched and stretch-injured neurons, and in control neuronal networks not subjected to stretch injury. Non-stretched neurons in stretch-injured cultures displayed a three-fold increase in action potential firing rate and bursting activity 30-60 min post-injury. Stretch-injured neurons, however, displayed dramatically lower rates of action potential firing and bursting. These results demonstrate that acute hyperexcitability can be observed in non-stretched neurons located in regions adjacent to the site of stretch injury, consistent with reports that seizure activity can arise from regions surrounding the site of localized brain injury. Thus, this in vitro procedure for localized neuronal stretch injury may provide a model to study the earliest cellular changes in neuronal function associated with acute post-traumatic seizures. Copyright © 2015. Published by Elsevier B.V.

  16. Identification of the connections in biologically inspired neural networks

    NASA Technical Reports Server (NTRS)

    Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.

    1990-01-01

    We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically-inspired artificial neural networks. That is, given the network's external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations if a solution exists. The method determines directly whether there is a solution to a particular neural network problem. No training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This latter method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.
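
    The underlying inverse problem, recovering connection strengths from known inputs and observed activity, can be illustrated with a much simpler linear-rate analogue solved by least squares; this is only an illustration of the idea, not the authors' method for threshold or spiking units:

      import numpy as np

      rng = np.random.default_rng(6)
      N, T = 10, 200
      W_true = rng.normal(0.0, 0.3, (N, N))          # "unknown" connection strengths to recover

      U = rng.normal(0.0, 0.5, (N, T))               # known external inputs
      R = np.zeros((N, T + 1))                       # observed activity (rates), starting at rest
      for t in range(T):
          R[:, t + 1] = np.tanh(W_true @ R[:, t] + U[:, t])

      # Invert the update rule: arctanh(r(t+1)) - u(t) = W r(t), then solve for W by least squares
      X = R[:, :-1]
      Y = np.arctanh(np.clip(R[:, 1:], -0.999, 0.999)) - U
      W_est = Y @ np.linalg.pinv(X)
      print("max absolute weight error:", round(float(np.abs(W_est - W_true).max()), 4))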

  17. Efficient Training of Supervised Spiking Neural Network via Accurate Synaptic-Efficiency Adjustment Method.

    PubMed

    Xie, Xiurui; Qu, Hong; Yi, Zhang; Kurths, Jurgen

    2017-06-01

    The spiking neural network (SNN) is the third generation of neural networks and performs remarkably well in cognitive tasks such as pattern recognition. The temporal neural encoding mechanism found in the biological hippocampus enables SNNs to possess more powerful computational capability than networks with other encoding schemes. However, this temporal encoding approach requires neurons to process information serially in time, which reduces learning efficiency significantly. To keep the powerful computational capability of the temporal encoding mechanism and to overcome its low efficiency in the training of SNNs, a new training algorithm, the accurate synaptic-efficiency adjustment method, is proposed in this paper. Inspired by the selective attention mechanism of the primate visual system, our algorithm selects only the target spike times as attention areas and ignores the voltage states of the untargeted ones, resulting in a significant reduction of training time. In addition, our algorithm employs a cost function based on the voltage difference between the membrane potential of the output neuron and the firing threshold of the SNN, instead of the traditional precise firing-time distance. A normalized spike-timing-dependent-plasticity learning window is applied to assign this error to different synapses to instruct their training. Comprehensive simulations are conducted to investigate the learning properties of our algorithm, with input neurons emitting both single spikes and multiple spikes. Simulation results indicate that our algorithm achieves higher learning performance than other existing methods and state-of-the-art efficiency in the training of SNNs.

  18. Implementing Signature Neural Networks with Spiking Neurons

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  19. Implementing Signature Neural Networks with Spiking Neurons.

    PubMed

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  20. Neuronal pathway finding: from neurons to initial neural networks.

    PubMed

    Roscigno, Cecelia I

    2004-10-01

    Neuronal pathway finding is crucial for structured cellular organization and development of neural circuits within the nervous system. Neuronal pathway finding within the visual system has been extensively studied and therefore is used as a model to review existing knowledge regarding concepts of this developmental process. General principles of neuron pathway finding throughout the nervous system exist. Comprehension of these concepts guides neuroscience nurses in gaining an understanding of the developmental course of action, the implications of different anomalies, as well as the theoretical basis and nursing implications of some provocative new therapies being proposed to treat neurodegenerative diseases and neurologic injuries. These therapies have limitations in light of current ethical, developmental, and delivery modes and what is known about the development of neuronal pathways.

  1. Detection and clustering of features in aerial images by neuron network-based algorithm

    NASA Astrophysics Data System (ADS)

    Vozenilek, Vit

    2015-12-01

    The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis with clustering and the backward projection of the clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic method of backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main achievements include the finding that man-made objects in aerial images can be successfully identified by the detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
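
    The bipolar sigmoid and backpropagation mentioned above follow the standard formulation f(x) = 2/(1 + exp(-x)) - 1 with derivative (1 - f(x)^2)/2. A minimal one-hidden-layer sketch on placeholder data (the feature vectors and labels below are not the paper's image features) is:

      import numpy as np

      def bipolar_sigmoid(x):
          return 2.0 / (1.0 + np.exp(-x)) - 1.0       # range (-1, 1); derivative is (1 - f(x)**2) / 2

      rng = np.random.default_rng(7)
      X = rng.normal(size=(200, 8))                   # placeholder feature vectors (stand-in for image features)
      y = np.sign(X[:, 0] * X[:, 1])[:, None]         # placeholder targets in {-1, +1}

      W1 = rng.normal(0.0, 0.5, (8, 16))
      W2 = rng.normal(0.0, 0.5, (16, 1))
      lr = 0.05
      for epoch in range(500):
          h = bipolar_sigmoid(X @ W1)                 # hidden layer
          out = bipolar_sigmoid(h @ W2)               # output layer
          err = out - y
          d_out = err * (1.0 - out**2) / 2.0          # backpropagate through the bipolar sigmoid
          d_h = (d_out @ W2.T) * (1.0 - h**2) / 2.0
          W2 -= lr * h.T @ d_out / len(X)             # gradient descent on total squared error
          W1 -= lr * X.T @ d_h / len(X)
      print("final training MSE:", round(float((err**2).mean()), 3))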

  2. Investigating local and long-range neuronal network dynamics by simultaneous optogenetics, reverse microdialysis and silicon probe recordings in vivo

    PubMed Central

    Taylor, Hannah; Schmiedt, Joscha T.; Çarçak, Nihan; Onat, Filiz; Di Giovanni, Giuseppe; Lambert, Régis; Leresche, Nathalie; Crunelli, Vincenzo; David, Francois

    2014-01-01

    Background: The advent of optogenetics has given neuroscientists the opportunity to excite or inhibit neuronal population activity with high temporal resolution and cellular selectivity. Thus, when combined with recordings of neuronal ensemble activity in freely moving animals, optogenetics can provide an unprecedented snapshot of the contribution of neuronal assemblies to (patho)physiological conditions in vivo. Still, the combination of optogenetic and silicon probe (or tetrode) recordings does not allow investigation of the role played by voltage- and transmitter-gated channels of the opsin-transfected neurons and/or other adjacent neurons in controlling neuronal activity. New method and results: We demonstrate that optogenetics and silicon probe recordings can be combined with intracerebral reverse microdialysis for the long-term delivery of neuroactive drugs around the optic fiber and silicon probe. In particular, we show the effect of antagonists of T-type Ca2+ channels, hyperpolarization-activated cyclic nucleotide-gated channels and metabotropic glutamate receptors on silicon probe-recorded activity of the local opsin-transfected neurons in the ventrobasal thalamus, and demonstrate the changes that the block of these thalamic channels/receptors brings about in the network dynamics of distant somatotopic cortical neuronal ensembles. Comparison with existing methods: This is the first demonstration of successfully combining optogenetics and neuronal ensemble recordings with reverse microdialysis. This combination of techniques overcomes some of the disadvantages that are associated with the use of intracerebral injection of a drug-containing solution at the site of laser activation. Conclusions: The combination of reverse microdialysis, silicon probe recordings and optogenetics can unravel the short- and long-term effects of specific transmitter- and voltage-gated channels on laser-modulated firing at the site of optogenetic stimulation and the actions that

  3. A Neuronal Network Model for Pitch Selectivity and Representation

    PubMed Central

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch, despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions. PMID:27378900

  4. A Neuronal Network Model for Pitch Selectivity and Representation.

    PubMed

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch, despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions.

  5. Theoretical Neuroanatomy: Analyzing the Structure, Dynamics, and Function of Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Seth, Anil K.; Edelman, Gerald M.

    The mammalian brain is an extraordinary object: its networks give rise to our conscious experiences as well as to the generation of adaptive behavior for the organism within its environment. Progress in understanding the structure, dynamics and function of the brain faces many challenges. Biological neural networks change over time, their detailed structure is difficult to elucidate, and they are highly heterogeneous both in their neuronal units and synaptic connections. In facing these challenges, graph-theoretic and information-theoretic approaches have yielded a number of useful insights and promise many more.

  6. Self-sustained asynchronous irregular states and Up-Down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons.

    PubMed

    Destexhe, Alain

    2009-12-01

    Randomly-connected networks of integrate-and-fire (IF) neurons are known to display asynchronous irregular (AI) activity states, which resemble the discharge activity recorded in the cerebral cortex of awake animals. However, it is not clear whether such activity states are specific to simple IF models, or if they also exist in networks where neurons are endowed with complex intrinsic properties similar to electrophysiological measurements. Here, we investigate the occurrence of AI states in networks of nonlinear IF neurons, such as the adaptive exponential IF (Brette-Gerstner-Izhikevich) model. This model can display intrinsic properties such as low-threshold spike (LTS), regular spiking (RS) or fast-spiking (FS). We successively investigate the oscillatory and AI dynamics of thalamic, cortical and thalamocortical networks using such models. AI states can be found in each case, sometimes with surprisingly small network sizes of the order of a few tens of neurons. We show that the presence of LTS neurons in cortex or in thalamus explains the robust emergence of AI states for relatively small network sizes. Finally, we investigate the role of spike-frequency adaptation (SFA). In cortical networks with strong SFA in RS cells, the AI state is transient, but when SFA is reduced, AI states can be self-sustained for long times. In thalamocortical networks, AI states are found when the cortex is itself in an AI state, but with strong SFA the thalamocortical network displays Up and Down state transitions, similar to intracellular recordings during slow-wave sleep or anesthesia. Self-sustained Up and Down states could also be generated by two-layer cortical networks with LTS cells. These models suggest that intrinsic properties such as adaptation and low-threshold bursting activity are crucial for the genesis and control of AI states in thalamocortical networks.
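
    The adaptive exponential integrate-and-fire model referred to above has a standard two-variable form; a single-neuron Euler sketch using the regular-spiking parameter set published by Brette and Gerstner (2005) is given below. The injected current and any deviations of the paper's parameters from these values are assumptions:

      import numpy as np

      def adex(I=1.0e-9, T=0.5, dt=1e-5):
          """Adaptive exponential integrate-and-fire neuron, forward-Euler scheme.
          Regular-spiking-like parameters from Brette & Gerstner (2005); returns spike times (s)."""
          C, gL, EL = 281e-12, 30e-9, -70.6e-3
          VT, dT, Vcut, Vr = -50.4e-3, 2e-3, -40e-3, -70.6e-3
          a, tau_w, b = 4e-9, 144e-3, 80.5e-12
          V, w, spike_times = EL, 0.0, []
          for i in range(int(T / dt)):
              dV = (-gL * (V - EL) + gL * dT * np.exp((V - VT) / dT) - w + I) / C
              dw = (a * (V - EL) - w) / tau_w
              V += dt * dV
              w += dt * dw
              if V > Vcut:                             # spike: reset membrane, increment adaptation
                  V, w = Vr, w + b
                  spike_times.append(i * dt)
          return spike_times

      print("firing rate (Hz):", len(adex()) / 0.5)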

  7. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    PubMed

    Schmitt, Michael

    2004-09-01

    We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.

  8. Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals

    PubMed Central

    Stetter, Olav; Battaglia, Demian; Soriano, Jordi; Geisel, Theo

    2012-01-01

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized
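
    Transfer Entropy between a pair of discretized activity traces, with history length 1, can be estimated as in the sketch below; the published method additionally conditions on the global mean activity and operates on calcium fluorescence signals, which is not reproduced here:

      import numpy as np

      def transfer_entropy(x, y):
          """Transfer entropy x -> y in bits, for binary time series and history length 1."""
          x, y = np.asarray(x, int), np.asarray(y, int)
          yt1, yt, xt = y[1:], y[:-1], x[:-1]
          te = 0.0
          for a in (0, 1):                 # y_{t+1}
              for b in (0, 1):             # y_t
                  for c in (0, 1):         # x_t
                      p_abc = np.mean((yt1 == a) & (yt == b) & (xt == c))
                      p_bc = np.mean((yt == b) & (xt == c))
                      p_ab = np.mean((yt1 == a) & (yt == b))
                      p_b = np.mean(yt == b)
                      if p_abc > 0:
                          te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
          return te

      # Toy check: y copies x with a one-step delay plus noise, so TE(x -> y) should exceed TE(y -> x)
      rng = np.random.default_rng(8)
      x = (rng.random(20000) < 0.2).astype(int)
      y = np.roll(x, 1) ^ (rng.random(20000) < 0.05).astype(int)
      print(round(transfer_entropy(x, y), 3), round(transfer_entropy(y, x), 3))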

  9. A neuronal network model for context-dependence of pitch change perception.

    PubMed

    Huang, Chengcheng; Englitz, Bernhard; Shamma, Shihab; Rinzel, John

    2015-01-01

    Many natural stimuli have perceptual ambiguities that can be cognitively resolved by the surrounding context. In audition, preceding context can bias the perception of speech and non-speech stimuli. Here, we develop a neuronal network model that can account for how context affects the perception of pitch change between a pair of successive complex tones. We focus especially on an ambiguous comparison: listeners experience opposite percepts (either ascending or descending) for an ambiguous tone pair depending on the spectral location of preceding context tones. We developed a recurrent, firing-rate network model, which detects the frequency-change direction of successively played stimuli and successfully accounts for the context-dependent perception demonstrated in behavioral experiments. The model consists of two tonotopically organized, excitatory populations, E_up and E_down, that respond preferentially to ascending or descending stimuli in pitch, respectively. These preferences are generated by an inhibitory population that provides inhibition asymmetric in frequency to the two populations; context dependence arises from slow facilitation of inhibition. We show that contextual influence depends on the spectral distribution of preceding tones and the tuning width of inhibitory neurons. Further, we demonstrate, using phase-space analysis, how the facilitated inhibition from previous stimuli and the waning inhibition from the just-preceding tone shape the competition between the E_up and E_down populations. In sum, our model accounts for contextual influences on the pitch change perception of an ambiguous tone pair by introducing a novel decoding strategy based on direction-selective units. The model's network architecture and slow facilitating inhibition emerge as predictions of neuronal mechanisms for these perceptual dynamics. Since the model structure does not depend on the specific stimuli, we show that it generalizes to other contextual effects and stimulus types.
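
    To make the competition idea concrete, here is a deliberately simplified two-population firing-rate sketch in which mutually inhibiting "up" and "down" units compete; it omits the tonotopic structure and the slow facilitating inhibition that the published model relies on, and all parameter values are illustrative assumptions.

      import numpy as np

      def f(x):
          return np.maximum(x, 0.0)          # threshold-linear rate function

      def run(drive_up, drive_down, w_inh=1.5, tau=0.02, dt=0.001, T=0.5):
          """Euler-integrate two rate units that inhibit each other."""
          r_up, r_down = 0.0, 0.0
          for _ in range(int(T / dt)):
              # Each population is excited by its own drive and inhibited by the other.
              r_up   += dt / tau * (-r_up   + f(drive_up   - w_inh * r_down))
              r_down += dt / tau * (-r_down + f(drive_down - w_inh * r_up))
          return r_up, r_down

      # A slightly larger drive to one population is enough to decide the "percept".
      print("ascending biased  :", run(drive_up=1.1, drive_down=1.0))
      print("descending biased :", run(drive_up=1.0, drive_down=1.1))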

  10. DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons.

    PubMed

    Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P

    2015-12-01

    Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence that the precise timing of spikes is used for information coding. However, the exact learning mechanism by which a neuron is trained to fire at precise times remains an open problem. The majority of the existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that the synaptic delay is not constant. In this paper, a learning method for spiking neurons, called the delay learning remote supervised method (DL-ReSuMe), is proposed that merges the delay-shift approach with ReSuMe-based weight adjustment to enhance learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves improvements in learning accuracy and learning speed compared with ReSuMe.

  11. Comparisons of Neuronal and Excitatory Network Properties between the Rat Brainstem Nuclei that Participate in Vertical and Horizontal Gaze Holding

    PubMed Central

    Sugimura, Taketoshi; Yanagawa, Yuchio

    2017-01-01

    Gaze holding is primarily controlled by neural structures including the prepositus hypoglossi nucleus (PHN) for horizontal gaze and the interstitial nucleus of Cajal (INC) for vertical and torsional gaze. In contrast to the accumulating findings of the PHN, there is no report regarding the membrane properties of INC neurons or the local networks in the INC. In this study, to verify whether the neural structure of the INC is similar to that of the PHN, we investigated the neuronal and network properties of the INC using whole-cell recordings in rat brainstem slices. Three types of afterhyperpolarization (AHP) profiles and five firing patterns observed in PHN neurons were also observed in INC neurons. However, the overall distributions based on the AHP profile and the firing patterns of INC neurons were different from those of PHN neurons. The application of burst stimulation to a nearby site of a recorded INC neuron induced an increase in the frequency of spontaneous EPSCs. The duration of the increased EPSC frequency of INC neurons was not significantly different from that of PHN neurons. The percent of duration reduction induced by a Ca2+-permeable AMPA (CP-AMPA) receptor antagonist was significantly smaller in the INC than in the PHN. These findings suggest that local excitatory networks that activate sustained EPSC responses also exist in the INC, but their activation mechanisms including the contribution of CP-AMPA receptors differ between the INC and the PHN. PMID:28966973

  12. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    PubMed

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
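
    As a toy illustration of why splitting large cells helps load balance (this is not the NEURON multisplit machinery itself), the following sketch compares greedy assignment of whole-cell workloads to ranks with assignment after splitting the largest cells in two; the workload numbers and the 50/50 split are assumptions made for the sketch.

      import heapq
      import random

      def assign(loads, n_ranks):
          """Largest-first greedy assignment; returns the maximum per-rank load."""
          heap = [(0.0, r) for r in range(n_ranks)]   # (current load, rank id)
          heapq.heapify(heap)
          for load in sorted(loads, reverse=True):
              total, rank = heapq.heappop(heap)        # always fill the least-loaded rank
              heapq.heappush(heap, (total + load, rank))
          return max(total for total, _ in heap)

      random.seed(1)
      n_ranks = 8
      cells = [random.lognormvariate(0, 1) for _ in range(12)]  # cell count comparable to rank count
      ideal = sum(cells) / n_ranks

      # Whole-cell balancing vs. splitting any cell above the ideal load into two halves.
      split = []
      for c in cells:
          split.extend([c / 2, c / 2] if c > ideal else [c])
      print("imbalance, whole cells :", assign(cells, n_ranks) / ideal)
      print("imbalance, split cells :", assign(split, n_ranks) / ideal)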

  13. The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics

    PubMed Central

    Buice, Michael; Koch, Christof; Mihalas, Stefan

    2013-01-01

    The manner in which different distributions of synaptic weights onto cortical neurons shape their spiking activity remains an open question. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation, and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even when synaptic amplitude distributions are equated for both the total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations. PMID:24204219

  14. Single-Neuron NMDA Receptor Phenotype Influences Neuronal Rewiring and Reintegration following Traumatic Injury

    PubMed Central

    Patel, Tapan P.; Ventre, Scott C.; Geddes-Klein, Donna; Singh, Pallab K.

    2014-01-01

    Alterations in the activity of neural circuits are a common consequence of traumatic brain injury (TBI), but the relationship between single-neuron properties and the aggregate network behavior is not well understood. We recently reported that the GluN2B-containing NMDA receptors (NMDARs) are key in mediating mechanical forces during TBI, and that TBI produces a complex change in the functional connectivity of neuronal networks. Here, we evaluated whether cell-to-cell heterogeneity in the connectivity and aggregate contribution of GluN2B receptors to [Ca2+]i before injury influenced the functional rewiring, spontaneous activity, and network plasticity following injury using primary rat cortical dissociated neurons. We found that the functional connectivity of a neuron to its neighbors, combined with the relative influx of calcium through distinct NMDAR subtypes, together contributed to the individual neuronal response to trauma. Specifically, individual neurons whose [Ca2+]i oscillations were largely due to GluN2B NMDAR activation lost many of their functional targets 1 h following injury. In comparison, neurons with large GluN2A contribution or neurons with high functional connectivity both independently protected against injury-induced loss in connectivity. Mechanistically, we found that traumatic injury resulted in increased uncorrelated network activity, an effect linked to reduction of the voltage-sensitive Mg2+ block of GluN2B-containing NMDARs. This uncorrelated activation of GluN2B subtypes after injury significantly limited the potential for network remodeling in response to a plasticity stimulus. Together, our data suggest that two single-cell characteristics, the aggregate contribution of NMDAR subtypes and the number of functional connections, influence network structure following traumatic injury. PMID:24647941

  15. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    PubMed

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  16. Parallel multipoint recording of aligned and cultured neurons on micro channel array toward cellular network analysis.

    PubMed

    Tonomura, Wataru; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Konishi, Satoshi

    2010-08-01

    This paper describes an advanced Micro Channel Array (MCA) for recording electrophysiological signals of neuronal networks at multiple points simultaneously. The developed MCA is designed for the neuronal network analysis that the co-authors have previously studied using the microelectrode array (MEA) system, and it employs the principles of extracellular recording. A prerequisite for extracellular recordings with good signal-to-noise ratio is a tight contact between cells and electrodes. The MCA described herein has the following advantages. The electrodes integrated around individual micro channels are electrically isolated to enable parallel multipoint recording. Reliable clamping of a targeted cell through micro channels is expected to improve the cellular selectivity and the attachment between the cell and the electrode, enabling stable electrophysiological recordings. We cultured hippocampal neurons on the developed MCA. As a result, the spontaneous and evoked spike potentials could be recorded by sucking and clamping the cells at multiple points. In this paper, we describe the design and fabrication of the MCA and the successful electrophysiological recordings leading to the development of an effective cellular network analysis device.

  17. Physical and biological regulation of neuron regenerative growth and network formation on recombinant dragline silks.

    PubMed

    An, Bo; Tang-Schomer, Min; Huang, Wenwen; He, Jiuyang; Jones, Justin; Lewis, Randolph V; Kaplan, David L

    2015-04-01

    Recombinant spider silks produced in transgenic goat milk were studied as cell culture matrices for neuronal growth. Major ampullate spidroin 1 (MaSp1) supported neuronal growth, axon extension and network connectivity, with cell morphology comparable to the gold standard poly-lysine. In addition, neurons growing on MaSp1 films had increased neural cell adhesion molecule (NCAM) expression at both mRNA and protein levels. The results indicate that MaSp1 films present useful surface charge and substrate stiffness to support the growth of primary rat cortical neurons. Moreover, a putative neuron-specific surface binding sequence GRGGL within MaSp1 may contribute to the biological regulation of neuron growth. These findings indicate that MaSp1 could regulate neuron growth through its physical and biological features. This dual regulation mode of MaSp1 could provide an alternative strategy for generating functional silk materials for neural tissue engineering.

  18. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

    PubMed

    Probst, Dimitri; Petrovici, Mihai A; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz

    2015-01-01

    The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.

  19. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons

    PubMed Central

    Probst, Dimitri; Petrovici, Mihai A.; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz

    2015-01-01

    The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems. PMID:25729361
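
    The abstract target of such a network, sampling from a distribution over binary variables, can be written down independently of the LIF implementation. The sketch below is a plain Gibbs sampler for a Boltzmann distribution p(z) proportional to exp(b·z + z·W·z/2); it illustrates only the sampling task the neurons are meant to emulate, with a made-up weight matrix, not the neural dynamics from the paper.

      import numpy as np

      def gibbs_sample(W, b, n_steps=20000, rng=None):
          """Gibbs sampling from p(z) ~ exp(b.z + 0.5 z.W.z) over z in {0, 1}^K."""
          rng = rng or np.random.default_rng(0)
          K = len(b)
          z = rng.integers(0, 2, size=K)
          samples = np.empty((n_steps, K), dtype=int)
          for t in range(n_steps):
              for k in range(K):
                  # Conditional log-odds for z_k = 1 given all other units.
                  u = b[k] + W[k] @ z - W[k, k] * z[k]
                  z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
              samples[t] = z
          return samples

      rng = np.random.default_rng(1)
      K = 4
      W = rng.normal(scale=0.8, size=(K, K))
      W = (W + W.T) / 2
      np.fill_diagonal(W, 0.0)                 # symmetric couplings, no self-coupling
      b = rng.normal(scale=0.5, size=K)
      samples = gibbs_sample(W, b, rng=rng)
      print("marginal P(z_k = 1):", samples.mean(axis=0))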

  20. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  1. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  2. Searching for Collective Behavior in a Large Network of Sensory Neurons

    PubMed Central

    Tkačik, Gašper; Marre, Olivier; Amodei, Dario; Schneidman, Elad; Bialek, William; Berry, Michael J.

    2014-01-01

    Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such “K-pairwise” models—being systematic extensions of the previously used pairwise Ising models—provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction. PMID:24391485

  3. Searching for collective behavior in a large network of sensory neurons.

    PubMed

    Tkačik, Gašper; Marre, Olivier; Amodei, Dario; Schneidman, Elad; Bialek, William; Berry, Michael J

    2014-01-01

    Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models, which are systematic extensions of the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
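
    For intuition, the energy of a K-pairwise model adds a synchrony potential V(K), a function of the total spike count K, to the usual pairwise Ising terms. The sketch below defines such an energy over binary spike words and draws samples from it with a Metropolis rule; the fields, couplings, and synchrony potential are random stand-ins, since the actual maximum entropy step of fitting them to data is not shown.

      import numpy as np

      def energy(sigma, h, J, V):
          """E(sigma) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j - V[K], with K the spike count."""
          K = int(sigma.sum())
          return -(h @ sigma) - 0.5 * sigma @ J @ sigma - V[K]

      def metropolis(h, J, V, n_steps=50000, rng=None):
          rng = rng or np.random.default_rng(0)
          N = len(h)
          sigma = rng.integers(0, 2, size=N).astype(float)
          counts = np.zeros(N + 1)
          for _ in range(n_steps):
              i = rng.integers(N)
              proposal = sigma.copy()
              proposal[i] = 1.0 - proposal[i]          # flip one neuron
              dE = energy(proposal, h, J, V) - energy(sigma, h, J, V)
              if dE <= 0 or rng.random() < np.exp(-dE):
                  sigma = proposal
              counts[int(sigma.sum())] += 1
          return counts / counts.sum()                  # distribution of the population spike count K

      rng = np.random.default_rng(2)
      N = 10
      h = rng.normal(-1.0, 0.3, size=N)                 # biases favouring silence
      J = rng.normal(0.0, 0.2, size=(N, N))
      J = (J + J.T) / 2
      np.fill_diagonal(J, 0.0)
      V = -0.05 * (np.arange(N + 1) - 3.0) ** 2         # toy synchrony potential over K = 0..N
      print(metropolis(h, J, V, rng=rng))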

  4. Synchronised firing patterns in a random network of adaptive exponential integrate-and-fire neuron model.

    PubMed

    Borges, F S; Protachevicz, P R; Lameu, E L; Bonetti, R C; Iarosz, K C; Caldas, I L; Baptista, M S; Batista, A M

    2017-06-01

    We have studied neuronal synchronisation in a random network of adaptive exponential integrate-and-fire neurons. We study how spiking or bursting synchronous behaviour appears as a function of the coupling strength and the probability of connections, by constructing parameter spaces that identify these synchronous behaviours from measurements of the inter-spike interval and the calculation of the order parameter. Moreover, we verify the robustness of synchronisation by applying an external perturbation to each neuron. The simulations show that bursting synchronisation is more robust than spike synchronisation.
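
    A common way to quantify the synchronisation referred to above is the Kuramoto order parameter computed from spike times, with each neuron's phase interpolated linearly between consecutive spikes. The sketch below implements that calculation on made-up spike trains; it is a generic recipe, not the paper's specific parameter-space procedure.

      import numpy as np

      def order_parameter(spike_trains, t):
          """Kuramoto order parameter R(t) from per-neuron spike-time arrays."""
          phases = []
          for spikes in spike_trains:
              k = np.searchsorted(spikes, t, side="right") - 1
              if k < 0 or k + 1 >= len(spikes):
                  continue                      # neuron does not bracket t with two spikes
              # Linear phase interpolation between consecutive spikes.
              phi = 2 * np.pi * (t - spikes[k]) / (spikes[k + 1] - spikes[k])
              phases.append(phi)
          return np.abs(np.mean(np.exp(1j * np.array(phases))))

      rng = np.random.default_rng(0)
      period, n_neurons = 10.0, 100
      # Nearly synchronous trains: same period, small jitter on every spike.
      sync = [np.sort(np.arange(0, 500, period) + rng.normal(0, 0.3, 50)) for _ in range(n_neurons)]
      # Asynchronous trains: random phases and slightly different periods.
      async_ = [np.sort(np.arange(0, 500, period * rng.uniform(0.8, 1.2)) + rng.uniform(0, period))
                for _ in range(n_neurons)]
      print("R (synchronous) :", order_parameter(sync, t=250.0))
      print("R (asynchronous):", order_parameter(async_, t=250.0))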

  5. Heterogeneity induces rhythms of weakly coupled circadian neurons

    NASA Astrophysics Data System (ADS)

    Gu, Changgui; Liang, Xiaoming; Yang, Huijie; Rohling, Jos H. T.

    2016-02-01

    The main clock located in the suprachiasmatic nucleus (SCN) regulates circadian rhythms in mammals. The SCN is composed of approximately twenty thousand heterogeneous self-oscillating neurons that have intrinsic periods varying from 22 h to 28 h. They are coupled through neurotransmitters and neuropeptides to form a network and output a uniform periodic rhythm. Previous studies found that the heterogeneity of the neurons leads to attenuation of the circadian rhythm under strong cellular coupling. In the present study, we investigate the heterogeneity of the neurons and of the network under the condition of constant darkness. Interestingly, we found that the heterogeneity of weakly coupled neurons enables them to oscillate and strengthen the circadian rhythm. In addition, we found that the period of the SCN network increases with the degree of heterogeneity. As the network heterogeneity does not change the dynamics of the rhythm, our study shows that the heterogeneity of the neurons is vitally important for rhythm generation in weakly coupled systems, such as the SCN, and it provides a new method to strengthen the circadian rhythm, as well as an alternative explanation for differences in free-running periods between species in the absence of the daily cycle.
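
    A minimal sketch of the kind of model commonly used in such SCN studies is a set of mean-field coupled Poincaré oscillators with heterogeneous intrinsic periods; the equations, parameter values, and the comparison printed at the end are generic illustrations and are not claimed to reproduce the paper's results.

      import numpy as np

      def simulate(periods, K=0.1, gamma=0.1, A=1.0, dt=0.01, T=2000.0, rng=None):
          """Mean-field coupled Poincare oscillators; returns the population-average x(t)."""
          rng = rng or np.random.default_rng(0)
          n = len(periods)
          x, y = rng.normal(size=n), rng.normal(size=n)
          omega = 2 * np.pi / np.asarray(periods)
          out = []
          for _ in range(int(T / dt)):
              F = x.mean()                                   # mean-field coupling signal
              r = np.sqrt(x ** 2 + y ** 2)
              dx = gamma * x * (A - r) - omega * y + K * F
              dy = gamma * y * (A - r) + omega * x
              x, y = x + dt * dx, y + dt * dy
              out.append(x.mean())
          return np.array(out)

      rng = np.random.default_rng(3)
      heterogeneous = simulate(rng.uniform(22.0, 28.0, size=200), rng=rng)   # periods spread over 22-28 h
      homogeneous = simulate(np.full(200, 24.0), rng=rng)                    # identical 24 h periods
      print("late-time amplitude, heterogeneous:", heterogeneous[-5000:].std())
      print("late-time amplitude, homogeneous  :", homogeneous[-5000:].std())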

  6. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive.

    PubMed

    Encke, Jörg; Hemmert, Werner

    2018-01-01

    The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for the localization of low-frequency sound sources in the horizontal plane is the inter-aural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and ANF model. Using this predictor, we show that the MSO-network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.

  7. Defects formation and spiral waves in a network of neurons in presence of electromagnetic induction.

    PubMed

    Rostami, Zahra; Jafari, Sajad

    2018-04-01

    The complex anatomical and physiological structure of an excitable tissue in the body (e.g., cardiac tissue) can give rise to different electrical activities through normal or abnormal behavior. Abnormalities of the excitable tissue, arising for different biological reasons, can lead to the formation of defects. Such defects can emit successive waves that may reorganize into additional beating behaviors such as spiral waves or target waves. In this study, the formation of defects and the resulting emitted waves in an excitable tissue are investigated. We have considered a square array network of neurons with nearest-neighbor connections to describe the excitable tissue. Fundamentally, the electrophysiological properties of ion currents in the body are responsible for the electrical spatiotemporal patterns that are observed. More precisely, fluctuations of ion concentrations inside and outside the cell produce time-varying electrical and magnetic fields. Considering the mutual effects of the electrical and magnetic fields, we have proposed a new Hindmarsh-Rose (HR) neuronal model for the local dynamics of each individual neuron in the network. In this new neuronal model, the influence of magnetic flux on the membrane potential is defined. This improved model has more bifurcation parameters. Moreover, the dynamical behavior of the tissue is investigated in quiescent, spiking, bursting, and even chaotic states. The resulting spatiotemporal patterns are presented, and the time series of selected sampled neurons are displayed as well.
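
    A minimal sketch (not the paper's exact equations) of a Hindmarsh-Rose neuron extended with a magnetic-flux variable phi, following the memristive coupling rho(phi) = alpha + 3*beta*phi^2 commonly used in this line of work, is shown below; all parameter values are typical illustrative choices and are assumptions, not values taken from the paper.

      import numpy as np

      def hr_flux_step(state, I_ext, dt=0.01,
                       a=1.0, b=3.0, c=1.0, d=5.0, s=4.0, r=0.006, x0=-1.6,
                       k0=0.9, k1=0.4, k2=0.9, alpha=0.1, beta=0.02):
          """One Euler step of a Hindmarsh-Rose neuron with a magnetic-flux feedback term."""
          x, y, z, phi = state
          rho = alpha + 3.0 * beta * phi ** 2                  # memductance-like term
          dx = y - a * x ** 3 + b * x ** 2 - z + I_ext - k0 * rho * x
          dy = c - d * x ** 2 - y
          dz = r * (s * (x - x0) - z)
          dphi = k1 * x - k2 * phi                             # flux driven by the membrane potential
          return (x + dt * dx, y + dt * dy, z + dt * dz, phi + dt * dphi)

      state = (0.1, 0.0, 0.0, 0.0)
      trace = []
      for _ in range(200000):
          state = hr_flux_step(state, I_ext=1.8)
          trace.append(state[0])
      print("membrane potential range:", min(trace), max(trace))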

  8. Factors Underlying Bursting Behavior in a Network of Cultured Hippocampal Neurons Exposed to Zero Magnesium

    PubMed Central

    Mangan, Patrick S.; Kapur, Jaideep

    2010-01-01

    Factors contributing to reduced magnesium-induced neuronal action potential bursting were investigated in primary hippocampal cell culture at high and low culture density. In nominally zero external magnesium medium, pyramidal neurons from high-density cultures produced recurrent spontaneous action potential bursts superimposed on prolonged depolarizations. These bursts were partially attenuated by the NMDA receptor antagonist D-APV. Pharmacological analysis of miniature excitatory postsynaptic currents (EPSCs) revealed 2 components: one sensitive to D-APV and another to the AMPA receptor antagonist DNQX. The components were kinetically distinct. Participation of NMDA receptors in reduced magnesium-induced synaptic events was supported by the localization of the NR1 subunit of the NMDA receptor with the presynaptic vesicular protein synaptophysin. Presynaptically, zero magnesium induced a significant increase in EPSC frequency likely attributable to increased neuronal hyperexcitability induced by reduced membrane surface charge screening. Mean quantal content was significantly increased in zero magnesium. Cells from low-density cultures did not exhibit action potential bursting in zero magnesium but did show increased EPSC frequency. Low-density neurons had less synaptophysin immunofluorescence and fewer active synapses as determined by FM1-43 analysis. These results demonstrate that multiple factors are involved in network bursting. Increased probability of transmitter release presynaptically, enhanced NMDA receptor-mediated excitability postsynaptically, and extent of neuronal interconnectivity contribute to initiation and maintenance of elevated network excitability. PMID:14534286

  9. Extending the Cortical Grasping Network: Pre-supplementary Motor Neuron Activity During Vision and Grasping of Objects.

    PubMed

    Lanzilotto, Marco; Livi, Alessandro; Maranesi, Monica; Gerbella, Marzio; Barz, Falk; Ruther, Patrick; Fogassi, Leonardo; Rizzolatti, Giacomo; Bonini, Luca

    2016-12-01

    Grasping relies on a network of parieto-frontal areas lying on the dorsolateral and dorsomedial parts of the hemispheres. However, the initiation and sequencing of voluntary actions also requires the contribution of mesial premotor regions, particularly the pre-supplementary motor area F6. We recorded 233 F6 neurons from 2 monkeys with chronic linear multishank neural probes during reaching-grasping visuomotor tasks. We showed that F6 neurons play a role in the control of forelimb movements and some of them (26%) exhibit visual and/or motor specificity for the target object. Interestingly, area F6 neurons form 2 functionally distinct populations, showing either visually-triggered or movement-related bursts of activity, in contrast to the sustained visual-to-motor activity displayed by ventral premotor area F5 neurons recorded in the same animals and with the same task during previous studies. These findings suggest that F6 plays a role in object grasping and extend existing models of the cortical grasping network.

  10. Contributions of diverse excitatory and inhibitory neurons to recurrent network activity in cerebral cortex.

    PubMed

    Neske, Garrett T; Patrick, Saundra L; Connors, Barry W

    2015-01-21

    The recurrent synaptic architecture of neocortex allows for self-generated network activity. One form of such activity is the Up state, in which neurons transiently receive barrages of excitatory and inhibitory synaptic inputs that depolarize many neurons to spike threshold before returning to a relatively quiescent Down state. The extent to which different cell types participate in Up states is still unclear. Inhibitory interneurons have particularly diverse intrinsic properties and synaptic connections with the local network, suggesting that different interneurons might play different roles in activated network states. We have studied the firing, subthreshold behavior, and synaptic conductances of identified cell types during Up and Down states in layers 5 and 2/3 in mouse barrel cortex in vitro. We recorded from pyramidal cells and interneurons expressing parvalbumin (PV), somatostatin (SOM), vasoactive intestinal peptide (VIP), or neuropeptide Y. PV cells were the most active interneuron subtype during the Up state, yet the other subtypes also received substantial synaptic conductances and often generated spikes. In all cell types except PV cells, the beginning of the Up state was dominated by synaptic inhibition, which decreased thereafter; excitation was more persistent, suggesting that inhibition is not the dominant force in terminating Up states. Compared with barrel cortex, SOM and VIP cells were much less active in entorhinal cortex during Up states. Our results provide a measure of functional connectivity of various neuron types in barrel cortex and suggest differential roles for interneuron types in the generation and control of persistent network activity.

  11. Brain without mind: Computer simulation of neural networks with modifiable neuronal interactions

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Rafelski, Johann; Winston, Jeffrey V.

    1985-07-01

    Aspects of brain function are examined in terms of a nonlinear dynamical system of highly interconnected neuron-like binary decision elements. The model neurons operate synchronously in discrete time, according to deterministic or probabilistic equations of motion. Plasticity of the nervous system, which underlies such cognitive collective phenomena as adaptive development, learning, and memory, is represented by temporal modification of interneuronal connection strengths depending on momentary or recent neural activity. A formal basis is presented for the construction of local plasticity algorithms, or connection-modification routines, spanning a large class. To build an intuitive understanding of the behavior of discrete-time network models, extensive computer simulations have been carried out (a) for nets with fixed, quasirandom connectivity and (b) for nets with connections that evolve under one or another choice of plasticity algorithm. From the former experiments, insights are gained concerning the spontaneous emergence of order in the form of cyclic modes of neuronal activity. In the course of the latter experiments, a simple plasticity routine (“brainwashing,” or “anti-learning”) was identified which, applied to nets with initially quasirandom connectivity, creates model networks which provide more felicitous starting points for computer experiments on the engramming of content-addressable memories and on learning more generally. The potential relevance of this algorithm to developmental neurobiology and to sleep states is discussed. The model considered is at the same time a synthesis of earlier synchronous neural-network models and an elaboration upon them; accordingly, the present article offers both a focused review of the dynamical properties of such systems and a selection of new findings derived from computer simulation.
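
    As a toy illustration of the ingredients described above (not the specific "brainwashing" routine), the following sketch runs a synchronous network of binary threshold neurons, applies one simple local Hebbian-style connection-modification rule, and then looks for the cyclic mode the post-learning dynamics settles into; the connectivity, thresholds, and learning rate are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(4)
      N = 40
      W = rng.normal(0.0, 1.0, size=(N, N)) * (rng.random((N, N)) < 0.2)  # sparse quasirandom couplings
      np.fill_diagonal(W, 0.0)
      theta = np.zeros(N)                          # firing thresholds
      s = (rng.random(N) < 0.5).astype(float)      # initial binary state

      def step(s, W, theta):
          """Synchronous, deterministic update of all binary decision elements."""
          return (W @ s > theta).astype(float)

      # One simple local plasticity rule: strengthen couplings between co-active
      # (or co-silent) pairs and weaken the rest, based only on recent activity.
      eta = 0.01
      for _ in range(200):
          s_new = step(s, W, theta)
          W += eta * np.outer(2 * s_new - 1, 2 * s - 1) * (W != 0)   # modify existing connections only
          s = s_new

      # Detect the cyclic mode of the post-learning dynamics.
      seen = {}
      for t in range(1000):
          key = s.tobytes()
          if key in seen:
              print("cycle length:", t - seen[key])
              break
          seen[key] = t
          s = step(s, W, theta)
      else:
          print("no cycle found within 1000 steps")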

  12. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons.

    PubMed

    Zerlaut, Yann; Chemla, Sandrine; Chavane, Frederic; Destexhe, Alain

    2018-02-01

    Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at macroscopic scales. Since for each pixel VSDi signals report the average membrane potential over hundreds of neurons, it seems natural to use a mean-field formalism to model such signals. Here, we present a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based synaptic interactions. We study a network of regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism, together with a semi-analytic approach to the transfer function of AdEx neurons to describe the average dynamics of the coupled populations. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the analytical description. Second, we investigate the response of the network to time-varying external input, and show that the mean-field model predicts the response time course of the population. Finally, to model VSDi signals, we consider a one-dimensional ring model made of interconnected RS-FS mean-field units. We found that this model can reproduce the spatio-temporal patterns seen in VSDi of awake monkey visual cortex as a response to local and transient visual stimuli. Conversely, we show that the model allows one to infer physiological parameters from the experimentally-recorded spatio-temporal patterns.

  13. Effect of spike-timing-dependent plasticity on stochastic burst synchronization in a scale-free neuronal network.

    PubMed

    Kim, Sang-Yoon; Lim, Woochang

    2018-06-01

    We consider an excitatory population of subthreshold Izhikevich neurons which cannot fire spontaneously without noise. As the coupling strength passes a threshold, individual neurons exhibit noise-induced burstings. This neuronal population has adaptive dynamic synaptic strengths governed by spike-timing-dependent plasticity (STDP). However, STDP was not considered in previous works on stochastic burst synchronization (SBS) between noise-induced burstings of subthreshold neurons. Here, we study the effect of additive STDP on SBS by varying the noise intensity D in the Barabási-Albert scale-free network (SFN). One of our main findings is a Matthew effect in synaptic plasticity, which occurs due to a positive feedback process. Good burst synchronization (with higher bursting measure) gets better via long-term potentiation (LTP) of synaptic strengths, while bad burst synchronization (with lower bursting measure) gets worse via long-term depression (LTD). Consequently, a step-like rapid transition to SBS occurs by changing D, in contrast to a relatively smooth transition in the absence of STDP. We also investigate the effects of network architecture on SBS by varying the symmetric attachment degree [Formula: see text] and the asymmetry parameter [Formula: see text] in the SFN, and Matthew effects are also found to occur by varying [Formula: see text] and [Formula: see text]. Furthermore, the emergence of LTP and LTD of synaptic strengths is investigated in detail via our own microscopic methods based on both the distributions of time delays between the burst onset times of the pre- and the post-synaptic neurons and the pair-correlations between the pre- and the post-synaptic instantaneous individual burst rates (IIBRs). Finally, a multiplicative STDP case (depending on states) with soft bounds is also investigated in comparison with the additive STDP case (independent of states) with hard bounds. Due to the soft bounds, a Matthew effect with some quantitative
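
    The additive pair-based STDP rule referred to above can be written compactly: a causal pre-before-post pairing potentiates the synapse, the reverse ordering depresses it, and the weight is clipped to hard bounds. The sketch below applies such a rule, with all-to-all spike pairing, to two made-up spike trains; the time constants and amplitudes are illustrative, not the paper's values.

      import numpy as np

      def additive_stdp(pre, post, w0=0.5, A_plus=0.01, A_minus=0.012,
                        tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
          """Pair-based additive STDP over all pre/post spike pairs (times in ms)."""
          w = w0
          for t_post in post:
              for t_pre in pre:
                  dt = t_post - t_pre
                  if dt > 0:                                   # pre before post: LTP
                      w += A_plus * np.exp(-dt / tau_plus)
                  elif dt < 0:                                 # post before pre: LTD
                      w -= A_minus * np.exp(dt / tau_minus)
          return float(np.clip(w, w_min, w_max))               # hard bounds (additive STDP)

      rng = np.random.default_rng(5)
      pre = np.sort(rng.uniform(0, 1000, 50))
      causal_post = pre + rng.uniform(1, 5, size=50)            # post reliably follows pre
      random_post = np.sort(rng.uniform(0, 1000, 50))
      print("causal pairing :", additive_stdp(pre, causal_post))
      print("random pairing :", additive_stdp(pre, random_post))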

  14. Modeling extracellular fields for a three-dimensional network of cells using NEURON.

    PubMed

    Appukuttan, Shailesh; Brain, Keith L; Manchanda, Rohit

    2017-10-01

    Computational modeling of biological cells usually ignores their extracellular fields, assuming them to be inconsequential. Though such an assumption might be justified in certain cases, it is debatable for networks of tightly packed cells, such as in the central nervous system and the syncytial tissues of cardiac and smooth muscle. In the present work, we demonstrate a technique to couple the extracellular fields of individual cells within the NEURON simulation environment. The existing features of the simulator are extended by explicitly defining current balance equations, resulting in the coupling of the extracellular fields of adjacent cells. With this technique, we achieved continuity of extracellular space for a network model, thereby allowing the exploration of extracellular interactions computationally. Using a three-dimensional network model, passive and active electrical properties were evaluated under varying levels of extracellular volumes. Simultaneous intracellular and extracellular recordings for synaptic and action potentials were analyzed, and the potential of ephaptic transmission towards functional coupling of cells was explored. We have implemented a true bi-domain representation of a network of cells, with the extracellular domain being continuous throughout the entire model. This has hitherto not been achieved using NEURON or other compartmental modeling platforms. We have demonstrated the coupling of the extracellular field of every cell in a three-dimensional model to obtain a continuous uniform extracellular space. This technique provides a framework for the investigation of interactions in tightly packed networks of cells via their extracellular fields.

  15. Altered neuronal network and rescue in a human MECP2 duplication model

    PubMed Central

    Nageshappa, Savitha; Carromeu, Cassiano; Trujillo, Cleber A.; Mesci, Pinar; Espuny-Camacho, Ira; Pasciuto, Emanuela; Vanderhaeghen, Pierre; Verfaillie, Catherine; Raitano, Susanna; Kumar, Anujith; Carvalho, Claudia M.B.; Bagni, Claudia; Ramocki, Melissa B.; Araujo, Bruno H. S.; Torres, Laila B.; Lupski, James R.; Van Esch, Hilde; Muotri, Alysson R.

    2015-01-01

    Increased dosage of MeCP2 results in a dramatic neurodevelopmental phenotype with onset at birth. We generated induced pluripotent stem cells (iPSC) from patients with the MECP2 duplication syndrome (MECP2dup), carrying different duplication sizes, to study the impact of increased MeCP2 dosage in human neurons. We show that cortical neurons derived from these different MECP2dup iPSC lines have increased synaptogenesis and dendritic complexity. Additionally, using multi-electrode arrays, we show that neuronal network synchronization was altered in MECP2dup-derived neurons. Given MeCP2 function at the epigenetic level, we tested whether these alterations were reversible using a library of compounds with defined activity on epigenetic pathways. One histone deacetylase inhibitor, NCH-51, was validated as a potential clinical candidate. Interestingly, this compound has never been considered before as a therapeutic alternative for neurological disorders. Our model recapitulates early stages of the human MECP2 duplication syndrome and represents a promising cellular tool to facilitate therapeutic drug screening for severe neurodevelopmental disorders. PMID:26347316

  16. Neuronal networks and self-organizing maps: new computer techniques in the acoustic evaluation of the infant cry.

    PubMed

    Schönweiler, R; Kaese, S; Möller, S; Rinscheid, A; Ptok, M

    1996-12-05

    Neuronal networks are computer-based techniques for the evaluation and control of complex information systems and processes. So far, they have been used in engineering, telecommunications, artificial speech, and speech recognition. A newer neural-network approach is the self-organizing map (Kohonen map). During the 'learning' phase, the map adapts to the patterns of the primary signals. In the phase of 'using the map', if an input signal falls within the field of the primary signals it resembles, the corresponding map unit is called a 'winner'. In our study, we recorded the cries of newborns and young infants using digital audio tape (DAT) and a high-quality microphone. The cries were elicited by tactile stimuli while the infants wore headphones. In 27 cases, delayed auditory feedback was presented to the children using a headphone and an additional three-head tape-recorder. Spectrographic characteristics of the cries were classified by 20-step bark spectra and then applied to the neuronal networks. It was possible to recognize similarities of different cries of the same children as well as interindividual differences, which are also audible to experienced listeners. Differences were obvious in profound hearing loss. We know much about the cries of both healthy and sick infants, but a reliable investigation regimen that can be used for clinical routine purposes has not yet been developed. If, in the future, it becomes possible to classify spectrographic characteristics automatically, even if they are not audible, neuronal networks may be helpful in the early diagnosis of infant diseases.
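
    A minimal self-organizing map along these lines can be trained in a few lines of code; the sketch below uses random 20-dimensional vectors as stand-ins for the 20-step bark spectra, and the map size, learning rate, and neighbourhood schedule are arbitrary choices.

      import numpy as np

      def train_som(data, grid=(8, 8), n_iter=5000, lr0=0.5, sigma0=3.0, rng=None):
          """Kohonen self-organizing map; returns weights of shape (rows, cols, dim)."""
          rng = rng or np.random.default_rng(0)
          rows, cols = grid
          W = rng.random((rows, cols, data.shape[1]))
          # Grid coordinates used by the neighbourhood function.
          coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
          for t in range(n_iter):
              x = data[rng.integers(len(data))]
              # Best-matching unit (the 'winner'): node whose weight vector is closest to x.
              dist = np.linalg.norm(W - x, axis=-1)
              bmu = np.unravel_index(np.argmin(dist), dist.shape)
              # Learning rate and neighbourhood radius shrink over training.
              frac = 1.0 - t / n_iter
              lr, sigma = lr0 * frac, sigma0 * frac + 0.5
              h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
              W += lr * h[..., None] * (x - W)
          return W

      rng = np.random.default_rng(6)
      spectra = rng.random((200, 20))            # stand-in for 20-step bark spectra of cries
      som = train_som(spectra, rng=rng)
      print("trained map shape:", som.shape)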

  17. Very long transients, irregular firing, and chaotic dynamics in networks of randomly connected inhibitory integrate-and-fire neurons.

    PubMed

    Zillmer, Rüdiger; Brunel, Nicolas; Hansel, David

    2009-03-01

    We present results of an extensive numerical study of the dynamics of networks of integrate-and-fire neurons connected randomly through inhibitory interactions. We first consider delayed interactions with infinitely fast rise and decay. Depending on the parameters, the network displays transients which are short or exponentially long in the network size. At the end of these transients, the dynamics settle on a periodic attractor. If the number of connections per neuron is large (approximately 1000), this attractor is a cluster state with a short period. In contrast, if the number of connections per neuron is small (approximately 100), the attractor has complex dynamics and a very long period. During the long transients, the neurons fire in a highly irregular manner. They can be viewed as quasistationary states in which, depending on the coupling strength, the pattern of activity is asynchronous or displays population oscillations. In the first case, the average firing rates and the variability of the single-neuron activity are well described by a mean-field theory valid in the thermodynamic limit. Bifurcations of the long transient dynamics from asynchronous to synchronous activity are also well predicted by this theory. The transient dynamics display features reminiscent of stable chaos. In particular, despite being linearly stable, the trajectories of the transient dynamics are destabilized by finite perturbations as small as O(1/N). We further show that stable chaos is also observed for postsynaptic currents with finite decay time. However, we report that, in this type of network, chaotic dynamics characterized by positive Lyapunov exponents can also be observed. In fact, we show that chaos occurs when the decay time of the synaptic currents is long compared to the synaptic delay, provided that the network is sufficiently large.

  18. Neuron-Like Networks Between Ribosomal Proteins Within the Ribosome

    NASA Astrophysics Data System (ADS)

    Poirot, Olivier; Timsit, Youri

    2016-05-01

    From the brain to the World Wide Web, information-processing networks share common scale-invariant properties. Here, we reveal the existence of neural-like networks at a molecular scale within the ribosome. We show that, with their extensions, ribosomal proteins form complex assortative interaction networks in which they communicate through tiny interfaces. The analysis of the crystal structures of 50S eubacterial particles reveals that most of these interfaces involve key phylogenetically conserved residues. The systematic observation of interactions between basic and aromatic amino acids at the interfaces and along the extensions provides new structural insights that may help decipher the molecular mechanisms of signal transmission within or between the ribosomal proteins. Similar to neurons interacting through “molecular synapses”, ribosomal proteins form a network that suggests an analogy with a simple molecular brain in which the “sensory-proteins” innervate the functional ribosomal sites, while the “inter-proteins” interconnect them into circuits suitable for processing the information flow that circulates during protein synthesis. It is likely that these circuits have evolved to coordinate both the complex macromolecular motions and the binding of the multiple factors during translation. This opens new perspectives on nanoscale information transfer and processing.

  19. Identification of a neuronal transcription factor network involved in medulloblastoma development

    PubMed Central

    2013-01-01

    Background: Medulloblastomas, the most frequent malignant brain tumours affecting children, comprise at least 4 distinct clinicogenetic subgroups. Aberrant sonic hedgehog (SHH) signalling is observed in approximately 25% of tumours and defines one subgroup. Although alterations in SHH pathway genes (e.g. PTCH1, SUFU) are observed in many of these tumours, high throughput genomic analyses have identified few other recurring mutations. Here, we have mutagenised the Ptch+/- murine tumour model using the Sleeping Beauty transposon system to identify additional genes and pathways involved in SHH subgroup medulloblastoma development. Results: Mutagenesis significantly increased medulloblastoma frequency and identified 17 candidate cancer genes, including orthologs of genes somatically mutated (PTEN, CREBBP) or associated with poor outcome (PTEN, MYT1L) in the human disease. Strikingly, these candidate genes were enriched for transcription factors (p = 2 × 10^-5), the majority of which (6/7; Crebbp, Myt1L, Nfia, Nfib, Tead1 and Tgif2) were linked within a single regulatory network enriched for genes associated with a differentiated neuronal phenotype. Furthermore, activity of this network varied significantly between the human subgroups, was associated with metastatic disease, and predicted poor survival specifically within the SHH subgroup of tumours. Igf2, previously implicated in medulloblastoma, was the most differentially expressed gene in murine tumours with network perturbation, and network activity in both mouse and human tumours was characterised by enrichment for multiple gene-sets indicating increased cell proliferation, IGF signalling, MYC target upregulation, and decreased neuronal differentiation. Conclusions: Collectively, our data support a model of medulloblastoma development in SB-mutagenised Ptch+/- mice which involves disruption of a novel transcription factor network leading to Igf2 upregulation, proliferation of GNPs, and tumour formation. Moreover, our

  20. Identification of a neuronal transcription factor network involved in medulloblastoma development.

    PubMed

    Lastowska, Maria; Al-Afghani, Hani; Al-Balool, Haya H; Sheth, Harsh; Mercer, Emma; Coxhead, Jonathan M; Redfern, Chris P F; Peters, Heiko; Burt, Alastair D; Santibanez-Koref, Mauro; Bacon, Chris M; Chesler, Louis; Rust, Alistair G; Adams, David J; Williamson, Daniel; Clifford, Steven C; Jackson, Michael S

    2013-07-11

    Medulloblastomas, the most frequent malignant brain tumours affecting children, comprise at least 4 distinct clinicogenetic subgroups. Aberrant sonic hedgehog (SHH) signalling is observed in approximately 25% of tumours and defines one subgroup. Although alterations in SHH pathway genes (e.g. PTCH1, SUFU) are observed in many of these tumours, high throughput genomic analyses have identified few other recurring mutations. Here, we have mutagenised the Ptch+/- murine tumour model using the Sleeping Beauty transposon system to identify additional genes and pathways involved in SHH subgroup medulloblastoma development. Mutagenesis significantly increased medulloblastoma frequency and identified 17 candidate cancer genes, including orthologs of genes somatically mutated (PTEN, CREBBP) or associated with poor outcome (PTEN, MYT1L) in the human disease. Strikingly, these candidate genes were enriched for transcription factors (p = 2 × 10^-5), the majority of which (6/7; Crebbp, Myt1L, Nfia, Nfib, Tead1 and Tgif2) were linked within a single regulatory network enriched for genes associated with a differentiated neuronal phenotype. Furthermore, activity of this network varied significantly between the human subgroups, was associated with metastatic disease, and predicted poor survival specifically within the SHH subgroup of tumours. Igf2, previously implicated in medulloblastoma, was the most differentially expressed gene in murine tumours with network perturbation, and network activity in both mouse and human tumours was characterised by enrichment for multiple gene-sets indicating increased cell proliferation, IGF signalling, MYC target upregulation, and decreased neuronal differentiation. Collectively, our data support a model of medulloblastoma development in SB-mutagenised Ptch+/- mice which involves disruption of a novel transcription factor network leading to Igf2 upregulation, proliferation of GNPs, and tumour formation. Moreover, our results identify rational

  1. The iso-response method: measuring neuronal stimulus integration with closed-loop experiments

    PubMed Central

    Gollisch, Tim; Herz, Andreas V. M.

    2012-01-01

    Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments. PMID:23267315
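
    The core closed-loop idea, searching stimulus space for points that produce the same response, can be sketched with a toy two-component neuron model and a bisection along rays from the origin; the model nonlinearity and the target rate below are invented for illustration and do not represent any particular cell.

      import numpy as np

      def toy_response(s1, s2):
          """Invented two-input neuron: squares one input, then a saturating output nonlinearity."""
          drive = 1.2 * s1 ** 2 + 0.8 * s2
          return 50.0 / (1.0 + np.exp(-(drive - 2.0)))        # firing rate in Hz

      def iso_response_point(direction, target, r_max=10.0, tol=0.01):
          """Bisection along a ray from the origin until the response matches the target."""
          lo, hi = 0.0, r_max
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              r = toy_response(mid * direction[0], mid * direction[1])
              lo, hi = (mid, hi) if r < target else (lo, mid)
          return 0.5 * (lo + hi) * np.array(direction)

      # Probe several directions in the (s1, s2) stimulus plane for a 25 Hz iso-response curve.
      for angle in np.linspace(0.05, np.pi / 2 - 0.05, 5):
          d = (np.cos(angle), np.sin(angle))
          s = iso_response_point(d, target=25.0)
          print(f"direction {angle:4.2f} rad -> stimulus {s.round(3)}, rate {toy_response(*s):.1f} Hz")

    The shape traced out by these points reveals how the two stimulus components are combined before the output nonlinearity, which is exactly the information the closed-loop experiments described above are designed to extract.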

  2. Development of spiral wave in a regular network of excitatory neurons due to stochastic poisoning of ion channels

    NASA Astrophysics Data System (ADS)

    Wu, Xinyi; Ma, Jun; Li, Fan; Jia, Ya

    2013-12-01

    Some experimental evidence shows that spiral waves can be observed in the cortex of the brain, and that the propagation of these spiral waves plays an important role in signal communication by acting as a pacemaker. The profile of a numerically generated spiral wave is often perfect, whereas the profile observed in experiments is neither perfect nor smooth. In this paper, the formation and development of spiral waves in a regular network of Morris-Lecar neurons, in which neurons are placed uniformly on the nodes of a two-dimensional array and each node is coupled to its nearest neighbors, are investigated by considering the effects of stochastic ion-channel poisoning and channel noise. The formation and selection of spiral waves are examined as follows. (1) Diverse external forcing currents are imposed on the neurons in the nearest-neighbor network of excitatory neurons, a target-like wave emerges, and its potential mechanism is discussed; (2) artificial defects and locally poisoned areas are introduced in the network to induce new waves that interact with the target wave; (3) a spiral wave can be induced to occupy the network when the target wave is blocked by artificial defects or by a poisoned area with regular borders; (4) the stochastic poisoning effect is introduced by randomly modifying the borders (areas) of specific regions in the network. It is found that a spiral wave can also develop to occupy the network for an appropriate poisoning ratio. The growth of the poisoned area is measured, and the effect of channel noise is also investigated. It is confirmed that a perfect spiral wave emerges in the network under gradient poisoning even when channel noise is considered.

  3. Fokker-Planck description of conductance-based integrate-and-fire neuronal networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovacic, Gregor; Tao, Louis; Rangan, Aaditya V.

    2009-08-15

    Steady dynamics of coupled conductance-based integrate-and-fire neuronal networks in the limit of small fluctuations is studied via the equilibrium states of a Fokker-Planck equation. An asymptotic approximation for the membrane-potential probability density function is derived and the corresponding gain curves are found. Validity conditions are discussed for the Fokker-Planck description and verified via direct numerical simulations.
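
    For orientation, a Fokker-Planck equation of the generic type invoked here, written for the membrane-potential density rho(v,t) of an integrate-and-fire neuron with drift mu(v) and fluctuation strength sigma^2; this is a schematic one-dimensional form, not the specific conductance-based system derived in the paper:

```latex
\frac{\partial \rho(v,t)}{\partial t}
  = -\frac{\partial}{\partial v}\Bigl[\mu(v)\,\rho(v,t)\Bigr]
  + \frac{\sigma^{2}}{2}\,\frac{\partial^{2}\rho(v,t)}{\partial v^{2}},
\qquad
J(v,t) = \mu(v)\,\rho(v,t) - \frac{\sigma^{2}}{2}\,\frac{\partial\rho(v,t)}{\partial v}.
```

    Equilibrium states follow from setting the time derivative to zero with absorbing and reset boundary conditions, and the probability flux J evaluated at threshold gives the steady-state firing rate from which gain curves are constructed.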

  4. Spiking Neurons for Analysis of Patterns

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological
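
    The timing-based computation described above can be illustrated with a toy example that is not the state-variable model itself: a leaky integrate-and-fire unit receives the same staggered input spikes through dendrites with different delays and only fires when the delays re-align the spikes so that they arrive nearly coincidentally. All numbers are arbitrary.

```python
import numpy as np

# Minimal sketch (not NASA's SVM) of timing-based computation with dendritic delays.
tau_m, v_th, dt = 5.0, 2.5, 0.1               # ms, arbitrary units, ms

def lif_response(input_times, dendritic_delays, weight=1.0, T=50.0):
    """Return output spike times for delta inputs, each delayed per dendrite."""
    arrival = sorted(t + d for t, d in zip(input_times, dendritic_delays))
    v, out = 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        v *= np.exp(-dt / tau_m)              # leaky decay of the membrane
        v += weight * sum(1 for a in arrival if t <= a < t + dt)
        if v >= v_th:
            out.append(round(t, 1))
            v = 0.0
    return out

# Three input spikes staggered by 2 ms.
inputs = [10.0, 12.0, 14.0]
print("matched delays   ->", lif_response(inputs, [4.0, 2.0, 0.0]))   # re-aligned
print("unmatched delays ->", lif_response(inputs, [0.0, 0.0, 0.0]))   # dispersed
```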

  5. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs.

    PubMed

    Kerr, Robert R; Burkitt, Anthony N; Thomas, Doreen A; Gilson, Matthieu; Grayden, David B

    2013-01-01

    Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
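
    The following sketch gives a back-of-the-envelope version of the delay-selection argument (it is not the paper's analytical model): for two neurons firing as rate-modulated Poisson processes at a given oscillation frequency, the expected weight drift of a recurrent connection is the additive STDP window integrated against the spike-time correlation, shifted by the axonal delay. The STDP parameters, rates, and modulation depth are illustrative assumptions.

```python
import numpy as np

# Additive STDP window (illustrative parameters).
A_plus, A_minus = 0.005, 0.00525
tau_plus, tau_minus = 16.8e-3, 33.7e-3              # s

def stdp_window(dt):
    """dt = t_post - arrival time of the presynaptic spike (s)."""
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

def drift(delay, f, r0=10.0, m=1.0):
    """Expected weight drift for an axonal delay (s) at oscillation frequency f.
    Spike-time correlation of two rate-modulated Poisson neurons with rate
    r0*(1 + m*cos(2*pi*f*t)) is r0**2 * (1 + 0.5*m**2*cos(2*pi*f*lag))."""
    lags = np.linspace(-0.25, 0.25, 5001)           # t_post - t_pre (firing times)
    corr = r0**2 * (1.0 + 0.5 * m**2 * np.cos(2 * np.pi * f * lags))
    # The presynaptic spike reaches the synapse `delay` seconds after firing.
    return float(np.sum(stdp_window(lags - delay) * corr) * (lags[1] - lags[0]))

delays = np.arange(1e-3, 41e-3, 1e-3)
for f in (10.0, 20.0):
    best = delays[np.argmax([drift(d, f) for d in delays])]
    print(f"input frequency {f:4.1f} Hz -> most favoured axonal delay {best * 1e3:4.1f} ms")
```

    With additive STDP and synaptic competition, the connections whose delays receive the largest drift are the ones that end up potentiated, and the favoured delay shifts with the input frequency, consistent with the frequency dependence reported above.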

  6. Delay Selection by Spike-Timing-Dependent Plasticity in Recurrent Networks of Spiking Neurons Receiving Oscillatory Inputs

    PubMed Central

    Kerr, Robert R.; Burkitt, Anthony N.; Thomas, Doreen A.; Gilson, Matthieu; Grayden, David B.

    2013-01-01

    Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem. PMID:23408878

  7. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model

    PubMed Central

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits both network size and execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To address this, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size, while keeping the network execution speed close to real time with high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of the method in practice. These implementation techniques make it possible to construct large FPGA-based network models for investigating the effect of different neurophysiological mechanisms, such as voltage-gated channels and synaptic activity, on network behavior within an acceptable execution time. Together with the inherent properties of FPGAs, such as parallelism and re-configurability, our approach makes the FPGA-based system a suitable candidate for studies of neural control in cognitive robots and systems. PMID:25484854
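
    As a software analogue of the arithmetic pipeline described above, the sketch below advances a single classic Hodgkin-Huxley neuron by step-by-step (forward Euler) integration. On the FPGA the same update would be computed in fixed point, with the exponential and divisive rate functions evaluated by CORDIC-style iterations, which this floating-point illustration does not attempt to reproduce; the parameters are the textbook squid-axon values, not necessarily those of the cited network.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (illustrative textbook values).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

dt, T, I_ext = 0.01, 50.0, 10.0              # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, prev_V = 0, V
for step in range(int(T / dt)):
    # Step-by-step (forward Euler) integration, as in the FPGA pipeline.
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    prev_V, V = V, V + dt * (I_ext - I_ion) / C_m
    if prev_V < 0.0 <= V:                    # crude upward zero-crossing detection
        spikes += 1

print(f"spikes in {T} ms at I_ext = {I_ext}: {spikes}")
```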

  8. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits both network size and execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To address this, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size, while keeping the network execution speed close to real time with high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of the method in practice. These implementation techniques make it possible to construct large FPGA-based network models for investigating the effect of different neurophysiological mechanisms, such as voltage-gated channels and synaptic activity, on network behavior within an acceptable execution time. Together with the inherent properties of FPGAs, such as parallelism and re-configurability, our approach makes the FPGA-based system a suitable candidate for studies of neural control in cognitive robots and systems.

  9. Early Correlated Network Activity in the Hippocampus: Its Putative Role in Shaping Neuronal Circuits.

    PubMed

    Griguoli, Marilena; Cherubini, Enrico

    2017-01-01

    Synchronized neuronal activity occurring at different developmental stages in various brain structures is a hallmark of developing circuits. This activity, which differs in its specific patterns among animal species, may play a crucial role in the de novo formation and shaping of neuronal networks. In the rodent hippocampus in vitro, the so-called giant depolarizing potentials (GDPs) constitute a primordial form of neuronal synchrony preceding more organized forms of activity such as oscillations in the theta and gamma frequency ranges. GDPs are generated at the network level by the interaction of the neurotransmitters glutamate and GABA, which, immediately after birth, exert both a depolarizing and an excitatory action on their targets. GDPs are triggered by GABAergic interneurons, which, by virtue of their extensive axonal branching, operate as functional hubs to synchronize large ensembles of cells. Intrinsic bursting activity, driven by a persistent sodium conductance and facilitated by the low expression of the Kv7.2 and Kv7.3 channel subunits responsible for the M-current (IM), exerts a permissive role in GDP generation. Here, we discuss how GDPs are generated in a probabilistic way when neuronal excitability within a local circuit reaches a certain threshold, and how GDP-associated calcium transients act as coincidence detectors that enhance synaptic strength at emerging GABAergic and glutamatergic synapses. We discuss the possible in vivo correlate of this activity. Finally, we review recent data showing how, in several animal models of neuropsychiatric disorders including autism, GDP dysfunction is associated with morphological alterations of neuronal circuits and behavioral deficits reminiscent of those observed in patients.

  10. Validation of long-term primary neuronal cultures and network activity through the integration of reversibly bonded microbioreactors and MEA substrates.

    PubMed

    Biffi, Emilia; Menegon, Andrea; Piraino, Francesco; Pedrocchi, Alessandra; Fiore, Gianfranco B; Rasponi, Marco

    2012-01-01

    In vitro recording of neuronal electrical activity is a widely used technique for understanding brain function and studying the effect of drugs on the central nervous system. The integration of microfluidic devices with microelectrode arrays (MEAs) enables the recording of network activity in a controlled microenvironment. In this work, an integrated microfluidic system for neuronal cultures was developed by reversibly coupling a PDMS microfluidic device with a commercial flat MEA through magnetic forces. Neurons from mouse embryos were cultured in a 100 µm channel and their activity was followed for up to 18 days in vitro. The maturation of the networks and their morphological and functional characteristics were comparable with those of networks cultured in macro-environments and described in the literature. We thus demonstrated the long-term culturing of primary neuronal cells in a reversibly bonded, magnetically sealed microfluidic device, which will be fundamental for neuropharmacological studies. Copyright © 2011 Wiley Periodicals, Inc.

  11. Planar patch clamp for neuronal networks--considerations and future perspectives.

    PubMed

    Bosca, Alessandro; Martina, Marzia; Py, Christophe

    2014-01-01

    The patch-clamp technique is generally accepted as the gold standard for studying ion channel activity allowing investigators to either "clamp" membrane voltage and directly measure transmembrane currents through ion channels, or to passively monitor spontaneously occurring intracellular voltage oscillations. However, this resulting high information content comes at a price. The technique is labor-intensive and requires highly trained personnel and expensive equipment. This seriously limits its application as an interrogation tool for drug development. Patch-clamp chips have been developed in the last decade to overcome the tedious manipulations associated with the use of glass pipettes in conventional patch-clamp experiments. In this chapter, we describe some of the main materials and fabrication protocols that have been developed to date for the production of patch-clamp chips. We also present the concept of a patch-clamp chip array providing high resolution patch-clamp recordings from individual cells at multiple sites in a network of communicating neurons. On this chip, the neurons are aligned with the aperture-probes using chemical patterning. In the discussion we review the potential use of this technology for pharmaceutical assays, neuronal physiology and synaptic plasticity studies.

  12. Response sensitivity of barrel neuron subpopulations to simulated thalamic input.

    PubMed

    Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J

    2010-06-01

    Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.

  13. Bistability and up/down state alternations in inhibition-dominated randomly connected networks of LIF neurons.

    PubMed

    Tartaglia, Elisa M; Brunel, Nicolas

    2017-09-20

    Electrophysiological recordings in cortex in vivo have revealed a rich variety of dynamical regimes ranging from irregular asynchronous states to a diversity of synchronized states, depending on species, anesthesia, and external stimulation. The average population firing rate in these states is typically low. We study analytically and numerically a network of sparsely connected excitatory and inhibitory integrate-and-fire neurons in the inhibition-dominated, low firing rate regime. For sufficiently high values of the external input, the network exhibits an asynchronous low firing frequency state (L). Depending on the synaptic time constants, we show that two scenarios may occur as external inputs are decreased: (1) the L state can destabilize through a Hopf bifurcation, leading to synchronized oscillations spanning the δ to β frequency range; (2) the network can reach a bistable region between the low firing frequency network state (L) and a quiescent one (Q). Adding an adaptation current to excitatory neurons leads to spontaneous alternations between L and Q states, similar to experimental observations of UP and DOWN state alternations.
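
    A deliberately reduced caricature of the L/Q alternation mechanism, not the paper's sparsely connected spiking network: a single excitatory population rate with a slow adaptation variable switches between an active and a quiescent state once adaptation destabilizes each state in turn. All parameters are invented for illustration.

```python
import numpy as np

# Rate-model caricature of activity/quiescence alternations driven by adaptation.
r_max, theta, k = 20.0, 6.0, 0.7     # sigmoidal gain function parameters
J, I_ext = 1.0, 7.0                  # recurrent strength, external drive
b, tau_r, tau_a = 1.5, 10.0, 500.0   # adaptation strength and time constants (ms)

def F(u):
    return r_max / (1.0 + np.exp(-(u - theta) / k))

dt, T = 0.5, 6000.0                  # ms
r, a = 0.0, 0.0
state, switches = "Q", 0
for step in range(int(T / dt)):
    u = J * r + I_ext - a            # net drive: recurrence + input - adaptation
    r += dt * (-r + F(u)) / tau_r
    a += dt * (-a + b * r) / tau_a
    new_state = "L" if r > 0.5 * r_max else "Q"
    if new_state != state:
        switches += 1
        state = new_state

print(f"number of L/Q transitions in {T / 1000:.0f} s: {switches}")
```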

  14. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPDs), which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
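
    Schematically, the closed mean-field equations referred to here take the McKean-Vlasov-Fokker-Planck form for the population density p(t,x) of a neuron's state vector x, in which the drift depends on the density itself through an interaction term (a generic template, not the exact system derived in the paper):

```latex
\partial_t p(t,x)
  = -\nabla_x \!\cdot\! \Bigl[\, b\Bigl(x,\int g(x,y)\,p(t,y)\,dy\Bigr)\, p(t,x) \Bigr]
  + \frac{1}{2}\sum_{i,j} \partial_{x_i}\partial_{x_j}\bigl[ D_{ij}(x)\, p(t,x) \bigr],
```

    where b is the single-neuron drift, g the synaptic interaction kernel whose average over the density supplies the self-consistent mean-field input, and D the diffusion matrix arising from channel and input noise.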

  15. Self-Organized Information Processing in Neuronal Networks: Replacing Layers in Deep Networks by Dynamics

    NASA Astrophysics Data System (ADS)

    Kirst, Christoph

    It is astonishing how the sub-parts of a brain co-act to produce coherent behavior. What are the mechanisms that coordinate information processing and communication, and how can they be changed flexibly to cope with variable contexts? Here we show that when information is encoded in the deviations around a collective dynamical reference state of a recurrent network, the propagation of these fluctuations depends strongly on precisely this underlying reference. Information here 'surfs' on top of the collective dynamics, and switching between states enables fast and flexible rerouting of information. This in turn affects local processing and consequently the global reference dynamics, which re-regulates the distribution of information. This provides a generic mechanism for self-organized information processing, as we demonstrate with an oscillatory Hopfield network that performs contextual pattern recognition. Deep neural networks have recently proven very successful. Here we show that generating information channels via collective reference dynamics can effectively compress a deep multi-layer architecture into a single layer, making this mechanism a promising candidate for the organization of information processing in biological neuronal networks.

  16. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    PubMed

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

    Tinnitus is the perception of sound in the ears or in the head when no external source is present. Sound therapy is one of the most effective tinnitus treatments proposed to date. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have previously proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model that adds more detailed structure in the form of a simplified tonotopic organization of the auditory system. The model employs integrate-and-fire neurons and incorporates homeostatic plasticity. Computer simulations show that the model can reproduce both the generation of oscillation and its cessation by external input. This suggests that the present framework is promising for modeling tinnitus generation and the effects of sound therapy.
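
    A rate-based caricature of the homeostatic mechanism invoked above (the cited model itself uses integrate-and-fire neurons and a tonotopic network): a recurrent weight is scaled so that the unit's rate tracks a fixed target, so reduced afferent drive strengthens the recurrent loop and produces self-sustained activity, while added, sound-therapy-like input lets the scaling bring the weight back down. All constants are illustrative.

```python
import numpy as np

r_target = 5.0          # homeostatic set point (Hz)
alpha = 0.002           # synaptic scaling rate
tau_r = 20.0            # rate time constant (ms)
dt = 1.0                # ms

def simulate(afferent, w0=0.1, T=20000):
    """Return (final rate, final recurrent weight) after T ms with constant
    afferent drive (low drive = hearing-loss analog; added drive = therapy)."""
    r, w = 0.0, w0
    for _ in range(int(T / dt)):
        drive = max(0.0, w * r + afferent)          # threshold-linear unit
        r += dt * (-r + drive) / tau_r
        w += dt * alpha * w * (r_target - r)        # homeostatic synaptic scaling
    return r, w

r_loss, w_loss = simulate(afferent=1.0)             # reduced peripheral input
r_ther, w_ther = simulate(afferent=6.0, w0=w_loss)  # sound-therapy-like input
print(f"after deprivation: rate {r_loss:4.1f} Hz, recurrent weight {w_loss:4.2f}")
print(f"after added input: rate {r_ther:4.1f} Hz, recurrent weight {w_ther:4.2f}")
```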

  17. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord

    PubMed Central

    Fratini, Michela; Bukreeva, Inna; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spanò, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2015-01-01

    Faults in the vascular network (VN) and neuronal networks of the spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, knowledge of the complete fine structure of the VN and the neuronal system is lacking, which represents a crucial problem. Conventional 2D imaging yields incomplete spatial coverage, leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that high-resolution X-ray phase-contrast tomography allows the simultaneous visualization of the three-dimensional VN and neuronal systems of ex-vivo mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with neither contrast agent nor sectioning and without destructive sample preparation. We image both the 3D distribution of the micro-capillary network and the micrometric nerve fibers, axon bundles, and neuron somata. Our approach is well suited to pre-clinical investigation of neurodegenerative pathologies and spinal cord injuries, in particular to resolve the entangled relationship between the VN and the neuronal system. PMID:25686728

  18. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    ERIC Educational Resources Information Center

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  19. Posttranscriptional control of neuronal development by microRNA networks.

    PubMed

    Gao, Fen-Biao

    2008-01-01

    The proper development of the nervous system requires precise spatial and temporal control of gene expression at both the transcriptional and translational levels. In different experimental model systems, microRNAs (miRNAs) - a class of small, endogenous, noncoding RNAs that control the translation and stability of many mRNAs - are emerging as important regulators of various aspects of neuronal development. Further dissection of the in vivo physiological functions of individual miRNAs promises to offer novel mechanistic insights into the gene regulatory networks that ensure the precise assembly of a functional nervous system.

  20. Physical and biological regulation of neuron regenerative growth and network formation on recombinant dragline silks

    DOE PAGES

    An, Bo; Tang-Schomer, Min D.; Huang, Wenwen; ...

    2015-02-11

    In this paper, recombinant spider silks produced in transgenic goat milk were studied as cell culture matrices for neuronal growth. Major ampullate spidroin 1 (MaSp1) supported neuronal growth, axon extension and network connectivity, with cell morphology comparable to the gold standard poly-lysine. In addition, neurons growing on MaSp1 films had increased neural cell adhesion molecule (NCAM) expression at both mRNA and protein levels. The results indicate that MaSp1 films present useful surface charge and substrate stiffness to support the growth of primary rat cortical neurons. Moreover, a putative neuron-specific surface binding sequence GRGGL within MaSp1 may contribute to the biological regulation of neuron growth. These findings indicate that MaSp1 could regulate neuron growth through its physical and biological features. Finally, this dual regulation mode of MaSp1 could provide an alternative strategy for generating functional silk materials for neural tissue engineering.

  1. Physical and biological regulation of neuron regenerative growth and network formation on recombinant dragline silks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Bo; Tang-Schomer, Min D.; Huang, Wenwen

    In this paper, recombinant spider silks produced in transgenic goat milk were studied as cell culture matrices for neuronal growth. Major ampullate spidroin 1 (MaSp1) supported neuronal growth, axon extension and network connectivity, with cell morphology comparable to the gold standard poly-lysine. In addition, neurons growing on MaSp1 films had increased neural cell adhesion molecule (NCAM) expression at both mRNA and protein levels. The results indicate that MaSp1 films present useful surface charge and substrate stiffness to support the growth of primary rat cortical neurons. Moreover, a putative neuron-specific surface binding sequence GRGGL within MaSp1 may contribute to the biological regulation of neuron growth. These findings indicate that MaSp1 could regulate neuron growth through its physical and biological features. Finally, this dual regulation mode of MaSp1 could provide an alternative strategy for generating functional silk materials for neural tissue engineering.

  2. Continuous and lurching traveling pulses in neuronal networks with delay and spatially decaying connectivity

    PubMed Central

    Golomb, David; Ermentrout, G. Bard

    1999-01-01

    Propagation of discharges in cortical and thalamic systems, which is used as a probe for examining network circuitry, is studied by constructing a one-dimensional model of integrate-and-fire neurons that are coupled by excitatory synapses with delay. Each neuron fires only one spike. The velocity and stability of propagating continuous pulses are calculated analytically. Above a certain critical value of the constant delay, these pulses lose stability. Instead, lurching pulses propagate with discontinuous and periodic spatio-temporal characteristics. The parameter regime for which lurching occurs is strongly affected by the footprint (connectivity) shape; bistability may occur with a square footprint shape but not with an exponential footprint shape. For strong synaptic coupling, the velocity of both continuous and lurching pulses increases logarithmically with the synaptic coupling strength gsyn for an exponential footprint shape, and it is bounded for a step footprint shape. We conclude that the differences in velocity and shape between the front of thalamic spindle waves in vitro and cortical paroxysmal discharges stem from their different effective delay; in thalamic networks, large effective delay between inhibitory neurons arises from their effective interaction via the excitatory cells which display postinhibitory rebound. PMID:10557346

  3. Network dynamics of 3D engineered neuronal cultures: a new experimental model for in-vitro electrophysiology.

    PubMed

    Frega, Monica; Tedesco, Mariateresa; Massobrio, Paolo; Pesce, Mattia; Martinoia, Sergio

    2014-06-30

    Despite the extensive use of in-vitro models for neuroscientific investigations and notwithstanding the growing field of network electrophysiology, all studies on cultured cells devoted to elucidating neurophysiological mechanisms and computational properties are based on 2D neuronal networks. These networks are usually grown on specific rigid substrates (sometimes with embedded electrodes) and lack most of the constituents of the in-vivo-like environment: cell morphology, cell-to-cell interaction, and neuritic outgrowth in all directions. Cells in a brain region develop in a 3D space and interact with a complex multi-cellular environment and extracellular matrix. From this perspective, 3D networks coupled to micro-transducer arrays represent a new and powerful in-vitro model capable of better emulating in-vivo physiology. In this work, we present a new experimental paradigm constituted by 3D hippocampal networks coupled to Micro-Electrode Arrays (MEAs), and we show how the features of the recorded network dynamics differ from those of the corresponding 2D network model. Further development of the proposed 3D in-vitro model by adding embedded functionalized scaffolds might open new prospects for manipulating, stimulating, and recording neuronal activity to elucidate neurophysiological mechanisms and to design bio-hybrid microsystems.

  4. Network dynamics of 3D engineered neuronal cultures: a new experimental model for in-vitro electrophysiology

    PubMed Central

    Frega, Monica; Tedesco, Mariateresa; Massobrio, Paolo; Pesce, Mattia; Martinoia, Sergio

    2014-01-01

    Despite the extensive use of in-vitro models for neuroscientific investigations and notwithstanding the growing field of network electrophysiology, all studies on cultured cells devoted to elucidating neurophysiological mechanisms and computational properties are based on 2D neuronal networks. These networks are usually grown on specific rigid substrates (sometimes with embedded electrodes) and lack most of the constituents of the in-vivo-like environment: cell morphology, cell-to-cell interaction, and neuritic outgrowth in all directions. Cells in a brain region develop in a 3D space and interact with a complex multi-cellular environment and extracellular matrix. From this perspective, 3D networks coupled to micro-transducer arrays represent a new and powerful in-vitro model capable of better emulating in-vivo physiology. In this work, we present a new experimental paradigm constituted by 3D hippocampal networks coupled to Micro-Electrode Arrays (MEAs), and we show how the features of the recorded network dynamics differ from those of the corresponding 2D network model. Further development of the proposed 3D in-vitro model by adding embedded functionalized scaffolds might open new prospects for manipulating, stimulating, and recording neuronal activity to elucidate neurophysiological mechanisms and to design bio-hybrid microsystems. PMID:24976386

  5. A biophysical observation model for field potentials of networks of leaky integrate-and-fire neurons.

    PubMed

    Beim Graben, Peter; Rodrigues, Serafim

    2012-01-01

    We present a biophysical approach for the coupling of neural network activity as resulting from proper dipole currents of cortical pyramidal neurons to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential (DFP) that contributes to the local field potential (LFP) of a neural population. This work aligns and satisfies the widespread dipole assumption that is motivated by the "open-field" configuration of the DFP around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire (LIF) models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with an ad hoc model by Mazzoni et al. (2008), and conclude that our biophysically motivated approach yields substantial improvement.

  6. A biophysical observation model for field potentials of networks of leaky integrate-and-fire neurons

    PubMed Central

    beim Graben, Peter; Rodrigues, Serafim

    2013-01-01

    We present a biophysical approach for the coupling of neural network activity as resulting from proper dipole currents of cortical pyramidal neurons to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential (DFP) that contributes to the local field potential (LFP) of a neural population. This work aligns and satisfies the widespread dipole assumption that is motivated by the “open-field” configuration of the DFP around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire (LIF) models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with an ad hoc model by Mazzoni et al. (2008), and conclude that our biophysically motivated approach yields substantial improvement. PMID:23316157

  7. Nicotine Modulates Multiple Regions in the Limbic Stress Network Regulating Activation of Hypophysiotrophic Neurons in Hypothalamic Paraventricular Nucleus

    PubMed Central

    Yu, Guoliang; Sharp, Burt M.

    2012-01-01

    Nicotine intake affects CNS responses to stressors. We reported that nicotine self-administration (SA) augmented the hypothalamo-pituitary-adrenal (HPA) stress response, in part due to altered neurotransmission and neuropeptide expression within hypothalamic paraventricular nucleus (PVN). Limbic-PVN interactions involving medial prefrontal cortex, amygdala, bed nucleus of the stria terminalis (BST) greatly impact the HPA stress response. Therefore, we investigated the effects of nicotine SA on stress-induced neuronal activation in limbic-PVN network, using c-Fos protein immunohistochemistry and retrograde tracing. Nicotine decreased stress-induced c-Fos in prelimbic cortex (PrL), anteroventral BST (avBST), and peri-PVN; but increased c-Fos induction in medial amygdala (MeA), locus coeruleus, and PVN. Fluoro-gold (FG) was injected into avBST or PVN, since GABAergic neurons in avBST projecting to PVN corticotrophin-releasing factor (CRF) neurons relay information from both PrL glutamatergic and MeA GABAergic neurons. The stress-induced c-Fos expression in retrograde-labeled FG+ neurons was decreased in PrL by nicotine, but increased in MeA, and also reduced in avBST. Therefore, within limbic-PVN network, nicotine SA exerts selective regional effects on neuronal activation by stress. These findings expand the mechanistic framework by demonstrating altered limbic-BST-PVN interactions underlying the disinhibition of PVN CRF neurons, an essential component of the amplified HPA response to stress by nicotine. PMID:22578217

  8. Multiple mechanisms switch an electrically coupled, synaptically inhibited neuron between competing rhythmic oscillators.

    PubMed

    Gutierrez, Gabrielle J; O'Leary, Timothy; Marder, Eve

    2013-03-06

    Rhythmic oscillations are common features of nervous systems. One of the fundamental questions posed by these rhythms is how individual neurons or groups of neurons are recruited into different network oscillations. We modeled competing fast and slow oscillators connected to a hub neuron with electrical and inhibitory synapses. We explore the patterns of coordination shown in the network as a function of the electrical coupling and inhibitory synapse strengths with the help of a novel visualization method that we call the "parameterscape." The hub neuron can be switched between the fast and slow oscillators by multiple network mechanisms, indicating that a given change in network state can be achieved by degenerate cellular mechanisms. These results have importance for interpreting experiments employing optogenetic, genetic, and pharmacological manipulations to understand circuit dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. A framework for analyzing the relationship between gene expression and morphological, topological, and dynamical patterns in neuronal networks.

    PubMed

    de Arruda, Henrique Ferraz; Comin, Cesar Henrique; Miazaki, Mauro; Viana, Matheus Palhares; Costa, Luciano da Fontoura

    2015-04-30

    A key point in developmental biology is to understand how gene expression influences the morphological and dynamical patterns that are observed in living beings. In this work we propose a methodology capable of addressing this problem, based on estimating the mutual information and Pearson correlation between the intensity of gene expression and measurements of several morphological properties of the cells. A similar approach is applied to identify the effects of gene expression on the system dynamics. Neuronal networks were artificially grown on a lattice using a reference model for generating artificial neurons. The input parameters of the artificial neurons were determined according to two distinct patterns of gene expression, and the dynamical response was assessed with the integrate-and-fire model. As far as single-gene dependence is concerned, we found that the interaction between gene expression and network topology, as well as between gene expression and the dynamical response, is strongly affected by the gene expression pattern. In addition, we observed a high correlation between gene expression and some topological measurements of the neuronal network for particular patterns of gene expression. To the best of our knowledge, there are no similar analyses with which to compare. A proper understanding of the influence of gene expression requires jointly studying the morphology, topology, and dynamics of neurons. The proposed framework represents a first step towards predicting gene expression patterns from morphology and connectivity. Copyright © 2015. Published by Elsevier B.V.
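
    The two statistical ingredients of the framework can be sketched on synthetic data as follows; the gene-expression and morphology vectors below are invented placeholders, and the histogram-based mutual-information estimate is one simple choice among several.

```python
import numpy as np

# Hypothetical data standing in for the framework described above: a
# per-neuron gene-expression intensity and one morphological measure
# (e.g. number of dendritic branches).  Values are synthetic.
rng = np.random.default_rng(0)
expression = rng.gamma(shape=2.0, scale=1.0, size=500)
morphology = 3.0 * expression + rng.normal(0.0, 1.0, size=500)

# Pearson correlation between expression and the morphological measure.
pearson = np.corrcoef(expression, morphology)[0, 1]

# Mutual information estimated from a binned joint distribution.
joint, _, _ = np.histogram2d(expression, morphology, bins=10)
p_xy = joint / joint.sum()
p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)
nonzero = p_xy > 0
mi = np.sum(p_xy[nonzero] * np.log(p_xy[nonzero] / (p_x @ p_y)[nonzero]))

print(f"Pearson correlation: {pearson:.2f}")
print(f"Mutual information:  {mi:.2f} nats")
```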

  10. Deep learning and shapes similarity for joint segmentation and tracing single neurons in SEM images

    NASA Astrophysics Data System (ADS)

    Rao, Qiang; Xiao, Chi; Han, Hua; Chen, Xi; Shen, Lijun; Xie, Qiwei

    2017-02-01

    Extracting the structure of single neurons is critical for understanding how they function within neural circuits. Recent developments in microscopy techniques, together with the widely recognized need for openness and standardization, provide a community resource for automated reconstruction of the dendritic and axonal morphology of single neurons. In order to examine the fine structure of neurons, we use Automated Tape-collecting Ultra Microtome Scanning Electron Microscopy (ATUM-SEM) to obtain image sequences of serial sections of animal brain tissue densely packed with neurons. Unlike other neuron reconstruction methods, we propose a method that enhances the SEM images by detecting neuronal membranes with a deep convolutional neural network (DCNN) and segments single neurons by active contours with group shape similarity. We couple segmentation and tracing so that they interact through alternating iterations: tracing aids the selection of candidate region patches for active-contour segmentation, while segmentation provides the neuron's geometrical features, which improve the robustness of tracing. The tracing model relies mainly on these geometrical features and is updated after the neuron has been segmented on each subsequent section. Our method enables the reconstruction of neurons of the Drosophila mushroom body, which is cut into serial sections and imaged under SEM, and provides an elementary step towards the full reconstruction of neuronal networks.

  11. Response of Cultured Neuronal Network Activity After High-Intensity Power Frequency Magnetic Field Exposure

    PubMed Central

    Saito, Atsushi; Takahashi, Masayuki; Makino, Kei; Suzuki, Yukihisa; Jimbo, Yasuhiko; Nakasono, Satoshi

    2018-01-01

    High-intensity, low-frequency (1–100 kHz) time-varying electromagnetic fields stimulate the human body through excitation of the nervous system. In the power frequency range (50/60 Hz), a frequency-dependent threshold of neuronal modulation induced by external electric fields in cultured neuronal networks is used as one of the biological indicators in international guidelines; however, the threshold of magnetic field-induced neuronal modulation has not been elucidated. In this study, we exposed rat brain-derived neuronal networks to a high-intensity power frequency magnetic field (hPF-MF) and evaluated the modulation of synchronized bursting activity using a multi-electrode array (MEA)-based extracellular recording technique. After short-term hPF-MF exposure (50–400 mT root-mean-square (rms), 50 Hz, sinusoidal wave, 6 s), synchronized bursting activity was increased in the 400 mT-exposed group, whereas no change was observed in the 50–200 mT-exposed groups. To clarify the mechanisms of the neuronal response induced by 400 mT hPF-MF exposure, we repeated the evaluation after blocking inhibitory synapses with bicuculline methiodide (BMI); an increase in bursting activity was observed with BMI application, and the response to 400 mT hPF-MF exposure disappeared, suggesting that the response to hPF-MF exposure involved inhibitory input. Next, we screened for inhibitory pacemaker-like neuronal activity showing autonomous 4–10 Hz firing under CNQX and D-AP5 application, and confirmed that this activity was reduced after 400 mT hPF-MF exposure. Comparison of these experimental results with estimated values of the induced electric field (E-field) in the culture medium revealed that the change in synchronized bursting activity occurred above 0.3 V/m, which is equivalent to the findings of a previous study that used external electric fields. In addition, the results suggested that the

  12. Environmentally induced amplitude death and firing provocation in large-scale networks of neuronal systems

    NASA Astrophysics Data System (ADS)

    Pankratova, Evgeniya V.; Kalyakulina, Alena I.

    2016-12-01

    We study the dynamics of multielement neuronal systems taking into account both the direct interaction between the cells via linear coupling and nondiffusive cell-to-cell communication via common environment. For the cells exhibiting individual bursting behavior, we have revealed the dependence of the network activity on its scale. Particularly, we show that small-scale networks demonstrate the inability to maintain complicated oscillations: for a small number of elements in an ensemble, the phenomenon of amplitude death is observed. The existence of threshold network scales and mechanisms causing firing in artificial and real multielement neural networks, as well as their significance for biological applications, are discussed.

  13. Reverse engineering a mouse embryonic stem cell-specific transcriptional network reveals a new modulator of neuronal differentiation.

    PubMed

    De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego

    2013-01-01

    Gene expression profiles can be used to infer previously unknown transcriptional regulatory interaction among thousands of genes, via systems biology 'reverse engineering' approaches. We 'reverse engineered' an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression ('hubs'). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central 'hub' of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neurons marker Vglut2 and a strong down-regulation of the GABAergic neurons marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13, involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells possibly by epigenetic-mediated transcriptional regulation.

  14. GaAs Optoelectronic Integrated-Circuit Neurons

    NASA Technical Reports Server (NTRS)

    Lin, Steven H.; Kim, Jae H.; Psaltis, Demetri

    1992-01-01

    Monolithic GaAs optoelectronic integrated circuits developed for use as artificial neurons. Neural-network computer contains planar arrays of optoelectronic neurons, and variable synaptic connections between neurons effected by diffraction of light from volume hologram in photorefractive material. Basic principles of neural-network computers explained more fully in "Optoelectronic Integrated Circuits For Neural Networks" (NPO-17652). In present circuits, devices replaced by metal/semiconductor field effect transistors (MESFET's), which consume less power.

  15. Intersection of diverse neuronal genomes and neuropsychiatric disease: The Brain Somatic Mosaicism Network.

    PubMed

    McConnell, Michael J; Moran, John V; Abyzov, Alexej; Akbarian, Schahram; Bae, Taejeong; Cortes-Ciriano, Isidro; Erwin, Jennifer A; Fasching, Liana; Flasch, Diane A; Freed, Donald; Ganz, Javier; Jaffe, Andrew E; Kwan, Kenneth Y; Kwon, Minseok; Lodato, Michael A; Mills, Ryan E; Paquola, Apua C M; Rodin, Rachel E; Rosenbluh, Chaggai; Sestan, Nenad; Sherman, Maxwell A; Shin, Joo Heon; Song, Saera; Straub, Richard E; Thorpe, Jeremy; Weinberger, Daniel R; Urban, Alexander E; Zhou, Bo; Gage, Fred H; Lehner, Thomas; Senthil, Geetha; Walsh, Christopher A; Chess, Andrew; Courchesne, Eric; Gleeson, Joseph G; Kidd, Jeffrey M; Park, Peter J; Pevsner, Jonathan; Vaccarino, Flora M

    2017-04-28

    Neuropsychiatric disorders have a complex genetic architecture. Human genetic population-based studies have identified numerous heritable sequence and structural genomic variants associated with susceptibility to neuropsychiatric disease. However, these germline variants do not fully account for disease risk. During brain development, progenitor cells undergo billions of cell divisions to generate the ~80 billion neurons in the brain. The failure to accurately repair DNA damage arising during replication, transcription, and cellular metabolism amid this dramatic cellular expansion can lead to somatic mutations. Somatic mutations that alter subsets of neuronal transcriptomes and proteomes can, in turn, affect cell proliferation and survival and lead to neurodevelopmental disorders. The long life span of individual neurons and the direct relationship between neural circuits and behavior suggest that somatic mutations in small populations of neurons can significantly affect individual neurodevelopment. The Brain Somatic Mosaicism Network has been founded to study somatic mosaicism both in neurotypical human brains and in the context of complex neuropsychiatric disorders. Copyright © 2017, American Association for the Advancement of Science.

  16. Computational model of electrically coupled, intrinsically distinct pacemaker neurons.

    PubMed

    Soto-Treviño, Cristina; Rabbah, Pascale; Marder, Eve; Nadim, Farzan

    2005-07-01

    Electrical coupling between neurons with similar properties is often studied. Nonetheless, electrical coupling between neurons with widely different intrinsic properties also occurs, but its role is less well understood. Inspired by the pacemaker group of the crustacean pyloric network, we developed a multicompartment, conductance-based model of a small network of intrinsically distinct, electrically coupled neurons. In the pyloric network, a small intrinsically bursting neuron, through gap junctions, drives 2 larger, tonically spiking neurons to reliably burst in-phase with it. Each model neuron has 2 compartments, one responsible for spike generation and the other for producing a slow, large-amplitude oscillation. We illustrate how these compartments interact and determine the dynamics of the model neurons. Our model captures the dynamic oscillation range measured from the isolated and coupled biological neurons. At the network level, we explore the range of coupling strengths for which synchronous bursting oscillations are possible. The spatial segregation of ionic currents significantly enhances the ability of the 2 neurons to burst synchronously, and the oscillation range of the model pacemaker network depends not only on the strength of the electrical synapse but also on the identity of the neuron receiving inputs. We also compare the activity of the electrically coupled, distinct neurons with that of a network of coupled identical bursting neurons. For small to moderate coupling strengths, the network of identical elements, when receiving asymmetrical inputs, can have a smaller dynamic range of oscillation than that of its constituent neurons in isolation.

  17. Simplicity and efficiency of integrate-and-fire neuron models.

    PubMed

    Plesser, Hans E; Diesmann, Markus

    2009-02-01

    Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
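
    A minimal sketch of the 'exact integration' idea mentioned above (not the VSSN model nor the code of any particular simulator): for a leaky integrate-and-fire neuron with an exponential synaptic current, the subthreshold dynamics are linear, so the state can be advanced over each time step with a precomputed matrix exponential instead of summing over the history of input spikes. Parameters and input spike times are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

tau_m, tau_s, R = 20.0, 2.0, 1.0             # ms, ms, membrane resistance (arb. units)
V_th, V_reset = 15.0, 0.0                    # mV
dt = 0.1                                     # ms

# State y = (I_syn, V):  dI/dt = -I/tau_s ;  dV/dt = (R*I - V)/tau_m.
A = np.array([[-1.0 / tau_s, 0.0],
              [R / tau_m, -1.0 / tau_m]])
P = expm(A * dt)                             # exact propagator for one time step

y = np.zeros(2)
w = 100.0                                    # synaptic jump per input spike
input_spikes = {50, 60, 70, 300, 310, 320}   # input arrival steps (assumed)
out_spikes = []
for step in range(1000):
    y = P @ y                                # exact subthreshold update
    if step in input_spikes:
        y[0] += w                            # delta-jump of the synaptic current
    if y[1] >= V_th:
        out_spikes.append(round(step * dt, 1))
        y[1] = V_reset
print("output spike times (ms):", out_spikes)
```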

  18. Medical image processing using neural networks based on multivalued and universal binary neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.

    1998-06-01

    Cellular neural networks (CNNs) have become a very effective means of solving many kinds of image processing problems. CNNs based on multi-valued neurons (CNN-MVN) and CNNs based on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVNs and UBNs are neurons with complex-valued weights and complex internal arithmetic. Their main feature is the ability to implement an arbitrary mapping between inputs and output (MVN) and an arbitrary, not only threshold, Boolean function (UBN). A great advantage of CNNs is the ability to implement any linear filter, and many non-linear filters, in the spatial domain. Together with noise removal, CNNs can implement filters that amplify high and medium frequencies. Such filters are very effective for image enhancement and for extracting details against a complex background. CNNs therefore make it possible to organize the entire processing pipeline, from filtering to extraction of important details. The organization of this process for medical image processing is considered in this paper. Major attention is devoted to the processing of x-ray and ultrasound images corresponding to different oncological (or oncology-related) pathologies. Additionally, we consider a new neural network structure for the differential diagnosis of breast cancer.
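
    To make the MVN idea concrete, here is a minimal sketch, with invented weights, of a single k-valued neuron of the type used in CNN-MVN: inputs and output are k-th roots of unity, weights are complex, and the activation maps the argument of the weighted sum onto one of k sectors of the unit circle.

```python
import numpy as np

k = 8                                          # number of output values

def mvn_activation(z, k):
    """Map the complex weighted sum z to the k-th root of unity whose
    sector [2*pi*j/k, 2*pi*(j+1)/k) contains arg(z)."""
    j = int(np.floor(k * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
    return np.exp(2j * np.pi * j / k)

rng = np.random.default_rng(1)
weights = rng.normal(size=5) + 1j * rng.normal(size=5)   # complex-valued weights
# Inputs encoded as k-th roots of unity (e.g. quantized gray levels).
levels = rng.integers(0, k, size=5)
inputs = np.exp(2j * np.pi * levels / k)

z = np.dot(weights, inputs)                    # complex weighted sum
y = mvn_activation(z, k)
print("weighted-sum argument:", np.angle(z))
print("neuron output (root of unity):", y)
```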

  19. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons

    PubMed Central

    2012-01-01

    We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified version, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons’ initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g. a bifurcation analysis. Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80. PMID:22657695
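
    In schematic form, the limit equations described here are of the McKean-Vlasov type: each neuron's state X_t obeys a stochastic differential equation whose drift involves the law of the process itself (a generic template, not the exact equations of the paper):

```latex
dX_t = \Bigl[ f(X_t) + \int b(X_t, y)\, \mu_t(dy) \Bigr] dt + \sigma(X_t)\, dW_t,
\qquad \mu_t = \mathrm{Law}(X_t),
```

    with the associated McKean-Vlasov-Fokker-Planck equation governing the density of the law; propagation of chaos means that, in the infinite-network limit, any fixed finite set of neurons becomes independent, each neuron's law within a population being this common distribution.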

  20. Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.

    PubMed

    Poon, C S

    1991-01-01

    A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.
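
    The optimization hypothesis can be illustrated by a simple grid search over ventilation levels, with a total cost composed of a chemical term (penalizing deviation of arterial PCO2 from a set point) and a mechanical term (penalizing the work of breathing). The gas-exchange relation, both cost terms, and all constants below are illustrative placeholders rather than the model's actual formulation.

```python
import numpy as np

def pa_co2(ventilation, vco2, k=863.0):
    """Steady-state arterial PCO2 (mmHg) for alveolar ventilation (L/min)
    and metabolic CO2 production (L/min), schematic relation."""
    return k * vco2 / ventilation

def total_cost(ventilation, vco2, a=1.0, b=0.01, set_point=40.0):
    chemical = a * (pa_co2(ventilation, vco2) - set_point) ** 2   # chemoreflex cost
    mechanical = b * ventilation ** 2                             # work of breathing
    return chemical + mechanical

ventilations = np.linspace(2.0, 60.0, 5000)
for vco2 in (0.25, 0.5, 1.0):                 # rest -> moderate exercise
    v_opt = ventilations[np.argmin(total_cost(ventilations, vco2))]
    print(f"VCO2 {vco2:.2f} L/min -> ventilation {v_opt:5.1f} L/min, "
          f"PaCO2 {pa_co2(v_opt, vco2):5.1f} mmHg")
```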

  1. One-to-one neuron-electrode interfacing.

    PubMed

    Greenbaum, Alon; Anava, Sarit; Ayali, Amir; Shein, Mark; David-Pur, Moshe; Ben-Jacob, Eshel; Hanein, Yael

    2009-09-15

    The question of neuronal network development and organization is a principal one, which is closely related to aspects of neuronal and network form-function interactions. In-vitro two-dimensional neuronal cultures have proved to be an attractive and successful model for the study of these questions. Research is constrained, however, by the search for techniques aimed at culturing stable networks, whose electrical activity can be reliably and consistently monitored. A simple approach to form small interconnected neuronal circuits while achieving one-to-one neuron-electrode interfacing is presented. Locust neurons were cultured on a novel bio-chip consisting of carbon-nanotube multi-electrode-arrays. The cells self-organized to position themselves in close proximity to the bio-chip electrodes. The organization of the cells on the electrodes was analyzed using time lapse microscopy, fluorescence imaging and scanning electron microscopy. Electrical recordings from well-identified cells are presented and discussed. The unique properties of the bio-chip and the specific neuron-nanotube interactions, together with the use of relatively large insect ganglion cells, allowed long-term stabilization (as long as 10 days) of predefined neural network topology as well as high fidelity electrical recording of individual neuron firing. This novel preparation opens ample opportunity for future investigation into key neurobiological questions and principles.

  2. Nicotine modulates multiple regions in the limbic stress network regulating activation of hypophysiotrophic neurons in hypothalamic paraventricular nucleus.

    PubMed

    Yu, Guoliang; Sharp, Burt M

    2012-08-01

    Nicotine intake affects CNS responses to stressors. We reported that nicotine self-administration (SA) augmented the hypothalamo-pituitary-adrenal (HPA) stress response, in part because of the altered neurotransmission and neuropeptide expression within hypothalamic paraventricular nucleus (PVN). Limbic-PVN interactions involving medial prefrontal cortex, amygdala, and bed nucleus of the stria terminalis (BST) greatly impact the HPA stress response. Therefore, we investigated the effects of nicotine SA on stress-induced neuronal activation in limbic-PVN network, using c-Fos protein immunohistochemistry and retrograde tracing. Nicotine decreased stress-induced c-Fos in prelimbic cortex (PrL), anteroventral BST (avBST), and peri-PVN, but increased c-Fos induction in medial amygdala (MeA), locus coeruleus, and PVN. Fluoro-gold (FG) was injected into avBST or PVN, as GABAergic neurons in avBST projecting to PVN corticotrophin-releasing factor neurons relay information from both PrL glutamatergic and MeA GABAergic neurons. The stress-induced c-Fos expression in retrograde-labeled FG+ neurons was decreased in PrL by nicotine, but increased in MeA, and also reduced in avBST. Therefore, within limbic-PVN network, nicotine SA exerts selective regional effects on neuronal activation by stress. These findings expand the mechanistic framework by demonstrating altered limbic-BST-PVN interactions underlying the disinhibition of PVN corticotrophin-releasing factor neurons, an essential component of the amplified HPA response to stress by nicotine. © 2012 The Authors. Journal of Neurochemistry © 2012 International Society for Neurochemistry.

  3. An online supervised learning method based on gradient descent for spiking neurons.

    PubMed

    Xu, Yan; Yang, Jing; Zhong, Shuiming

    2017-09-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by precise firing times of spikes. The gradient-descent-based (GDB) learning methods are widely used and verified in the current research. Although the existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs an error function and calculates the adjustment of synaptic weights as soon as the neurons emit a spike during their running process. In this paper, we analyze and synthesize desired and actual output spikes to select appropriate input spikes for the calculation of the weight adjustment. The experimental results show that our method clearly improves learning performance compared with the offline learning manner and has a certain advantage in learning accuracy compared with other learning methods. This stronger learning ability gives the method a large pattern storage capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
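
    The abstract does not give the update equations, so the following is only a ReSuMe-flavoured stand-in for an online spike-sequence learning rule: weights are adjusted immediately whenever a desired or an actual output spike occurs, in proportion to decaying presynaptic traces. The neuron model, trace dynamics, learning rate and spike times below are assumptions for illustration.

      import numpy as np

      def online_spike_learning(input_spikes, desired_spikes, n_steps=200, eta=0.02, tau=5.0):
          """Toy online spike-sequence learning (a ReSuMe-flavoured sketch, not the paper's exact rule).
          input_spikes: one set of integer spike times per input synapse.
          desired_spikes: set of integer time steps at which the output neuron should fire."""
          n_in = len(input_spikes)
          w = np.full(n_in, 0.1)
          trace = np.zeros(n_in)                 # decaying presynaptic traces (eligibility)
          v, v_th, decay = 0.0, 1.0, np.exp(-1.0 / tau)
          actual_spikes = []
          for t in range(n_steps):
              trace = trace * decay + np.array([t in s for s in input_spikes], dtype=float)
              v = v * decay + w @ trace
              fired = v >= v_th
              if fired:
                  v = 0.0
                  actual_spikes.append(t)
              # weights are adjusted online, as soon as a desired or an actual output spike occurs
              if t in desired_spikes:
                  w += eta * trace               # push the neuron towards firing at desired times
              if fired:
                  w -= eta * trace               # suppress output spikes at undesired times
          return w, actual_spikes

      inputs = [{5, 40, 90}, {20, 60, 120}, {10, 70, 150}]
      weights, out_spikes = online_spike_learning(inputs, desired_spikes={25, 75, 130})
      print(out_spikes)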

  4. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    PubMed

    Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent

    2015-08-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
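
    For concreteness, a minimal pair-based version of the additive, Hebbian STDP rule referred to above: the weight change depends only on the pre-post spike-time difference, with hard bounds keeping each weight in a fixed range. The time constants, amplitudes and bounds are illustrative assumptions rather than the paper's values.

      import numpy as np

      def stdp_dw(delta_t, a_plus=0.005, a_minus=0.005, tau_plus=20.0, tau_minus=20.0):
          """Additive, Hebbian pair-based STDP kernel (illustrative parameters).
          delta_t = t_post - t_pre; positive delays potentiate, negative delays depress."""
          return np.where(delta_t >= 0,
                          a_plus * np.exp(-delta_t / tau_plus),
                          -a_minus * np.exp(delta_t / tau_minus))

      def apply_stdp(weights, spike_times):
          """Accumulate pairwise STDP updates in a recurrent network.
          weights[i, j] is the synapse from neuron j onto neuron i; spike_times[k] lists neuron k's spikes."""
          for i, post in enumerate(spike_times):
              for j, pre in enumerate(spike_times):
                  if i == j:
                      continue
                  for tp in post:
                      for tq in pre:
                          weights[i, j] = np.clip(weights[i, j] + stdp_dw(tp - tq), 0.0, 1.0)
          return weights

      W = np.full((3, 3), 0.5)
      spikes = [[10.0, 50.0], [12.0, 48.0], [30.0]]
      print(apply_stdp(W, spikes))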

  5. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses

    PubMed Central

    Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent

    2015-01-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure. PMID:26291697

  6. Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum.

    PubMed

    Ponzi, Adam; Wickens, Jeff

    2010-04-28

    The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
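
    A heavily reduced sketch of the ingredients listed above, namely sparse random asymmetric inhibitory connectivity, weak synapses, and constant excitatory drive that keeps some cells above threshold, using leaky integrate-and-fire units instead of the authors' striatal neuron model. All parameter values are illustrative assumptions; the final lines only indicate how one would look for slow, correlated firing-rate fluctuations.

      import numpy as np

      rng = np.random.default_rng(1)
      N, p, T, dt = 300, 0.1, 4000, 1.0          # neurons, connection probability, steps, ms per step
      tau_m, v_th, v_reset = 20.0, 1.0, 0.0
      g_inh, tau_syn = 0.05, 10.0                # weak inhibitory synapses
      I_ext = 1.0 + 0.1 * rng.random(N)          # constant cortical drive; some cells sit above threshold

      W = (rng.random((N, N)) < p) * g_inh       # sparse random asymmetric inhibitory connectivity
      np.fill_diagonal(W, 0.0)

      v = rng.random(N)
      s = np.zeros(N)                            # inhibitory synaptic traces
      spikes_rec = np.zeros((T, N))

      for t in range(T):
          v += dt / tau_m * (-v + I_ext - W @ s)
          fired = v >= v_th
          v[fired] = v_reset
          s += -dt / tau_syn * s + fired
          spikes_rec[t] = fired

      # smooth spike trains with a wide window to expose slow (hundreds of ms) rate fluctuations
      window = 200
      slow = np.array([np.convolve(spikes_rec[:, i], np.ones(window) / window, mode="valid") for i in range(N)])
      print(np.corrcoef(slow[:5]))               # the model predicts positive correlations within an assembly, negative across assemblies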

  7. Relationship between neuronal network architecture and naming performance in temporal lobe epilepsy: A connectome based approach using machine learning.

    PubMed

    Munsell, B C; Wu, G; Fridriksson, J; Thayer, K; Mofrad, N; Desisto, N; Shen, D; Bonilha, L

    2017-09-09

    Impaired confrontation naming is a common symptom of temporal lobe epilepsy (TLE). The neurobiological mechanisms underlying this impairment are poorly understood but may indicate a structural disorganization of broadly distributed neuronal networks that support naming ability. Importantly, naming is frequently impaired in other neurological disorders, and by contrasting the neuronal structures supporting naming in TLE with other diseases, it will become possible to elucidate the common systems supporting naming. We aimed to evaluate the neuronal networks that support naming in TLE by using a machine learning algorithm intended to predict naming performance in subjects with medication refractory TLE using only the structural brain connectome reconstructed from diffusion tensor imaging. A connectome-based prediction framework was developed using network properties from anatomically defined brain regions across the entire brain, which were used in a multi-task machine learning algorithm followed by support vector regression. Nodal eigenvector centrality, a measure of regional network integration, predicted approximately 60% of the variance in naming. The nodes with the highest regression weight were bilaterally distributed among perilimbic sub-networks involving mainly the medial and lateral temporal lobe regions. In the context of emerging evidence regarding the role of large structural networks that support language processing, our results suggest intact naming relies on the integration of sub-networks, as opposed to being dependent on isolated brain areas. In the case of TLE, these sub-networks may be disproportionately indicative of naming processes that are dependent on semantic integration from memory and lexical retrieval, as opposed to multi-modal perception or motor speech production. Copyright © 2017. Published by Elsevier Inc.
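
    A stripped-down sketch of that prediction pipeline: nodal eigenvector centrality of each subject's structural connectome as features, followed by support vector regression with cross-validation. It omits the multi-task feature-selection stage described above, uses synthetic placeholder data, and assumes a recent networkx (from_numpy_array) and scikit-learn.

      import numpy as np
      import networkx as nx
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_predict

      def centrality_features(connectomes):
          """Nodal eigenvector centrality for each subject's weighted structural connectome."""
          feats = []
          for A in connectomes:
              G = nx.from_numpy_array(A)
              c = nx.eigenvector_centrality_numpy(G, weight="weight")
              feats.append([c[i] for i in range(A.shape[0])])
          return np.array(feats)

      # synthetic stand-in data: 40 subjects, 90-region connectomes, and their naming scores
      rng = np.random.default_rng(0)
      connectomes = [np.abs(rng.normal(size=(90, 90))) for _ in range(40)]
      connectomes = [(A + A.T) / 2 for A in connectomes]
      naming_scores = rng.normal(size=40)

      X = centrality_features(connectomes)
      y_hat = cross_val_predict(SVR(kernel="linear"), X, naming_scores, cv=5)
      print(np.corrcoef(y_hat, naming_scores)[0, 1] ** 2)    # rough estimate of variance explained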

  8. Structure-function analysis of genetically defined neuronal populations.

    PubMed

    Groh, Alexander; Krieger, Patrik

    2013-10-01

    Morphological and functional classification of individual neurons is a crucial aspect of the characterization of neuronal networks. Systematic structural and functional analysis of individual neurons is now possible using transgenic mice with genetically defined neurons that can be visualized in vivo or in brain slice preparations. Genetically defined neurons are useful for studying a particular class of neurons and also for more comprehensive studies of the neuronal content of a network. Specific subsets of neurons can be identified by fluorescence imaging of enhanced green fluorescent protein (eGFP) or another fluorophore expressed under the control of a cell-type-specific promoter. The advantages of such genetically defined neurons are not only their homogeneity and suitability for systematic descriptions of networks, but also their tremendous potential for cell-type-specific manipulation of neuronal networks in vivo. This article describes a selection of procedures for visualizing and studying the anatomy and physiology of genetically defined neurons in transgenic mice. We provide information about basic equipment, reagents, procedures, and analytical approaches for obtaining three-dimensional (3D) cell morphologies and determining the axonal input and output of genetically defined neurons. We exemplify with genetically labeled cortical neurons, but the procedures are applicable to other brain regions with little or no alterations.

  9. Neuroprotective Role of Gap Junctions in a Neuron Astrocyte Network Model.

    PubMed

    Huguet, Gemma; Joglekar, Anoushka; Messi, Leopold Matamba; Buckalew, Richard; Wong, Sarah; Terman, David

    2016-07-26

    A detailed biophysical model for a neuron/astrocyte network is developed to explore mechanisms responsible for the initiation and propagation of cortical spreading depolarizations and the role of astrocytes in maintaining ion homeostasis, thereby preventing these pathological waves. Simulations of the model illustrate how properties of spreading depolarizations, such as wave speed and duration of depolarization, depend on several factors, including the neuron and astrocyte Na(+)-K(+) ATPase pump strengths. In particular, we consider the neuroprotective role of astrocyte gap junction coupling. The model demonstrates that a syncytium of electrically coupled astrocytes can maintain a physiological membrane potential in the presence of an elevated extracellular K(+) concentration and efficiently distribute the excess K(+) across the syncytium. This provides an effective neuroprotective mechanism for delaying or preventing the initiation of spreading depolarizations. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  10. Examining Neuronal Connectivity and Its Role in Learning and Memory

    NASA Astrophysics Data System (ADS)

    Gala, Rohan

    Learning and long-term memory formation are accompanied by changes in the patterns and weights of synaptic connections in the underlying neuronal network. However, the fundamental rules that drive connectivity changes, and the precise structure-function relationships within neuronal networks, remain elusive. Technological improvements over the last few decades have enabled the observation of large but specific subsets of neurons and their connections in unprecedented detail. Devising robust and automated computational methods is critical to distill information from ever-increasing volumes of raw experimental data. Moreover, statistical models and theoretical frameworks are required to interpret the data and assemble evidence into an understanding of brain function. In this thesis, I first describe computational methods to reconstruct connectivity based on light microscopy imaging experiments. Next, I use these methods to quantify structural changes in connectivity based on in vivo time-lapse imaging experiments. Finally, I present a theoretical model of associative learning that can explain many stereotypical features of experimentally observed connectivity.

  11. Stochastic multiresonance in coupled excitable FHN neurons

    NASA Astrophysics Data System (ADS)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2018-04-01

    In this paper, the effects of noise on Watts-Strogatz small-world neuronal networks stimulated by a subthreshold signal are investigated. Numerical simulations surprisingly reveal that there exist several optimal noise intensities at which the subthreshold signal can be detected efficiently, indicating the occurrence of stochastic multiresonance in the studied neuronal networks. Moreover, the occurrence of stochastic multiresonance is found to be closely related to the period Te of the subthreshold signal and the noise-induced mean period T0 of the neuronal networks. In detail, noise can induce the neuronal networks to generate stochastic resonance M times if Te is not very large and falls into the interval (M × T0, (M + 1) × T0), with M a positive integer. In real neuronal systems, subthreshold signal detection is very meaningful, so the results obtained here have implications for detecting subthreshold signals and propagating neuronal information in neuronal systems.
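
    The interval rule stated above can be transcribed directly: the predicted number of resonance peaks is the integer M for which Te lies in (M × T0, (M + 1) × T0). A small helper, purely a restatement of that rule (boundary cases are not specified in the abstract):

      def resonance_count(T_e, T_0):
          """Predicted number of optimal noise intensities when the signal period T_e
          falls in the interval (M*T_0, (M+1)*T_0); returns M = floor(T_e / T_0)."""
          if T_e <= 0 or T_0 <= 0:
              raise ValueError("periods must be positive")
          return int(T_e // T_0)

      # example: signal period 75 ms and noise-induced mean period 20 ms -> 3 resonances expected
      print(resonance_count(75.0, 20.0))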

  12. Overexpression of cypin alters dendrite morphology, single neuron activity, and network properties via distinct mechanisms

    NASA Astrophysics Data System (ADS)

    Rodríguez, Ana R.; O'Neill, Kate M.; Swiatkowski, Przemyslaw; Patel, Mihir V.; Firestein, Bonnie L.

    2018-02-01

    Objective. This study investigates the effect that overexpression of cytosolic PSD-95 interactor (cypin), a regulator of synaptic PSD-95 protein localization and a core regulator of dendrite branching, exerts on the electrical activity of rat hippocampal neurons and networks. Approach. We cultured rat hippocampal neurons and used lipid-mediated transfection and lentiviral gene transfer to achieve high levels of cypin or cypin mutant (cypinΔPDZ PSD-95 non-binding) expression cellularly and network-wide, respectively. Main results. Our analysis revealed that although overexpression of cypin and cypinΔPDZ increase dendrite numbers and decrease spine density, cypin and cypinΔPDZ distinctly regulate neuronal activity. At the single cell level, cypin promotes decreases in bursting activity while cypinΔPDZ reduces sEPSC frequency and further decreases bursting compared to cypin. At the network level, by using the Fano factor as a measure of spike count variability, cypin overexpression results in an increase in variability of spike count, and this effect is abolished when cypin cannot bind PSD-95. This variability is also dependent on baseline activity levels and on mean spike rate over time. Finally, our spike sorting data show that overexpression of cypin results in a more complex distribution of spike waveforms and that binding to PSD-95 is essential for this complexity. Significance. Our data suggest that dendrite morphology does not play a major role in cypin action on electrical activity.
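
    The Fano factor used above as the spike-count variability measure is simply the variance-to-mean ratio of spike counts in fixed bins; a minimal sketch follows (the bin size and the Poisson-like test train are illustrative assumptions).

      import numpy as np

      def fano_factor(spike_times, t_stop, bin_size):
          """Fano factor of spike counts: variance over mean of counts in fixed-width bins."""
          edges = np.arange(0.0, t_stop + bin_size, bin_size)
          counts, _ = np.histogram(spike_times, bins=edges)
          mean = counts.mean()
          return counts.var() / mean if mean > 0 else np.nan

      rng = np.random.default_rng(0)
      train = np.sort(rng.uniform(0.0, 60.0, size=600))      # roughly Poisson-like train at ~10 Hz
      print(fano_factor(train, t_stop=60.0, bin_size=1.0))   # close to 1 for Poisson-like counts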

  13. Reverse engineering a mouse embryonic stem cell-specific transcriptional network reveals a new modulator of neuronal differentiation

    PubMed Central

    De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego

    2013-01-01

    Gene expression profiles can be used to infer previously unknown transcriptional regulatory interaction among thousands of genes, via systems biology ‘reverse engineering’ approaches. We ‘reverse engineered’ an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression (‘hubs’). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central ‘hub’ of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neurons marker Vglut2 and a strong down-regulation of the GABAergic neurons marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13, involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells possibly by epigenetic-mediated transcriptional regulation. PMID:23180766

  14. Mechanisms and neuronal networks involved in reactive and proactive cognitive control of interference in working memory.

    PubMed

    Irlbacher, Kerstin; Kraft, Antje; Kehrer, Stefanie; Brandt, Stephan A

    2014-10-01

    Cognitive control can be reactive or proactive in nature. Reactive control mechanisms, which support the resolution of interference, start after its onset. Conversely, proactive control involves the anticipation and prevention of interference prior to its occurrence. The interrelation of both types of cognitive control is currently under debate: Are they mediated by different neuronal networks? Or are there neuronal structures that have the potential to act in a proactive as well as in a reactive manner? This review illustrates the way in which integrating knowledge gathered from behavioral studies, functional imaging, and human electroencephalography proves useful in answering these questions. We focus on studies that investigate interference resolution at the level of working memory representations. In summary, different mechanisms are instrumental in supporting reactive and proactive control. Distinct neuronal networks are involved, though some brain regions, especially pre-SMA, possess functions that are relevant to both control modes. Therefore, activation of these brain areas could be observed in reactive, as well as proactive control, but at different times during information processing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Inference of topology and the nature of synapses, and the flow of information in neuronal networks

    NASA Astrophysics Data System (ADS)

    Borges, F. S.; Lameu, E. L.; Iarosz, K. C.; Protachevicz, P. R.; Caldas, I. L.; Viana, R. L.; Macau, E. E. N.; Batista, A. M.; Baptista, M. S.

    2018-02-01

    The characterization of neuronal connectivity is one of the most important matters in neuroscience. In this work, we show that a recently proposed informational quantity, the causal mutual information, employed with an appropriate methodology, can be used not only to correctly infer the direction of the underlying physical synapses, but also to identify their excitatory or inhibitory nature, considering easy to handle and measure bivariate time series. The success of our approach relies on a surprising property found in neuronal networks by which nonadjacent neurons do "understand" each other (positive mutual information), however, this exchange of information is not capable of causing effect (zero transfer entropy). Remarkably, inhibitory connections, responsible for enhancing synchronization, transfer more information than excitatory connections, known to enhance entropy in the network. We also demonstrate that our methodology can be used to correctly infer directionality of synapses even in the presence of dynamic and observational Gaussian noise, and is also successful in providing the effective directionality of intermodular connectivity, when only mean fields can be measured.
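
    The contrast drawn above between neurons that merely share information and neurons that cause an effect is usually quantified with mutual information versus transfer entropy. The sketch below is a generic plug-in estimator of history-length-one transfer entropy between binary spike trains, not the causal mutual information methodology of the paper; the driven-pair example is synthetic.

      import numpy as np

      def transfer_entropy(source, target, lag=1):
          """Transfer entropy TE(source -> target) for binary (0/1) spike trains, history length 1:
          TE = H(T_t | T_{t-1}) - H(T_t | T_{t-1}, S_{t-1}), estimated from joint histograms (bits)."""
          s_past = np.asarray(source[:-lag], dtype=int)
          t_past = np.asarray(target[:-lag], dtype=int)
          t_now = np.asarray(target[lag:], dtype=int)

          def joint_entropy(*vars_):
              codes = np.ravel_multi_index(vars_, dims=[2] * len(vars_))
              p = np.bincount(codes, minlength=2 ** len(vars_)) / len(codes)
              p = p[p > 0]
              return -(p * np.log2(p)).sum()

          h_given_past = joint_entropy(t_now, t_past) - joint_entropy(t_past)
          h_given_past_src = joint_entropy(t_now, t_past, s_past) - joint_entropy(t_past, s_past)
          return h_given_past - h_given_past_src

      rng = np.random.default_rng(0)
      src = (rng.random(5000) < 0.1).astype(int)
      tgt = np.zeros(5000, dtype=int)
      tgt[1:] = ((rng.random(4999) < 0.05) | (src[:-1] & (rng.random(4999) < 0.6))).astype(int)
      print(transfer_entropy(src, tgt), transfer_entropy(tgt, src))   # forward TE should exceed reverse TE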

  16. Evoking prescribed spike times in stochastic neurons

    NASA Astrophysics Data System (ADS)

    Doose, Jens; Lindner, Benjamin

    2017-09-01

    Single cell stimulation in vivo is a powerful tool to investigate the properties of single neurons and their functionality in neural networks. We present a method to determine a cell-specific stimulus that reliably evokes a prescribed spike train with high temporal precision of action potentials. We test the performance of this stimulus in simulations for two different stochastic neuron models. For a broad range of parameters and a neuron firing with intermediate firing rates (20-40 Hz) the reliability in evoking the prescribed spike train is close to its theoretical maximum that is mainly determined by the level of intrinsic noise.

  17. Neuronal avalanches and learning

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems in biology. While the physical and chemical mechanisms underlying the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently shown features in common with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by power law distributions of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule can be learned provided that the plastic adaptation is sufficiently slow.
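
    As a concrete companion to the avalanche statistics discussed above, this sketch extracts avalanche sizes from a binned activity trace (consecutive active bins bounded by silent bins) and tabulates the size distribution on logarithmic bins, which is how a power-law tail is usually inspected. The Poisson surrogate trace is a placeholder, not output of the model discussed in this contribution.

      import numpy as np

      def avalanche_sizes(activity, threshold=0):
          """Split a binned activity trace (e.g. number of firing neurons per time bin) into avalanches:
          runs of consecutive bins above threshold, bounded by silent bins. Returns each avalanche's size."""
          sizes, current = [], 0
          for a in activity:
              if a > threshold:
                  current += a
              elif current > 0:
                  sizes.append(current)
                  current = 0
          if current > 0:
              sizes.append(current)
          return np.array(sizes)

      rng = np.random.default_rng(0)
      activity = rng.poisson(0.9, size=100_000)        # surrogate binned spike counts
      sizes = avalanche_sizes(activity)
      bins = np.logspace(0, np.log10(sizes.max() + 1), 20)
      hist, edges = np.histogram(sizes, bins=bins, density=True)
      print(np.column_stack((edges[:-1], hist))[:10])  # a power law would look roughly straight in log-log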

  18. Network inference from functional experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

    Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy makes it possible to observe the calcium or electrical activity of thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method were determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental works, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and epileptic
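
    Of the methods compared above, lagged cross-correlation is the simplest to spell out. The sketch below scores a directed link i -> j by the peak correlation between neuron i's z-scored fluorescence trace and neuron j's trace shifted by a positive lag; the threshold and maximum lag are arbitrary illustrative choices and, as noted above, such a linear score misses nonlinear interactions.

      import numpy as np

      def xcorr_connectivity(traces, max_lag=5, threshold=0.3):
          """Cross-correlation connectivity inference from fluorescence traces (n_neurons x n_frames).
          A directed link i -> j is proposed when the best positive-lag correlation exceeds the threshold."""
          n = traces.shape[0]
          z = (traces - traces.mean(axis=1, keepdims=True)) / traces.std(axis=1, keepdims=True)
          adj = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  corrs = [np.mean(z[i, :-lag] * z[j, lag:]) for lag in range(1, max_lag + 1)]
                  if max(corrs) > threshold:
                      adj[i, j] = max(corrs)
          return adj

      rng = np.random.default_rng(0)
      traces = rng.normal(size=(3, 2000))
      traces[1, 2:] += 0.8 * traces[0, :-2]            # neuron 0 drives neuron 1 with a 2-frame delay
      print(xcorr_connectivity(traces))                # expect a single supra-threshold entry at (0, 1)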

  19. A computational model of the respiratory network challenged and optimized by data from optogenetic manipulation of glycinergic neurons.

    PubMed

    Oku, Yoshitaka; Hülsmann, Swen

    2017-04-07

    The topology of the respiratory network in the brainstem has been addressed using different computational models, which help to understand the functional properties of the system. We tested a neural mass model by comparing the result of activation and inhibition of inhibitory neurons in silico with recently published results of optogenetic manipulation of glycinergic neurons [Sherman, et al. (2015) Nat Neurosci 18:408]. The comparison revealed that a five-cell type model consisting of three classes of inhibitory neurons [I-DEC, E-AUG, E-DEC (PI)] and two excitatory populations (pre-I/I) and (I-AUG) neurons can be applied to explain experimental observations made by stimulating or inhibiting inhibitory neurons by light sensitive ion channels. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Criticality in Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Friedman, Nir; Ito, Shinya; Brinkman, Braden A. W.; Shimono, Masanori; Deville, R. E. Lee; Beggs, John M.; Dahmen, Karin A.; Butler, Tom C.

    2012-02-01

    In recent years, experiments detecting the electrical firing patterns in slices of in vitro brain tissue have been analyzed to suggest the presence of scale invariance and possibly criticality in the brain. Much of the work done however has been limited in two ways: 1) the data collected is from local field potentials that do not represent the firing of individual neurons; 2) the analysis has been primarily limited to histograms. In our work we examine data based on the firing of individual neurons (spike data), and greatly extend the analysis by considering shape collapse and exponents. Our results strongly suggest that the brain operates near a tuned critical point of a highly distinctive universality class.