Science.gov

Sample records for neuron networks method

  1. Optimization Methods for Spiking Neurons and Networks

    PubMed Central

    Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph

    2011-01-01

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time-consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single-neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate-and-fire-with-adaptation neurons. The GA approach is demonstrated both in software simulation and in a hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265
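
    As a hedged illustration of the genetic-algorithm half of this approach, the sketch below evolves three parameters of a toy integrate-and-fire-with-adaptation neuron toward a target firing rate. The neuron model, parameter ranges, and fitness function are simplifications invented here, not the authors' chip-calibrated setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lif_adapt(tau_m, b, v_th, I=1.5, dt=1e-3, T=2.0):
    """Euler-integrated leaky integrate-and-fire neuron with spike-triggered
    adaptation; returns the firing rate in Hz (time constants in seconds)."""
    v, w, spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (-v + I - w) / tau_m
        w += dt * (-w / 0.2)                # adaptation decays with a 200 ms time constant
        if v >= v_th:
            v = 0.0
            w += b                          # spike-triggered adaptation increment
            spikes += 1
    return spikes / T

def fitness(params, target_rate=20.0):
    """Higher is better: negative distance between simulated and target rate."""
    return -abs(simulate_lif_adapt(*params) - target_rate)

# Genetic algorithm: truncation selection plus Gaussian mutation over
# (tau_m, adaptation increment, threshold), kept inside simple box bounds.
lo, hi = np.array([0.005, 0.0, 0.5]), np.array([0.05, 2.0, 1.5])
pop = rng.uniform(lo, hi, size=(40, 3))
for generation in range(30):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                      # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=30)] + rng.normal(0, 0.02, (30, 3))
    pop = np.vstack([parents, np.clip(children, lo, hi)])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best (tau_m, b, v_th):", best, "-> rate", simulate_lif_adapt(*best), "Hz")
```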

  2. A systematic method for configuring VLSI networks of spiking neurons.

    PubMed

    Neftci, Emre; Chicca, Elisabetta; Indiveri, Giacomo; Douglas, Rodney

    2011-10-01

    An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brainlike real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brainlike tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter mapping technique that permits an automatic configuration of VLSI neural networks so that their electronic emulation conforms to a higher-level neuronal simulation. We show that the neurons configured by our method exhibit spike timing statistics and temporal dynamics that are the same as those observed in the software-simulated neurons and, in particular, that the key parameters of recurrent VLSI neural networks (e.g., implementing soft winner-take-all) can be precisely tuned. The proposed method permits a seamless integration of software simulations with hardware emulations and intertranslatability between the parameters of abstract neuronal models and their emulation counterparts. Most important, our method offers a route toward a high-level task configuration language for neuromorphic VLSI systems.

  3. Epileptic Neuronal Networks: Methods of Identification and Clinical Relevance

    PubMed Central

    Stefan, Hermann; Lopes da Silva, Fernando H.

    2012-01-01

    The main objective of this paper is to examine evidence for the concept that epileptic activity should be envisaged in terms of functional connectivity and dynamics of neuronal networks. Basic concepts regarding structure and dynamics of neuronal networks are briefly described. Particular attention is given to approaches that are derived, or related, to the concept of causality, as formulated by Granger. Linear and non-linear methodologies aiming at characterizing the dynamics of neuronal networks applied to EEG/MEG and combined EEG/fMRI signals in epilepsy are critically reviewed. The relevance of functional dynamical analysis of neuronal networks with respect to clinical queries in focal cortical dysplasias, temporal lobe epilepsies, and “generalized” epilepsies is emphasized. In the light of the concepts of epileptic neuronal networks, and recent experimental findings, the dichotomic classification in focal and generalized epilepsy is re-evaluated. It is proposed that so-called “generalized epilepsies,” such as absence seizures, are actually fast spreading epilepsies, the onset of which can be tracked down to particular neuronal networks using appropriate network analysis. Finally new approaches to delineate epileptogenic networks are discussed. PMID:23532203
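
    For readers unfamiliar with the Granger-causality idea that several of the reviewed connectivity methods build on, here is a minimal pairwise, linear sketch on toy signals: a restricted autoregressive model of one channel is compared with a model that also sees the other channel's past. The function name, model order, and F-type statistic are illustrative; none of the clinical EEG/MEG pipelines discussed in the paper are reproduced here.

```python
import numpy as np

def granger_f(x, y, order=5):
    """F-type statistic for 'y Granger-causes x': compare an AR model of x that
    uses only x's own past with one that also uses y's past."""
    n = len(x)
    past_x = np.array([x[t - order:t][::-1] for t in range(order, n)])
    past_y = np.array([y[t - order:t][::-1] for t in range(order, n)])
    target = x[order:]

    def rss(design):
        design = np.column_stack([np.ones(len(target)), design])   # add an intercept
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.sum((target - design @ coef) ** 2)

    rss_restricted = rss(past_x)                      # x's past only
    rss_full = rss(np.hstack([past_x, past_y]))       # x's and y's past
    dof = len(target) - 2 * order - 1
    return ((rss_restricted - rss_full) / order) / (rss_full / dof)

# Toy example: y drives x with a one-sample delay, so the y -> x statistic is
# large while the x -> y statistic stays near 1.
rng = np.random.default_rng(1)
y = rng.normal(size=2000)
x = 0.8 * np.roll(y, 1) + 0.2 * rng.normal(size=2000)
print("y -> x:", granger_f(x, y))
print("x -> y:", granger_f(y, x))
```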

  4. Epileptic neuronal networks: methods of identification and clinical relevance.

    PubMed

    Stefan, Hermann; Lopes da Silva, Fernando H

    2013-01-01

    The main objective of this paper is to examine evidence for the concept that epileptic activity should be envisaged in terms of functional connectivity and dynamics of neuronal networks. Basic concepts regarding structure and dynamics of neuronal networks are briefly described. Particular attention is given to approaches that are derived, or related, to the concept of causality, as formulated by Granger. Linear and non-linear methodologies aiming at characterizing the dynamics of neuronal networks applied to EEG/MEG and combined EEG/fMRI signals in epilepsy are critically reviewed. The relevance of functional dynamical analysis of neuronal networks with respect to clinical queries in focal cortical dysplasias, temporal lobe epilepsies, and "generalized" epilepsies is emphasized. In the light of the concepts of epileptic neuronal networks, and recent experimental findings, the dichotomic classification in focal and generalized epilepsy is re-evaluated. It is proposed that so-called "generalized epilepsies," such as absence seizures, are actually fast spreading epilepsies, the onset of which can be tracked down to particular neuronal networks using appropriate network analysis. Finally new approaches to delineate epileptogenic networks are discussed.

  5. Method of derivation and differentiation of mouse embryonic stem cells generating synchronous neuronal networks.

    PubMed

    Gazina, Elena V; Morrisroe, Emma; Mendis, Gunarathna D C; Michalska, Anna E; Chen, Joseph; Nefzger, Christian M; Rollo, Benjamin N; Reid, Christopher A; Pera, Martin F; Petrou, Steven

    2017-08-18

    Stem cell-derived neuronal cultures hold great promise for in vitro disease modelling and drug screening. However, stem cell-derived neuronal cultures currently do not recapitulate the functional properties of primary neurons, such as network properties. Cultured primary murine neurons develop networks which are synchronised over large fractions of the culture, whereas neurons derived from mouse embryonic stem cells (ESCs) display only partly synchronised network activity and human pluripotent stem cell-derived neurons have mostly asynchronous network properties. Therefore, strategies to improve the correspondence of derived neuronal cultures with primary neurons need to be developed to validate the use of stem cell-derived neuronal cultures as in vitro models. By combining serum-free derivation of ESCs from mouse blastocysts with neuronal differentiation of ESCs in morphogen-free adherent culture we generated neuronal networks with properties recapitulating those of mature primary cortical cultures. After 35 days of differentiation ESC-derived neurons developed network activity very similar to that of mature primary cortical neurons. Importantly, ESC plating density was critical for network development. Compared to previously published methods this protocol generated more synchronous neuronal networks, with high similarity to the networks formed in mature primary cortical culture. We have demonstrated that ESC-derived neuronal networks recapitulating key properties of mature primary cortical networks can be generated by optimising both stem cell derivation and differentiation. This validates the approach of using ESC-derived neuronal cultures for disease modelling and in vitro drug screening. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Application of complex network method to spatiotemporal patterns in a neuronal network

    NASA Astrophysics Data System (ADS)

    Wang, Rong; Li, Jiajia; Wang, Li; Yang, Yong; Lin, Pan; Wu, Ying

    2016-12-01

    Spiral waves have been found to appear alternately with plane waves in the brain's cerebral cortex, which has a significant effect on neuronal firing behaviors. In this paper, we propose a functional firing network based on the correlated firing behaviors among neuronal populations and use the complex network method to investigate the effects of spiral waves and plane waves on the structure and function of the network. We first analyze the correlation coefficient and the largest eigenvalue of the functional firing network. We find a broader distribution of correlation coefficients and a larger largest eigenvalue of the functional firing network for spiral waves than for plane waves, which indicates that spiral waves induce higher network synchronization. In addition, we explore the topological structure of the functional firing network using the complex network method. We find that the functional firing network for spiral waves has a larger degree and global efficiency and a lower modularity and characteristic path length than that for plane waves, revealing that spiral waves contribute to neural information transmission and strengthen functional integration. Our work not only provides new insights for studying spatiotemporal patterns, but is also helpful for explaining the modulation of spiral waves on brain function.
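
    A rough sketch of the kind of functional-network analysis described here, assuming a recent networkx is available: pairwise correlations between toy activity traces are thresholded into a binary graph, and the largest eigenvalue, degree, global efficiency, characteristic path length, and modularity are computed. The threshold and toy data are arbitrary stand-ins for the paper's spiral-wave and plane-wave firing patterns.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(2)

# Toy data standing in for per-neuron activity traces (neurons x time bins);
# the first 30 "neurons" share a common signal and so are mutually correlated.
rates = rng.normal(size=(60, 500))
rates[:30] += rng.normal(size=500)

corr = np.corrcoef(rates)                               # pairwise correlation coefficients
print("largest eigenvalue:", np.linalg.eigvalsh(corr)[-1])

# Binary functional network: connect pairs whose correlation exceeds a threshold.
adj = (np.abs(corr) > 0.3).astype(int)
np.fill_diagonal(adj, 0)
G = nx.from_numpy_array(adj)

print("mean degree:", np.mean([d for _, d in G.degree()]))
print("global efficiency:", nx.global_efficiency(G))
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("characteristic path length (largest component):", nx.average_shortest_path_length(giant))
comms = community.greedy_modularity_communities(G)
print("modularity:", community.modularity(G, comms))
```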

  7. Cryopreservation of adherent neuronal networks.

    PubMed

    Ma, Wu; O'Shaughnessy, Thomas; Chang, Eddie

    2006-07-31

    Neuronal networks have been widely used for neurophysiology, drug discovery and toxicity testing. An essential prerequisite for future widespread application of neuronal networks is the development of efficient cryopreservation protocols to facilitate their storage and transportation. Here is the first report on cryopreservation of mammalian adherent neuronal networks. Dissociated spinal cord cells were attached to a poly-d-lysine/laminin surface and allowed to form neuronal networks. Adherent neuronal networks were embedded in a thin film of collagen gel and loaded with trehalose prior to transfer to a freezing medium containing DMSO, FBS and culture medium. This was followed by a slow rate of cooling to -80 degrees C for 24 h and then storage for up to 2 months in liquid nitrogen at -196 degrees C. The three components combined (DMSO, collagen gel entrapment and trehalose loading) provided the highest post-thaw viability, relative to one- or two-component protocols. The post-thaw cells with this protocol demonstrated similar neuronal and astrocytic markers and morphological structure as those detected in unfrozen cells. Fluorescent dye FM1-43 staining revealed active recycling of synaptic vesicles upon depolarizing stimulation in the post-thaw neuronal networks. These results suggest that a combination of DMSO, collagen gel entrapment and trehalose loading can significantly improve conventional slow-cooling methods in cryopreservation of adherent neuronal networks.

  8. Micropatterning neuronal networks.

    PubMed

    Hardelauf, Heike; Waide, Sarah; Sisnaiske, Julia; Jacob, Peter; Hausherr, Vanessa; Schöbel, Nicole; Janasek, Dirk; van Thriel, Christoph; West, Jonathan

    2014-07-07

    Spatially organised neuronal networks have wide reaching applications, including fundamental research, toxicology testing, pharmaceutical screening and the realisation of neuronal implant interfaces. Despite the large number of methods catalogued in the literature there remains the need to identify a method that delivers high pattern compliance, long-term stability and is widely accessible to neuroscientists. In this comparative study, aminated (polylysine/polyornithine and aminosilanes) and cytophobic (poly(ethylene glycol) (PEG) and methylated) material contrasts were evaluated. Backfilling plasma stencilled PEGylated substrates with polylysine does not produce good material contrasts, whereas polylysine patterned on methylated substrates becomes mobilised by agents in the cell culture media which results in rapid pattern decay. Aminosilanes, polylysine substitutes, are prone to hydrolysis and the chemistries prove challenging to master. Instead, the stable coupling between polylysine and PLL-g-PEG can be exploited: Microcontact printing polylysine onto a PLL-g-PEG coated glass substrate provides a simple means to produce microstructured networks of primary neurons that have superior pattern compliance during long term (>1 month) culture.

  9. Spontaneous Calcium Changes in Micro Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Saito, Aki; Moriguchi, Hiroyuki; Iwabuchi, Shin; Goto, Miho; Takayama, Yuzo; Kotani, Kiyoshi; Jimbo, Yasuhiko

    We have developed a practical experimental method to mass-produce and maintain a variety of minimal neuronal networks (“micro neuronal networks”) consisting of a single to several neurons in culture using a spray-patterning technique. In this paper, we maintained the micro-cultures for one month or more by adding conditioned medium, carried out optical recording of spontaneous activity in micro neuronal networks, and examined the interactions between them. To determine these interactions, fluorescence changes in several small networks were simultaneously measured using the calcium indicator dye fluo-4 AM, and time-series analysis was carried out using surrogate data. By using the spray-patterning method, a large number of cell-adhesive micro regions were formed. Neurons extended neurites along the edge of the cell-adhesive micro regions and formed micro neuronal networks. In some micro regions, neurites protruded from the region, so that micro neuronal networks became connected by synapses. In these networks, network activity induced by a single neuron was observed. On the other hand, even in morphologically non-connected micro neuronal networks, synchronous oscillations between micro neuronal networks were observed. Our micro-patterning methods and results raise the possibility that synchronous activity occurs between morphologically non-connected neuronal networks. This suggests that humoral factors are also an important component of network-wide dynamics.

  10. Modeling spiking activity of in vitro neuronal networks through non linear methods.

    PubMed

    Maffezzoli, A; Signorini, M G; Gullo, F; Wanke, E

    2008-01-01

    Neuroscience research increasingly exploits technologies developed for electronic engineering: this is the case of micro-electrode array (MEA) technology, an instrument able to acquire in vitro neuron spiking activity from a finite number of channels. In this work we present three models of synaptic neuronal network connections, called 'Full-Connected', 'Hierarchical' and 'Closed-Path'. For each one we implemented an index giving quantitative measures of similarity and of statistical dependence among neuron activities recorded in different MEA channels. These indices are based on Information Theory techniques such as Mutual and Multi Information, the latter extending pair-wise information to higher-order connections on the entire MEA neuronal network. We calculated the index for each model in order to test the presence of self-synchronization among neurons evolving in time, in response to external stimuli such as the application of chemical neuron inhibitors. The availability of such different models also helps us to investigate how far the synaptic connections are spatially sparse or hierarchically structured and, finally, how much of the information exchanged on the neuronal network is regulated by higher-order correlations.
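
    The mutual-information and multi-information indices mentioned here can be estimated from binarized spike counts roughly as in the sketch below. This is a naive plug-in estimator on toy data, not the authors' models or their 'Full-Connected'/'Hierarchical'/'Closed-Path' indices; all names and bin choices are illustrative.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a histogram of counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b):
    """Mutual information (bits) between two binarized spike-count channels."""
    joint, _, _ = np.histogram2d(a, b, bins=2)
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint.ravel())

def multi_information(channels):
    """Multi-information: sum of marginal entropies minus the joint entropy,
    extending the pairwise measure to the whole set of channels."""
    marginals = sum(entropy(np.bincount(c, minlength=2).astype(float)) for c in channels)
    codes = (2 ** np.arange(channels.shape[0])) @ channels      # joint binary pattern per bin
    _, counts = np.unique(codes, return_counts=True)
    return marginals - entropy(counts.astype(float))

# Toy binarized activity for three MEA channels; channel 2 mostly copies channel 0.
rng = np.random.default_rng(3)
ch = rng.integers(0, 2, size=(3, 5000))
ch[2] = np.where(rng.random(5000) < 0.9, ch[0], 1 - ch[0])
print("MI(ch0, ch1):", mutual_information(ch[0], ch[1]))
print("MI(ch0, ch2):", mutual_information(ch[0], ch[2]))
print("multi-information of all three channels:", multi_information(ch))
```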

  11. Shape, connectedness and dynamics in neuronal networks.

    PubMed

    Comin, Cesar Henrique; da Fontoura Costa, Luciano

    2013-11-15

    The morphology of neurons is directly related to several aspects of the nervous system, including its connectedness, health, development, evolution, dynamics and, ultimately, behavior. Such interplays of the neuronal morphology can be understood within the more general shape-function paradigm. The current article reviews, in an introductory way, some key issues regarding the role of neuronal morphology in the nervous system, with emphasis on works developed in the authors' group. The following topics are addressed: (a) characterization of neuronal shape; (b) stochastic synthesis of neurons and neuronal systems; (c) characterization of the connectivity of neuronal networks by using complex networks concepts; and (d) investigations of influences of neuronal shape on network dynamics. The presented concepts and methods are useful also for several other multiple object systems, such as protein-protein interaction, tissues, aggregates and polymers.

  12. A comparison of computational methods for detecting bursts in neuronal spike trains and their application to human stem cell-derived neuronal networks.

    PubMed

    Cotterill, Ellese; Charlesworth, Paul; Thomas, Christopher W; Paulsen, Ole; Eglen, Stephen J

    2016-08-01

    Accurate identification of bursting activity is an essential element in the characterization of neuronal network activity. Despite this, no one technique for identifying bursts in spike trains has been widely adopted. Instead, many methods have been developed for the analysis of bursting activity, often on an ad hoc basis. Here we provide an unbiased assessment of the effectiveness of eight of these methods at detecting bursts in a range of spike trains. We suggest a list of features that an ideal burst detection technique should possess and use synthetic data to assess each method in regard to these properties. We further employ each of the methods to reanalyze microelectrode array (MEA) recordings from mouse retinal ganglion cells and examine their coherence with bursts detected by a human observer. We show that several common burst detection techniques perform poorly at analyzing spike trains with a variety of properties. We identify four promising burst detection techniques, which are then applied to MEA recordings of networks of human induced pluripotent stem cell-derived neurons and used to describe the ontogeny of bursting activity in these networks over several months of development. We conclude that no current method can provide "perfect" burst detection results across a range of spike trains; however, two burst detection techniques, the MaxInterval and logISI methods, outperform the others. We provide recommendations for the robust analysis of bursting activity in experimental recordings using current techniques.
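
    A simplified, hedged sketch of a MaxInterval-style detector follows; the published method uses a similar set of inter-spike-interval and burst-duration thresholds, but the parameter values, merging rule, and toy spike train below are illustrative, not the paper's benchmark settings.

```python
import numpy as np

def max_interval_bursts(spike_times, max_start_isi=0.1, max_isi=0.2,
                        min_spikes=3, min_ibi=0.3):
    """Simplified MaxInterval-style burst detector.

    A burst starts when an inter-spike interval falls below max_start_isi,
    extends while ISIs stay below max_isi, must contain at least min_spikes
    spikes, and bursts closer than min_ibi are merged."""
    bursts = []
    i = 0
    while i < len(spike_times) - 1:
        if spike_times[i + 1] - spike_times[i] <= max_start_isi:
            j = i + 1
            while j < len(spike_times) - 1 and spike_times[j + 1] - spike_times[j] <= max_isi:
                j += 1
            if j - i + 1 >= min_spikes:
                bursts.append([spike_times[i], spike_times[j]])
            i = j + 1
        else:
            i += 1
    # Merge bursts separated by less than the minimum inter-burst interval.
    merged = []
    for b in bursts:
        if merged and b[0] - merged[-1][1] < min_ibi:
            merged[-1][1] = b[1]
        else:
            merged.append(b)
    return merged

# Toy spike train: two dense bursts on a background of sparse tonic spikes.
rng = np.random.default_rng(4)
train = np.sort(np.concatenate([
    np.cumsum(rng.exponential(1.0, 10)),              # sparse background
    2.0 + np.cumsum(rng.exponential(0.02, 15)),       # burst near t = 2 s
    5.0 + np.cumsum(rng.exponential(0.02, 15)),       # burst near t = 5 s
]))
print(max_interval_bursts(train))
```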

  13. Nanometric resolution magnetic resonance imaging methods for mapping functional activity in neuronal networks.

    PubMed

    Boretti, Albert; Castelletto, Stefania

    2016-01-01

    This contribution highlights and compares some recent achievements in the use of k-space and real-space imaging (scanning probe and wide-field microscopy techniques) when applied to a luminescent color center in diamond, known as the nitrogen vacancy (NV) center. These techniques, combined with the optically detected magnetic resonance of the NV center, provide a unique platform to achieve nanometric magnetic resonance imaging (MRI) resolution of nearby nuclear spins (known as nanoMRI) and nanometric NV real-space localization. Highlights: atomic-size optically detectable spin probe; high magnetic field sensitivity and nanometric resolution; non-invasive mapping of functional activity in neuronal networks.

  14. STDP in Recurrent Neuronal Networks

    PubMed Central

    Gilson, Matthieu; Burkitt, Anthony; van Hemmen, J. Leo

    2010-01-01

    Recent results about spike-timing-dependent plasticity (STDP) in recurrently connected neurons are reviewed, with a focus on the relationship between the weight dynamics and the emergence of network structure. In particular, the evolution of synaptic weights in the two cases of incoming connections for a single neuron and recurrent connections are compared and contrasted. A theoretical framework is used that is based upon Poisson neurons with a temporally inhomogeneous firing rate and the asymptotic distribution of weights generated by the learning dynamics. Different network configurations examined in recent studies are discussed and an overview of the current understanding of STDP in recurrently connected neuronal networks is presented. PMID:20890448
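
    For orientation, a minimal pair-based additive STDP rule with exponential windows is sketched below. This is the textbook form of the plasticity rule, not the Poisson-neuron theoretical framework analyzed in the review, and all constants are illustrative.

```python
import numpy as np

# Pair-based additive STDP: potentiation when a presynaptic spike precedes a
# postsynaptic spike, depression for the reverse order, with exponential windows.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 0.020, 0.020        # window time constants in seconds

def stdp_dw(pre_spikes, post_spikes):
    """Total weight change summed over all pre/post spike pairs."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                dw += A_PLUS * np.exp(-dt / TAU_PLUS)
            elif dt < 0:
                dw -= A_MINUS * np.exp(dt / TAU_MINUS)
    return dw

# Pre leading post by 5 ms strengthens the synapse; the reverse timing weakens it.
pre = np.array([0.100, 0.200, 0.300])
print("pre -> post:", stdp_dw(pre, pre + 0.005))
print("post -> pre:", stdp_dw(pre, pre - 0.005))
```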

  15. Whole-brain neural network analysis (connectomics) using cell lineage-based neuron-labeling method.

    PubMed

    Ito, Kei; Ito, Masayoshi

    2014-11-01

    The brain is a computing machine that receives input signals from sensory neurons, calculates the best responses to changing environments, and sends output signals to motor muscles. How such computation is materialized remains largely unknown. Understanding the entire wiring network of neural connections in the brain, recently termed connectomics (connection + omics), should provide indispensable insights into this problem. To resolve the circuit diagram from the tangled thickets of neural fibers, only a small subset of neurons should be visualized at one time. Previous studies visualized such selected cells by injecting dyes or by detecting specific molecules or gene expression patterns using antibodies and expression driver strains. These approaches were unfortunately not efficient enough for identifying all the brain cells in a comprehensive and systematic manner. Neurons are generated by neural stem cells. The entire neural population can therefore be divided into a finite number of families - or clones - of the cells that are the progeny of each single stem cell. The central brain of the fruit fly Drosophila melanogaster consists of about 15,000 neurons per side and is made by at most 100 stem cells. By genetically labeling one such stem cell and tracing the projection patterns of its progeny in the adult brain, we were able to identify the neural projections of almost all the clonal cell groups. To visualize these neural projections, we made serial optical sections of the fly brain using laser confocal microscopy. Because of its relatively small size (0.6-mm wide and less than 0.3-mm thick), the entire fly brain can be imaged using high-resolution objectives with NA 1.2. Neuronal fibers are visualized by ectopically expressed cytoplasmic and membrane-bound fluorescent proteins, and the output synaptic sites are visualized with ectopically expressed tag proteins that are fused with proteins associated with synaptic vesicles. In addition, density

  16. From the neuron doctrine to neural networks.

    PubMed

    Yuste, Rafael

    2015-08-01

    For over a century, the neuron doctrine--which states that the neuron is the structural and functional unit of the nervous system--has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.

  17. Network synchronization in hippocampal neurons.

    PubMed

    Penn, Yaron; Segal, Menahem; Moses, Elisha

    2016-03-22

    Oscillatory activity is widespread in dynamic neuronal networks. The main paradigm for the origin of periodicity consists of specialized pacemaking elements that synchronize and drive the rest of the network; however, other models exist. Here, we studied the spontaneous emergence of synchronized periodic bursting in a network of cultured dissociated neurons from rat hippocampus and cortex. Surprisingly, about 60% of all active neurons were self-sustained oscillators when disconnected, each with its own natural frequency. The individual neuron's tendency to oscillate and the corresponding oscillation frequency are controlled by its excitability. The single neuron intrinsic oscillations were blocked by riluzole, and are thus dependent on persistent sodium leak currents. Upon a gradual retrieval of connectivity, the synchrony evolves: Loose synchrony appears already at weak connectivity, with the oscillators converging to one common oscillation frequency, yet shifted in phase across the population. Further strengthening of the connectivity causes a reduction in the mean phase shifts until zero-lag is achieved, manifested by synchronous periodic network bursts. Interestingly, the frequency of network bursting matches the average of the intrinsic frequencies. Overall, the network behaves like other universal systems, where order emerges spontaneously by entrainment of independent rhythmic units. Although simplified with respect to circuitry in the brain, our results attribute a basic functional role for intrinsic single neuron excitability mechanisms in driving the network's activity and dynamics, contributing to our understanding of developing neural circuits.
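
    The entrainment picture described here (independent oscillators pulling toward one common rhythm as coupling grows, with the network frequency near the mean of the intrinsic frequencies) can be caricatured with globally coupled Kuramoto phase oscillators. The sketch below is a deliberately abstract stand-in, not the authors' biophysical neurons, and all parameters are illustrative.

```python
import numpy as np

def simulate_kuramoto(coupling, n=100, dt=0.01, steps=20000, seed=5):
    """Globally coupled Kuramoto phase oscillators with heterogeneous intrinsic
    frequencies; returns the final order parameter and the mean intrinsic frequency."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(2 * np.pi * 1.0, 2 * np.pi * 0.2, n)   # ~1 Hz +/- 0.2 Hz oscillators
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))                        # complex mean field
        theta = theta + dt * (omega + coupling * np.abs(z) * np.sin(np.angle(z) - theta))
    r = np.abs(np.mean(np.exp(1j * theta)))
    return r, np.mean(omega) / (2 * np.pi)

# Increasing the coupling pulls the oscillators toward one common rhythm whose
# frequency sits at the mean of the intrinsic frequencies.
for k in (0.0, 1.0, 4.0):
    r, f_mean = simulate_kuramoto(k)
    print(f"coupling={k}: order parameter r={r:.2f}, mean intrinsic frequency={f_mean:.2f} Hz")
```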

  18. Observability of Neuronal Network Motifs

    PubMed Central

    Whalen, Andrew J.; Brennan, Sean N.; Sauer, Timothy D.; Schiff, Steven J.

    2014-01-01

    We quantify observability in small (3 node) neuronal networks as a function of 1) the connection topology and symmetry, 2) the measured nodes, and 3) the nodal dynamics (linear and nonlinear). We find that typical observability metrics for 3 neuron motifs range over several orders of magnitude, depending upon topology, and for motifs containing symmetry the network observability decreases when observing from particularly confounded nodes. Nonlinearities in the nodal equations generally decrease the average network observability and full network information becomes available only in limited regions of the system phase space. Our findings demonstrate that such networks are partially observable, and suggest their potential efficacy in reconstructing network dynamics from limited measurement data. How well such strategies can be used to reconstruct and control network dynamics in experimental settings is a subject for future experimental work. PMID:25909092
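
    A hedged sketch of the linear part of such an analysis: for a 3-node linear network x' = Ax observed from one node, the smallest singular value of the observability matrix indicates how well the full state can be reconstructed from that node. The chain topology and nodal dynamics below are invented for illustration and are not the specific motifs or metrics used in the paper.

```python
import numpy as np

def observability_index(A, c):
    """Smallest singular value of the observability matrix O = [c; cA; cA^2] for a
    3-node linear system x' = A x observed through y = c x (zero means the network
    cannot be reconstructed from that node)."""
    O = np.vstack([c, c @ A, c @ A @ A])
    return np.linalg.svd(O, compute_uv=False)[-1]

# A directed 3-node chain 0 -> 1 -> 2 with leaky nodal dynamics.
A = np.array([[-1.0,  0.0,  0.0],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0]])

# Only the most downstream node "sees" the whole chain; from the upstream nodes
# part of the network is invisible and the index drops to zero.
for node in range(3):
    c = np.zeros((1, 3))
    c[0, node] = 1.0
    print(f"observing node {node}: smallest singular value = {observability_index(A, c):.3f}")
```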

  19. Network synchronization in hippocampal neurons

    PubMed Central

    Penn, Yaron; Segal, Menahem; Moses, Elisha

    2016-01-01

    Oscillatory activity is widespread in dynamic neuronal networks. The main paradigm for the origin of periodicity consists of specialized pacemaking elements that synchronize and drive the rest of the network; however, other models exist. Here, we studied the spontaneous emergence of synchronized periodic bursting in a network of cultured dissociated neurons from rat hippocampus and cortex. Surprisingly, about 60% of all active neurons were self-sustained oscillators when disconnected, each with its own natural frequency. The individual neuron’s tendency to oscillate and the corresponding oscillation frequency are controlled by its excitability. The single neuron intrinsic oscillations were blocked by riluzole, and are thus dependent on persistent sodium leak currents. Upon a gradual retrieval of connectivity, the synchrony evolves: Loose synchrony appears already at weak connectivity, with the oscillators converging to one common oscillation frequency, yet shifted in phase across the population. Further strengthening of the connectivity causes a reduction in the mean phase shifts until zero-lag is achieved, manifested by synchronous periodic network bursts. Interestingly, the frequency of network bursting matches the average of the intrinsic frequencies. Overall, the network behaves like other universal systems, where order emerges spontaneously by entrainment of independent rhythmic units. Although simplified with respect to circuitry in the brain, our results attribute a basic functional role for intrinsic single neuron excitability mechanisms in driving the network’s activity and dynamics, contributing to our understanding of developing neural circuits. PMID:26961000

  20. The Bifurcating Neuron network 1.

    PubMed

    Lee, G; Farhat, N H

    2001-01-01

    The Bifurcating Neuron (BN), a chaotic integrate-and-fire neuron, is a model of a neuron augmented by coherent modulation from its environment. The BN is mathematically equivalent to the sine-circle map, and this equivalence relationship allowed us to apply the mathematics of one-dimensional maps to the design of BN networks. The study of symmetry in the BN revealed that the BN can be configured to exhibit bistability that is controlled by attractor-merging crisis. Also, the symmetry of the bistability can be controlled by the introduction of a sinusoidal fluctuation to the threshold level of the BN. These two observations led us to the design of the BN Network 1 (BNN-1), a chaotic pulse-coupled neural network exhibiting associative memory. In numerical simulations, the BNN-1 showed a better performance than the continuous-time Hopfield network as far as the spurious-minima problem is concerned, and exhibited many biologically plausible characteristics.
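
    Since the BN is stated to be mathematically equivalent to the sine-circle map, the map itself is easy to explore numerically. The sketch below iterates the standard sine-circle map and estimates the winding number; parameter values are illustrative, and this is not the BNN-1 network itself.

```python
import numpy as np

def winding_number(omega, k, theta0=0.2, n=5000):
    """Iterate the sine-circle map theta_{n+1} = theta_n + omega - (k/(2*pi))*sin(2*pi*theta_n)
    (mod 1) and return the mean rotation per iteration."""
    theta, total = theta0, 0.0
    for _ in range(n):
        step = omega - (k / (2 * np.pi)) * np.sin(2 * np.pi * theta)
        total += step
        theta = (theta + step) % 1.0
    return total / n

# Below k = 1 the map is invertible and orbits mode-lock on rational winding numbers
# (Arnold tongues); above k = 1 chaotic orbits become possible.
for k in (0.5, 0.99, 1.5):
    print(f"k = {k}: winding number ~ {winding_number(omega=0.35, k=k):.4f}")
```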

  1. Stages of neuronal network formation

    NASA Astrophysics Data System (ADS)

    Woiterski, Lydia; Claudepierre, Thomas; Luxenhofer, Robert; Jordan, Rainer; Käs, Josef A.

    2013-02-01

    Graph theoretical approaches have become a powerful tool for investigating the architecture and dynamics of complex networks. The topology of network graphs has revealed small-world properties for very different real systems, among them neuronal networks. In this study, we observed the early development of mouse retinal ganglion cell (RGC) networks in vitro using time-lapse video microscopy. By means of a time-resolved graph theoretical analysis of the connectivity, shortest path length and edge length, we were able to identify the different stages of network formation. Starting from single cells, in the first stage neurons connected to each other, ending up in a network of maximum complexity. Subsequently, we observed a simplification of the network, manifested in a change of relevant network parameters such as the minimization of the path length. Moreover, we found that RGC networks self-organized as small-world networks at both stages; however, the optimization occurred only in the second stage.

  2. Spiking neuron network Helmholtz machine.

    PubMed

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
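
    A minimal, non-spiking sketch of the wake-sleep delta rule that this work builds on: a tiny binary Helmholtz machine with one hidden layer, trained on two noisy prototypes. Layer sizes, learning rate, and data are invented for illustration, and this rate-based toy is not the spiking implementation described in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

n_vis, n_hid, lr = 6, 2, 0.05
R = np.zeros((n_vis, n_hid))      # recognition weights (visible -> hidden)
G = np.zeros((n_hid, n_vis))      # generative weights (hidden -> visible)
b_hid = np.zeros(n_hid)           # generative bias of the hidden layer
b_vis = np.zeros(n_vis)           # generative bias of the visible layer

# Toy data: two binary prototypes corrupted by a small amount of flip noise.
prototypes = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)

for step in range(20000):
    # Wake phase: recognise a data vector, then delta-rule update of the generative model.
    v = np.abs(prototypes[rng.integers(2)] - (rng.random(n_vis) < 0.05))
    h = sample(sigmoid(v @ R))
    b_hid += lr * (h - sigmoid(b_hid))
    p_v = sigmoid(h @ G + b_vis)
    G += lr * np.outer(h, v - p_v)
    b_vis += lr * (v - p_v)

    # Sleep phase: dream from the generative model, delta-rule update of recognition weights.
    h_dream = sample(sigmoid(b_hid))
    v_dream = sample(sigmoid(h_dream @ G + b_vis))
    R += lr * np.outer(v_dream, h_dream - sigmoid(v_dream @ R))

# After training, the hidden responses to the two prototypes typically differ,
# i.e. the hidden layer has learned an internal code for the data (not guaranteed
# on every random seed for such a tiny model).
for p in prototypes:
    print(p, "->", sigmoid(p @ R).round(2))
```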

  3. Spiking neuron network Helmholtz machine

    PubMed Central

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. PMID:25954191

  4. Dynamics of moment neuronal networks

    SciTech Connect

    Feng Jianfeng; Deng Yingchun; Rossoni, Enrico

    2006-04-15

    A theoretical framework is developed for moment neuronal networks (MNNs). Within this framework, the behavior of the system of spiking neurons is specified in terms of the first- and second-order statistics of their interspike intervals, i.e., the mean, the variance, and the cross correlations of spike activity. Since neurons emit and receive spike trains which can be described by renewal--but generally non-Poisson--processes, we first derive a suitable diffusion-type approximation of such processes. Two approximation schemes are introduced: the usual approximation scheme (UAS) and the Ornstein-Uhlenbeck scheme. It is found that both schemes approximate well the input-output characteristics of spiking models such as the IF and the Hodgkin-Huxley models. The MNN framework is then developed according to the UAS scheme, and its predictions are tested on a few examples.

  5. Advances in applications of spiking neuron networks

    NASA Astrophysics Data System (ADS)

    Cios, Krzysztof J.; Sala, Dorel M.

    2000-03-01

    In this paper, we present new findings in the construction and application of artificial neural networks that use a biologically inspired spiking neuron model. The model used is a point neuron with the interaction between neurons described by postsynaptic potentials. Synaptic plasticity is achieved by using a temporal correlation learning rule, specified as a function of the time difference between the firings of pre- and post-synaptic neurons. Using this rule we show how certain associations between neurons in a network of spiking neurons can be implemented. As an example we analyze the dynamic properties of networks of laterally connected spiking neurons and show their capability to self-organize into topological maps in response to external stimulation. In another application we explore the capability of networks of spiking neurons to solve graph algorithms by using temporal coding of distances in a given spatial configuration. The paper underlines the importance of the temporal dimension in artificial neural network information processing.

  6. Parallel network simulations with NEURON.

    PubMed

    Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L

    2006-10-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.

  7. Parallel Network Simulations with NEURON

    PubMed Central

    Migliore, M.; Cannia, C.; Lytton, W.W; Markram, Henry; Hines, M. L.

    2009-01-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored. PMID:16732488

  8. Robust Multiobjective Controllability of Complex Neuronal Networks.

    PubMed

    Tang, Yang; Gao, Huijun; Du, Wei; Lu, Jianquan; Vasilakos, Athanasios V; Kurths, Jurgen

    2016-01-01

    This paper addresses robust multiobjective identification of driver nodes in the neuronal network of a cat's brain, in which uncertainties in determination of driver nodes and control gains are considered. A framework for robust multiobjective controllability is proposed by introducing interval uncertainties and optimization algorithms. By appropriate definitions of robust multiobjective controllability, a robust nondominated sorting adaptive differential evolution (NSJaDE) is presented by means of the nondominated sorting mechanism and the adaptive differential evolution (JaDE). The simulation experimental results illustrate the satisfactory performance of NSJaDE for robust multiobjective controllability, in comparison with six statistical methods and two multiobjective evolutionary algorithms (MOEAs): nondominated sorting genetic algorithms II (NSGA-II) and nondominated sorting composite differential evolution. It is revealed that the existence of uncertainties in choosing driver nodes and designing control gains heavily affects the controllability of neuronal networks. We also unveil that driver nodes play a more drastic role than control gains in robust controllability. The developed NSJaDE and obtained results will shed light on the understanding of robustness in controlling realistic complex networks such as transportation networks, power grid networks, biological networks, etc.

  9. Neuronal Networks on Nanocellulose Scaffolds.

    PubMed

    Jonsson, Malin; Brackmann, Christian; Puchades, Maja; Brattås, Karoline; Ewing, Andrew; Gatenholm, Paul; Enejder, Annika

    2015-11-01

    Proliferation, integration, and neurite extension of PC12 cells, a widely used culture model for cholinergic neurons, were studied in nanocellulose scaffolds biosynthesized by Gluconacetobacter xylinus to allow a three-dimensional (3D) extension of neurites better mimicking neuronal networks in tissue. The interaction with control scaffolds was compared with cationized nanocellulose (trimethyl ammonium betahydroxy propyl [TMAHP] cellulose) to investigate the impact of surface charges on the cell interaction mechanisms. Furthermore, coatings with extracellular matrix proteins (collagen, fibronectin, and laminin) were investigated to determine the importance of integrin-mediated cell attachment. Cell proliferation was evaluated by a cellular proliferation assay, while cell integration and neurite propagation were studied by simultaneous label-free Coherent anti-Stokes Raman Scattering and second harmonic generation microscopy, providing 3D images of PC12 cells and arrangement of nanocellulose fibrils, respectively. Cell attachment and proliferation were enhanced by TMAHP modification, but not by protein coating. Protein coating instead promoted active interaction between the cells and the scaffold, hence lateral cell migration and integration. Irrespective of surface modification, deepest cell integration measured was one to two cell layers, whereas neurites have a capacity to integrate deeper than the cell bodies in the scaffold due to their fine dimensions and amoeba-like migration pattern. Neurites with lengths of >50 μm were observed, successfully connecting individual cells and cell clusters. In conclusion, TMAHP-modified nanocellulose scaffolds promote initial cellular scaffold adhesion, which combined with additional cell-scaffold treatments enables further formation of 3D neuronal networks.

  10. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Unlike processors in our current generation of computer hardware, networks of neurons in the brain apply an event-based processing strategy, in which short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
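
    To make the "energy function plus noise" idea concrete, here is a deliberately simplified, non-spiking analog: stochastic binary units perform a Boltzmann-style search over an energy that counts violated not-equal constraints on a small cycle graph. The paper's networks operate on single spikes composed from stereotypical motifs; the sketch below only illustrates the underlying stochastic-search principle, and the problem and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy constraint problem: two-colour the nodes of a 4-cycle so that neighbouring
# nodes differ.  The energy counts violated "not equal" constraints.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def energy(state):
    return sum(int(state[i] == state[j]) for i, j in edges)

# Stochastic binary units: a randomly picked unit flips with a Boltzmann (Glauber)
# probability that depends on how the flip changes the energy, so noise itself
# drives the search toward low-energy (constraint-satisfying) states.
state = rng.integers(0, 2, size=4)
temperature = 0.5
for _ in range(200):
    i = rng.integers(4)
    proposal = state.copy()
    proposal[i] = 1 - proposal[i]
    delta = energy(proposal) - energy(state)
    if rng.random() < 1.0 / (1.0 + np.exp(delta / temperature)):
        state = proposal

print("final colouring:", state, "violated constraints:", energy(state))
```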

  11. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.

    PubMed

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Unlike processors in our current generation of computer hardware, networks of neurons in the brain apply an event-based processing strategy, in which short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  12. Macroscopic Description for Networks of Spiking Neurons

    NASA Astrophysics Data System (ADS)

    Montbrió, Ernest; Pazó, Diego; Roxin, Alex

    2015-04-01

    A major goal of neuroscience, statistical physics, and nonlinear dynamics is to understand how brain function arises from the collective dynamics of networks of spiking neurons. This challenge has been chiefly addressed through large-scale numerical simulations. Alternatively, researchers have formulated mean-field theories to gain insight into macroscopic states of large neuronal networks in terms of the collective firing activity of the neurons, or the firing rate. However, these theories have not succeeded in establishing an exact correspondence between the firing rate of the network and the underlying microscopic state of the spiking neurons. This has largely constrained the range of applicability of such macroscopic descriptions, particularly when trying to describe neuronal synchronization. Here, we provide the derivation of a set of exact macroscopic equations for a network of spiking neurons. Our results reveal that the spike generation mechanism of individual neurons introduces an effective coupling between two biophysically relevant macroscopic quantities, the firing rate and the mean membrane potential, which together govern the evolution of the neuronal network. The resulting equations exactly describe all possible macroscopic dynamical states of the network, including states of synchronous spiking activity. Finally, we show that the firing-rate description is related, via a conformal map, to a low-dimensional description in terms of the Kuramoto order parameter, called Ott-Antonsen theory. We anticipate that our results will be an important tool in investigating how large networks of spiking neurons self-organize in time to process and encode information in the brain.
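
    For reference, the exact macroscopic equations derived in this line of work for quadratic integrate-and-fire neurons with Lorentzian-distributed excitability couple the firing rate r and the mean membrane potential v. The sketch below integrates equations of that form; the specific parameter values, chosen here to sit in a bistable regime, are an assumption for illustration rather than values quoted from the paper.

```python
import numpy as np

# Macroscopic equations of the form derived for quadratic integrate-and-fire
# neurons with Lorentzian-distributed excitability (membrane time constant = 1):
#   dr/dt = Delta/pi + 2*r*v
#   dv/dt = v**2 + eta_bar + J*r + I(t) - (pi*r)**2
# Parameter values below are illustrative choices intended to lie in a bistable regime.
DELTA, ETA_BAR, J = 1.0, -5.0, 15.0

def integrate(I_ext, r=0.0, v=-2.0, dt=1e-3, T=40.0):
    """Euler integration of the rate / mean-voltage equations under an external current."""
    ts = np.arange(0, T, dt)
    trace = np.empty((len(ts), 2))
    for k, t in enumerate(ts):
        dr = DELTA / np.pi + 2 * r * v
        dv = v ** 2 + ETA_BAR + J * r + I_ext(t) - (np.pi * r) ** 2
        r, v = r + dt * dr, v + dt * dv
        trace[k] = r, v
    return ts, trace

# A transient current step between t = 10 and t = 20 can kick the network from a
# low-activity branch toward a high-activity one.
ts, trace = integrate(lambda t: 3.0 if 10.0 < t < 20.0 else 0.0)
for label, t_probe in (("before", 5.0), ("during", 15.0), ("after", 35.0)):
    print(f"rate {label} the step: {trace[int(t_probe / 1e-3), 0]:.3f}")
```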

  13. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, also called synaptic weights) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  14. Vehicle dynamic analysis using neuronal network algorithms

    NASA Astrophysics Data System (ADS)

    Oloeriu, Florin; Mocian, Oana

    2014-06-01

    Theoretical developments in certain engineering areas, the emergence of new and more precise investigation tools, and their implementation on board everyday vehicles are the main factors influencing the theoretical and experimental study of vehicle dynamic behavior. The implementation of these new technologies in vehicle construction has led to more and more complex systems. Some of the most important, such as the electronic control of engine, transmission, suspension, steering, braking and traction, have had a positive impact on the vehicle's dynamic behavior. The existence of CPUs on board vehicles allows data acquisition and storage and leads to a more accurate experimental and theoretical study of vehicle dynamics, using information offered directly by the built-in elements of the electronic control systems. The technical literature on vehicle dynamics is focused entirely on parametric analysis. This kind of approach adopts two simplifying assumptions: that functional parameters obey distribution laws known from classical statistics, and that the mathematical models are known in advance, with coefficients that are not time-dependent. Neither assumption is confirmed in real situations: the functional parameters do not follow known statistical distribution laws, and the mathematical models are not known in advance, contain families of parameters, and are mostly time-dependent. The purpose of the paper is to present a more accurate analysis methodology for studying vehicle dynamic behavior. A method that provides non-parametric mathematical models of vehicle dynamic behavior relies on neuronal networks and yields coefficients that are time-dependent. Neuronal networks are mostly used in various types of system control, thus

  15. Inferring Single Neuron Properties in Conductance Based Balanced Networks

    PubMed Central

    Pool, Román Rossi; Mato, Germán

    2011-01-01

    Balanced states in large networks are a usual hypothesis for explaining the variability of neural activity in cortical systems. In this regime the statistics of the inputs are characterized by static and dynamic fluctuations. The dynamic fluctuations have a Gaussian distribution. Such statistics allow the use of reverse correlation methods, by recording synaptic inputs and the spike trains of ongoing spontaneous activity without any additional input. By using this method, properties of the single neuron dynamics that are masked by the balanced state can be quantified. To show the feasibility of this approach we apply it to large networks of conductance-based neurons. The networks are classified as Type I or Type II according to the bifurcations which neurons of the different populations undergo near the firing onset. We also analyze mixed networks, in which each population has a mixture of different neuronal types. We determine under which conditions the intrinsic noise generated by the network can be used to apply reverse correlation methods. We find that under realistic conditions we can ascertain with low error the types of neurons present in the network. We also find that data from neurons with similar firing rates can be combined to perform covariance analysis. We compare the results of these methods (which do not require any external input) to the standard procedure (which requires the injection of Gaussian noise into a single neuron). We find a good agreement between the two procedures. PMID:22016730
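
    The core operation behind this reverse-correlation approach is a spike-triggered average of the ongoing input. The sketch below computes one for a toy leaky integrate-and-fire neuron driven by a noisy current standing in for balanced network input; the neuron model, constants, and averaging window are illustrative, not the conductance-based networks studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
dt, tau_m, v_th = 1e-3, 0.02, 1.0

# Surrogate "ongoing balanced input": noisy current with a sub-threshold mean,
# standing in for the net synaptic drive a neuron receives from the network.
T = 200.0
current = 0.9 + 0.6 * rng.normal(size=int(T / dt))

# Drive a leaky integrate-and-fire neuron and record the indices of its spikes.
v, spikes = 0.0, []
for k, I in enumerate(current):
    v += dt * (-v + I) / tau_m
    if v >= v_th:
        v = 0.0
        spikes.append(k)

# Spike-triggered average of the input over the 100 ms preceding each spike.
window = int(0.1 / dt)
snippets = [current[k - window:k] for k in spikes if k >= window]
sta = np.mean(snippets, axis=0)
print("number of spikes:", len(snippets))
print("STA 100 ms before a spike vs. just before it:", round(sta[0], 3), round(sta[-1], 3))
```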

  16. Slow waves in mutually inhibitory neuronal networks

    NASA Astrophysics Data System (ADS)

    Jalics, Jozsi

    2004-05-01

    A variety of experimental and modeling studies have been performed to investigate wave propagation in networks of thalamic neurons and their relationship to spindle sleep rhythms. It is believed that spindle oscillations result from the reciprocal interaction between thalamocortical (TC) and thalamic reticular (RE) neurons. We consider a network of TC and RE cells reduced to a one-layer network model and represented by a system of singularly perturbed integral-differential equations. Geometric singular perturbation methods are used to prove the existence of a locally unique slow wave pulse that propagates along the network. By seeking a slow pulse solution, we reformulate the problem to finding a heteroclinic orbit in a 3D system of ODEs with two additional constraints on the location of the orbit at two distinct points in time. In proving the persistence of the singular heteroclinic orbit, difficulties arising from the solution passing near points where normal hyperbolicity is lost on a 2D critical manifold are overcome by employing results by Wechselberger [Singularly perturbed folds and canards in R3, Thesis, TU-Wien, 1998].

  17. Stiff substrates enhance cultured neuronal network activity.

    PubMed

    Zhang, Quan-You; Zhang, Yan-Yan; Xie, Jing; Li, Chen-Xu; Chen, Wei-Yi; Liu, Bai-Lin; Wu, Xiao-an; Li, Shu-Na; Huo, Bo; Jiang, Lin-Hua; Zhao, Hu-Cheng

    2014-08-28

    The mechanical property of extracellular matrix and cell-supporting substrates is known to modulate neuronal growth, differentiation, extension and branching. Here we show that substrate stiffness is an important microenvironmental cue, to which mouse hippocampal neurons respond and integrate into synapse formation and transmission in cultured neuronal network. Hippocampal neurons were cultured on polydimethylsiloxane substrates fabricated to have similar surface properties but a 10-fold difference in Young's modulus. Voltage-gated Ca(2+) channel currents determined by patch-clamp recording were greater in neurons on stiff substrates than on soft substrates. Ca(2+) oscillations in cultured neuronal network monitored using time-lapse single cell imaging increased in both amplitude and frequency among neurons on stiff substrates. Consistently, synaptic connectivity recorded by paired recording was enhanced between neurons on stiff substrates. Furthermore, spontaneous excitatory postsynaptic activity became greater and more frequent in neurons on stiff substrates. Evoked excitatory transmitter release and excitatory postsynaptic currents also were heightened at synapses between neurons on stiff substrates. Taken together, our results provide compelling evidence to show that substrate stiffness is an important biophysical factor modulating synapse connectivity and transmission in cultured hippocampal neuronal network. Such information is useful in designing instructive scaffolds or supporting substrates for neural tissue engineering.

  18. Real-time neuronal networks reconstruction using hierarchical systolic arrays.

    PubMed

    Yu, Bo; Mak, Terrence; Sun, Yihe; Poon, Chi-Sang

    2011-01-01

    The correlation network of neurons is emerging as an important mathematical framework for a spectrum of applications including neural modeling, brain disease prediction and brain-machine interfaces. However, construction of the correlation network is computationally expensive, especially when the number of neurons is large, and this prohibits real-time applications. This paper proposes a hardware architecture using hierarchical systolic arrays to reconstruct the correlation network. Through mapping an efficient algorithm for cross-correlation onto a massively parallel structure, the hardware can accomplish the network construction with extremely small delay. The proposed structure is evaluated using a Field Programmable Gate Array (FPGA). Results show that our method is three orders of magnitude faster than a software approach using a desktop computer. This new method enables real-time network construction and leads to future novel devices for real-time neuronal network monitoring and rehabilitation.

  19. Dynamical estimation of neuron and network properties III: network analysis using neuron spike times.

    PubMed

    Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2014-06-01

    Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.

  20. Synaptic connectivity in engineered neuronal networks.

    PubMed

    Molnar, Peter; Kang, Jung-Fong; Bhargava, Neelima; Das, Mainak; Hickman, James J

    2014-01-01

    We have developed a method to organize cells in dissociated cultures using engineered chemical cues on a culture surface and determined their connectivity patterns. Although almost all elements of the synaptic transmission machinery can be studied separately in single-cell models in dissociated cultures, the complex physiological interactions between these elements are usually lost. Thus, factors affecting synaptic transmission are generally studied in organotypic cultures, brain slices, or in vivo, where the cellular architecture remains largely intact. However, by utilizing engineered neuronal networks, complex phenomena such as synaptic transmission or synaptic plasticity can be studied in a simple, functional, cell culture-based system. We have utilized self-assembled monolayers and photolithography to create the surface templates. Embryonic hippocampal cells, plated on the resultant patterns in serum-free medium, followed the surface cues and formed the engineered neuronal networks. Basic whole-cell patch-clamp electrophysiology was applied to characterize the synaptic connectivity in these engineered two-cell networks. The same technology has been used to pattern other cell types such as cardiomyocytes or skeletal muscle fibers.

  1. Cellular neuron and large wireless neural network

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Forrester, Thomas; Ambrose, Barry; Kazantzidis, Matheos; Lin, Freddie

    2006-05-01

    A new approach to neural networks is proposed, based on wireless interconnects (synapses) and cellular neurons, both software and hardware, with a capacity of 10^10 neurons, almost fully connected. The core of the system is a Spatio-Temporal-Variant (STV) kernel and a cellular axon with synaptic plasticity variable in time and space. The novel large neural network hardware is based on two established wireless technologies: RF-cellular and IR-wireless.

  2. Associative memory in phasing neuron networks

    SciTech Connect

    Nair, Niketh S; Bochove, Erik J.; Braiman, Yehuda

    2014-01-01

    We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose neural networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule in which spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.

  3. Structural Properties of the Caenorhabditis elegans Neuronal Network

    PubMed Central

    Varshney, Lav R.; Chen, Beth L.; Paniagua, Eric; Hall, David H.; Chklovskii, Dmitri B.

    2011-01-01

    Despite recent interest in reconstructing neuronal networks, complete wiring diagrams on the level of individual synapses remain scarce and the insights into function they can provide remain unclear. Even for Caenorhabditis elegans, whose neuronal network is relatively small and stereotypical from animal to animal, published wiring diagrams are neither accurate nor complete and self-consistent. Using materials from White et al. and new electron micrographs we assemble whole, self-consistent gap junction and chemical synapse networks of hermaphrodite C. elegans. We propose a method to visualize the wiring diagram, which reflects network signal flow. We calculate statistical and topological properties of the network, such as degree distributions, synaptic multiplicities, and small-world properties, that help in understanding network signal propagation. We identify neurons that may play central roles in information processing, and network motifs that could serve as functional modules of the network. We explore propagation of neuronal activity in response to sensory or artificial stimulation using linear systems theory and find several activity patterns that could serve as substrates of previously described behaviors. Finally, we analyze the interaction between the gap junction and the chemical synapse networks. Since several statistical properties of the C. elegans network, such as multiplicity and motif distributions are similar to those found in mammalian neocortex, they likely point to general principles of neuronal networks. The wiring diagram reported here can help in understanding the mechanistic basis of behavior by generating predictions about future experiments involving genetic perturbations, laser ablations, or monitoring propagation of neuronal activity in response to stimulation. PMID:21304930

  4. Sloppiness in spontaneously active neuronal networks.

    PubMed

    Panas, Dagmara; Amin, Hayder; Maccione, Alessandro; Muthmann, Oliver; van Rossum, Mark; Berdondini, Luca; Hennig, Matthias H

    2015-06-03

    Various plasticity mechanisms, including experience-dependent, spontaneous, as well as homeostatic ones, continuously remodel neural circuits. Yet, despite fluctuations in the properties of single neurons and synapses, the behavior and function of neuronal assemblies are generally found to be very stable over time. This raises the important question of how plasticity is coordinated across the network. To address this, we investigated the stability of network activity in cultured rat hippocampal neurons recorded with high-density multielectrode arrays over several days. We used parametric models to characterize multineuron activity patterns and analyzed their sensitivity to changes. We found that the models exhibited sloppiness, a property where the model behavior is insensitive to changes in many parameter combinations, but very sensitive to a few. The activity of neurons with sloppy parameters showed faster and larger fluctuations than the activity of a small subset of neurons associated with sensitive parameters. Furthermore, parameter sensitivity was highly correlated with firing rates. Finally, we tested our observations from cell cultures on an in vivo recording from monkey visual cortex and confirmed that spontaneous cortical activity also shows hallmarks of sloppy behavior and firing rate dependence. Our findings suggest that a small subnetwork of highly active and stable neurons supports group stability, and that this endows neuronal networks with the flexibility to continuously remodel without compromising stability and function.

  5. Synchrony and Control of Neuronal Networks.

    NASA Astrophysics Data System (ADS)

    Schiff, Steven

    2001-03-01

    Cooperative behavior in the brain stems from the nature and strength of the interactions between neurons within a networked ensemble. Normal network activity takes place in a state of partial synchrony between neurons, and some pathological behaviors, such as epilepsy and tremor, appear to share a common feature of increased interaction strength. We have focused on the parallel paths of both detecting and characterizing the nonlinear synchronization present within neuronal networks, and employing feedback control methodology using electrical fields to modulate that neuronal activity. From a theoretical perspective, we see evidence for nonlinear generalized synchrony in networks of neurons that linear techniques are incapable of detecting (PRE 54: 6708, 1996), and we have described a decoherence transition between asymmetric nonlinear systems that is experimentally observable (PRL 84: 1689, 2000). In addition, we have seen evidence for unstable dimension variability in real neuronal systems that indicates certain physical limits of modelability when observing such systems (PRL 85, 2490, 2000). From an experimental perspective, we have achieved success in modulating epileptic seizures in neuronal networks using electrical fields. Extracellular neuronal activity is continuously recorded during field application through differential extracellular recording techniques, and the applied electric field strength is continuously updated using a computer controlled proportional feedback algorithm. This approach appears capable of sustained amelioration of seizure events when used with negative feedback. In negative feedback mode, such findings may offer a novel technology for seizure control. In positive feedback mode, adaptively applied electric fields may offer a more physiological means for neural modulation for prosthetic purposes than previously possible (J. Neuroscience, 2001).
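
    A minimal sketch of the proportional feedback idea described above is given below: the measured activity is compared with a target level and the applied field is set in proportion to the deviation. The toy activity model, gain, and target are illustrative assumptions, not the experimental controller; reversing the sign of the gain corresponds to the positive-feedback mode.

```python
# Toy proportional feedback loop: the applied field is proportional to the
# deviation of measured activity from a target (negative feedback).
def proportional_controller(measured_activity, target_activity, gain):
    """Field proportional to the deviation from the target activity."""
    return -gain * (measured_activity - target_activity)

# Hypothetical first-order activity model that relaxes toward a baseline of
# 1.0 plus the applied field; under negative feedback the elevated activity
# is pulled back toward the target.
activity = 5.0
for step in range(15):
    field = proportional_controller(activity, target_activity=1.0, gain=0.3)
    activity = 0.8 * activity + 0.2 * (1.0 + field)
    print(f"step {step:2d}: field {field:+.2f}, activity {activity:.2f}")
```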

  6. Selective adaptation in networks of cortical neurons.

    PubMed

    Eytan, Danny; Brenner, Naama; Marom, Shimon

    2003-10-15

    A key property of neural systems is their ability to adapt selectively to stimuli with different features. Using multisite electrical recordings from networks of cortical neurons developing ex vivo, we show that neurons adapt selectively to different stimuli invading the network. We focus on selective adaptation to frequent and rare stimuli; networks were stimulated at two sites with two different stimulus frequencies. When both stimuli were presented within the same period, neurons in the network attenuated their responsiveness to the more frequent input, whereas their responsiveness to the rarely delivered stimuli showed a marked average increase. The amplification of the response to rare stimuli required the presence of the other, more frequent stimulation source. By contrast, the decreased response to the frequent stimuli occurred regardless of the presence of the rare stimuli. Analysis of the response of single units suggests that both of these effects are caused by changes in synaptic transmission. By using synaptic blockers, we find that the increased responsiveness to the rarely stimulated site depends specifically on fast GABAergic transmission. Thus, excitatory synaptic depression, the inhibitory sub-network, and their balance play an active role in generating selective gain control. The observation that selective adaptation arises naturally in a network of cortical neurons developing ex vivo indicates that this is an inherent feature of spontaneously organizing cortical networks.

  7. Stability of Neuronal Networks with Homeostatic Regulation

    PubMed Central

    Harnack, Daniel; Pelko, Miha; Chaillet, Antoine; Chitour, Yacine; van Rossum, Mark C.W.

    2015-01-01

    Neurons are equipped with homeostatic mechanisms that counteract long-term perturbations of their average activity and thereby keep neurons in a healthy and information-rich operating regime. While homeostasis is believed to be crucial for neural function, a systematic analysis of homeostatic control has largely been lacking. Here we analyze the conditions necessary for stable homeostatic control. We consider networks of neurons with homeostasis and show that homeostatic control that is stable for single neurons can destabilize activity in otherwise stable recurrent networks, leading to strong non-abating oscillations in the activity. This instability can be prevented by slowing down the homeostatic control. The stronger the network recurrence, the slower the homeostasis has to be. Next, we consider how non-linearities in the neural activation function affect these constraints. Finally, we consider the case in which homeostatic feedback is mediated via a cascade of multiple intermediate stages. Counter-intuitively, the addition of extra stages in the homeostatic control loop further destabilizes activity in single neurons and networks. Our theoretical framework for homeostasis thus reveals previously unconsidered constraints on homeostasis in biological networks, and identifies conditions that require the slow time constants of homeostatic regulation observed experimentally. PMID:26154297

  8. Neuronal networks and energy bursts in epilepsy.

    PubMed

    Wu, Y; Liu, D; Song, Z

    2015-02-26

    Epilepsy can be defined as the abnormal activity of neurons. The occurrence, propagation and termination of epileptic seizures rely on networks of neuronal cells that are connected through both synaptic and non-synaptic interactions. These complicated interactions involve the modified functions of normal neurons and glia as well as the mediation of excitatory and inhibitory mechanisms with feedback homeostasis. Numerous spread patterns are detected in disparate networks of ictal activity. The cortical-thalamic-cortical loop is present during a generalized spike-wave seizure. The thalamic reticular nucleus (nRT) is the major inhibitory input traversing the region, and the dentate gyrus (DG) controls CA3 excitability. The imbalance between γ-aminobutyric acid (GABA)-ergic inhibition and glutamatergic excitation is the main disorder in epilepsy. Adjustable negative feedback that mediates both inhibitory and excitatory components affects neuronal networks through neurotransmission fluctuation, receptor and transmitter signaling, and through concomitant influences on ion concentrations and field effects. Within a limited dynamic range, neurons slowly adapt to input levels and have a high sensitivity to synaptic changes. The stability of the adapting network depends on the ratio of the adaptation rates of the excitatory and inhibitory populations. Thus, therapeutic strategies with multiple effects on seizures are required for the treatment of epilepsy, and their effects on networks are reviewed here. Based on the high-energy burst theory of epileptic activity, we propose a potential antiepileptic therapeutic strategy to transfer the high energy and extra electricity out of the foci.

  9. Protein phosphorylation networks in motor neuron death.

    PubMed

    Hu, Jie Hong; Krieger, Charles

    2002-01-01

    The disorder amyotrophic lateral sclerosis (ALS) is characterized by the death of specific groups of neurons, especially motor neurons, which innervate skeletal muscle, and neurons connecting the cerebral cortex with motor neurons, such as corticospinal tract neurons. There have been numerous attempts to elucidate why there is selective involvement of motor neurons in ALS. Recent observations have demonstrated altered activities and protein levels of diverse kinases in the brain and spinal cord of transgenic mice that overexpress a mutant superoxide dismutase (mSOD) gene that is found in patients with the familial form of ALS, as well as in patients who have died with ALS. These results suggest that the alteration of protein phosphorylation may be involved in the pathogenesis of ALS. The changes in protein kinase and phosphatase expression and activity can affect the activation of important neuronal neurotransmitter receptors such as NMDA receptors or other signaling proteins and can trigger, or modify, the process producing neuronal loss in ALS. These various kinases, phosphatases and signaling proteins are involved in many signaling pathways; however, they have close interactions with each other. Therefore, an understanding of the role of protein kinases and protein phosphatases and the molecular organization of protein phosphorylation networks are useful to determine the mechanisms of selective motor neuron death.

  10. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a manner similar to the change of weights in the synapse elements. In this manner, training time is decreased by as much as three orders of magnitude.

  11. Attractor dynamics in local neuronal networks

    PubMed Central

    Thivierge, Jean-Philippe; Comas, Rosa; Longtin, André

    2014-01-01

    Patterns of synaptic connectivity in various regions of the brain are characterized by the presence of synaptic motifs, defined as unidirectional and bidirectional synaptic contacts that follow a particular configuration and link together small groups of neurons. Recent computational work proposes that a relay network (two populations communicating via a third, relay population of neurons) can generate precise patterns of neural synchronization. Here, we employ two distinct models of neuronal dynamics and show that simulated neural circuits designed in this way are caught in a global attractor of activity that prevents neurons from modulating their response on the basis of incoming stimuli. To circumvent the emergence of a fixed global attractor, we propose a mechanism of selective gain inhibition that promotes flexible responses to external stimuli. We suggest that local neuronal circuits may employ this mechanism to generate precise patterns of neural synchronization whose transient nature delimits the occurrence of a brief stimulus. PMID:24688457

  12. On the Dynamics of Random Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Robert, Philippe; Touboul, Jonathan

    2016-11-01

    We study the mean-field limit and stationary distributions of a pulse-coupled network modeling the dynamics of large neuronal assemblies. Our model explicitly takes into account the intrinsic randomness of firing times, in contrast with the classical integrate-and-fire model. The ergodicity properties of the Markov process associated with finite networks are investigated. We derive the large-network-size limit of the distribution of the state of a neuron and characterize the invariant distributions as well as their stability properties. We show that the system undergoes transitions as a function of the averaged connectivity parameter and can support trivial states (where the network activity dies out, which is also the unique stationary state of finite networks in some cases) and self-sustained activity when the connectivity level is sufficiently large, both being possibly stable.

  13. Towards Reproducible Descriptions of Neuronal Network Models

    PubMed Central

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  14. Towards reproducible descriptions of neuronal network models.

    PubMed

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-08-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  15. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
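
    The inference the network is described as approximating is, in conventional terms, sampling-based (particle-filter-style) posterior inference for a hidden Markov model. The generic, non-neural sketch below is included for orientation only; the transition matrix, Poisson emission rates, and particle count are illustrative assumptions rather than values from the paper.

```python
# Generic particle filter for a discrete hidden Markov model with Poisson
# observations; illustrates the computation, not the authors' spiking network.
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
n_states, n_particles, T = 3, 500, 50
A = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])        # hidden-state transition matrix
rates = np.array([2.0, 5.0, 9.0])          # Poisson observation rate per state

# simulate a hidden trajectory and its noisy observations
states = [0]
for _ in range(T - 1):
    states.append(rng.choice(n_states, p=A[states[-1]]))
obs = rng.poisson(rates[states])

# propagate particles through the transition model, then resample them in
# proportion to the likelihood of the current observation
particles = rng.integers(0, n_states, n_particles)
for y in obs:
    particles = np.array([rng.choice(n_states, p=A[s]) for s in particles])
    w = rates[particles] ** y * np.exp(-rates[particles]) / factorial(int(y))
    particles = rng.choice(particles, size=n_particles, p=w / w.sum())

# empirical posterior over hidden states at the final time step
print(np.bincount(particles, minlength=n_states) / n_particles)
```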

  16. Neuronal network analyses: premises, promises and uncertainties

    PubMed Central

    Parker, David

    2010-01-01

    Neuronal networks assemble the cellular components needed for sensory, motor and cognitive functions. Any rational intervention in the nervous system will thus require an understanding of network function. Obtaining this understanding is widely considered to be one of the major tasks facing neuroscience today. Network analyses have been performed for some years in relatively simple systems. In addition to the direct insights these systems have provided, they also illustrate some of the difficulties of understanding network function. Nevertheless, in more complex systems (including human), claims are made that the cellular bases of behaviour are, or will shortly be, understood. While the discussion is necessarily limited, this issue will examine these claims and highlight some traditional and novel aspects of network analyses and their difficulties. This introduction discusses the criteria that need to be satisfied for network understanding, and how they relate to traditional and novel approaches being applied to addressing network function. PMID:20603354

  17. Inhibition Controls Asynchronous States of Neuronal Networks

    PubMed Central

    Treviño, Mario

    2016-01-01

    Computations in cortical circuits require action potentials from excitatory and inhibitory neurons. In this mini-review, I first provide a quick overview of findings that indicate that GABAergic neurons play a fundamental role in coordinating spikes and generating synchronized network activity. Next, I argue that these observations helped popularize the notion that network oscillations require a high degree of spike correlations among interneurons which, in turn, produce synchronous inhibition of the local microcircuit. The aim of this text is to discuss some recent experimental and computational findings that support a complementary view: one in which interneurons participate actively in producing asynchronous states in cortical networks. This requires a proper mixture of shared excitation and inhibition leading to asynchronous activity between neighboring cells. Such contribution from interneurons would be extremely important because it would tend to reduce the spike correlation between neighboring pyramidal cells, a drop in redundancy that could enhance the information-processing capacity of neural networks. PMID:27274721

  18. Complexities and uncertainties of neuronal network function

    PubMed Central

    Parker, David

    2005-01-01

    The nervous system generates behaviours through the activity in groups of neurons assembled into networks. Understanding these networks is thus essential to our understanding of nervous system function. Understanding a network requires information on its component cells, their interactions and their functional properties. Few networks come close to providing complete information on these aspects. However, even if complete information were available it would still only provide limited insight into network function. This is because the functional and structural properties of a network are not fixed but are plastic and can change over time. The number of interacting network components, their (variable) functional properties, and various plasticity mechanisms endows networks with considerable flexibility, but these features inevitably complicate network analyses. This review will initially discuss the general approaches and problems of network analyses. It will then examine the success of these analyses in a model spinal cord locomotor network in the lamprey, to determine to what extent in this relatively simple vertebrate system it is possible to claim detailed understanding of network function and plasticity. PMID:16553310

  19. Label-Free Characterization of Emerging Human Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Mir, Mustafa; Kim, Taewoo; Majumder, Anirban; Xiang, Mike; Wang, Ru; Liu, S. Chris; Gillette, Martha U.; Stice, Steven; Popescu, Gabriel

    2014-03-01

    The emergent self-organization of a neuronal network in a developing nervous system is the result of a remarkably orchestrated process involving a multitude of chemical, mechanical and electrical signals. Little is known about the dynamic behavior of a developing network (especially in a human model) primarily due to a lack of practical and non-invasive methods to measure and quantify the process. Here we demonstrate that by using a novel optical interferometric technique, we can non-invasively measure several fundamental properties of neural networks from the sub-cellular to the cell population level. We applied this method to quantify network formation in human stem cell derived neurons and show for the first time, correlations between trends in the growth, transport, and spatial organization of such a system. Quantifying the fundamental behavior of such cell lines without compromising their viability may provide an important new tool in future longitudinal studies.

  20. Label-Free Characterization of Emerging Human Neuronal Networks

    PubMed Central

    Mir, Mustafa; Kim, Taewoo; Majumder, Anirban; Xiang, Mike; Wang, Ru; Liu, S. Chris; Gillette, Martha U.; Stice, Steven; Popescu, Gabriel

    2014-01-01

    The emergent self-organization of a neuronal network in a developing nervous system is the result of a remarkably orchestrated process involving a multitude of chemical, mechanical and electrical signals. Little is known about the dynamic behavior of a developing network (especially in a human model) primarily due to a lack of practical and non-invasive methods to measure and quantify the process. Here we demonstrate that by using a novel optical interferometric technique, we can non-invasively measure several fundamental properties of neural networks from the sub-cellular to the cell population level. We applied this method to quantify network formation in human stem cell derived neurons and show for the first time, correlations between trends in the growth, transport, and spatial organization of such a system. Quantifying the fundamental behavior of such cell lines without compromising their viability may provide an important new tool in future longitudinal studies. PMID:24658536

  1. Neuronal network plasticity and recovery from depression.

    PubMed

    Castrén, Eero

    2013-09-01

    The brain processes sensory information in neuronal networks that are shaped by experience, particularly during early life, to optimally represent the internal and external milieu. Recent surprising findings have revealed that antidepressant drugs reactivate a window of juvenile-like plasticity in the adult cortex. When antidepressant-induced plasticity was combined with appropriate rehabilitation, it brought about a functional recovery of abnormally wired neuronal networks. These observations suggest that antidepressants act permissively to facilitate environmental influence on neuronal network reorganization and so provide a plausible neurobiological explanation for the enhanced effect of combining antidepressant treatment with psychotherapy. The results emphasize that pharmacological and psychological treatments of mood disorders are closely entwined: the effect of antidepressant-induced plasticity is facilitated by rehabilitation, such as psychotherapy, that guides the plastic networks, and psychotherapy benefits from the enhanced plasticity provided by the drug treatment. Optimized combinations of pharmacological and psychological treatments might help make best use of existing antidepressant drugs and reduce the number of treatment-resistant patients. The network hypothesis of antidepressant action presented here proposes that recovery from depression and related mood disorders is a gradual process that develops slowly and is facilitated by structured guidance and rehabilitation.

  2. Patterned Neuronal Networks for Robotics, Neurocomputing, Toxin Detection and Rehabilitation

    DTIC Science & Technology

    2004-12-01

    PATTERNED NEURONAL NETWORKS FOR ROBOTICS, NEUROCOMPUTING, TOXIN DETECTION AND REHABILITATION. Jung F. Kang*, Matt Poeta, Lisa Riedel, Mainak Das... systems work, how neuronal networks can process information and how basic physiological control circuits function; 2) exploring the possibilities for... to engineer neuronal networks. Surface chemistry utilizing self-assembled monolayers (SAMs) (Laibinis, Hickman et al. 1989) is an excellent

  3. How Structure Determines Correlations in Neuronal Networks

    PubMed Central

    Pernice, Volker; Staude, Benjamin; Cardanobile, Stefano; Rotter, Stefan

    2011-01-01

    Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been the subject of intense research in various fields is the correlation between the noisy activity of nodes in a network. We consider a class of systems where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance-dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks. PMID:21625580
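
    The power-series idea can be illustrated with a generic linear-response calculation: for linearly interacting units with connectivity matrix W, long-window count covariances take the form (I - W)^-1 D (I - W)^-T, whose expansion in powers of W collects contributions from direct links, common inputs, and longer chains. The sketch below is a schematic of this general relation, not the paper's exact expressions; the connectivity, rates, and truncation orders are made up.

```python
# Schematic linear-response covariance and its truncated power series in W.
import numpy as np

rng = np.random.default_rng(2)
n = 50
W = (rng.random((n, n)) < 0.1) * 0.04     # sparse, weak coupling (spectral radius < 1)
np.fill_diagonal(W, 0.0)
D = np.diag(rng.uniform(1.0, 5.0, n))     # baseline (uncoupled) variances

I = np.eye(n)
B = np.linalg.inv(I - W)
C_exact = B @ D @ B.T                     # full linear-response covariance

def series_cov(order):
    """Approximate B = I + W + W^2 + ... up to the given order."""
    Bk = sum(np.linalg.matrix_power(W, k) for k in range(order + 1))
    return Bk @ D @ Bk.T

for order in (1, 2, 4):
    err = np.abs(series_cov(order) - C_exact).max()
    print(f"order {order}: max abs deviation {err:.2e}")  # indirect paths enter at higher order
```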

  4. Combined topographical and chemical micropatterns for templating neuronal networks.

    PubMed

    Zhang, Jiayi; Venkataramani, Sowmya; Xu, Heng; Song, Yoon-Kyu; Song, Hyun-Kon; Palmore, G Tayhas R; Fallon, Justin; Nurmikko, Arto V

    2006-11-01

    In vitro neuronal networks with geometrically defined features are desirable for studying long-term electrical activity within the neuron assembly and for interfacing with external microelectronic circuits. In standard cultures, the random spatial distribution and overlap of neurites make this aim difficult; hence, many recent efforts have been directed toward creating patterned cellular circuits. Here, we present a novel method for creating a planar neural network that is compatible with optical devices. This method combines both topographical and chemical micropatterns onto which neurons can be cultured. Compared to other reported patterning techniques, our approach and choice of template appear to show both geometrical control over the formation of specific neurite connections at low plating density and compatibility with microelectronic circuits that stimulate and record neural activity.

  5. Associative memory in networks of spiking neurons.

    PubMed

    Sommer, F T; Wennekers, T

    2001-01-01

    Here, we develop and investigate a computational model of a network of cortical neurons on the basis of biophysically well-constrained and tested two-compartmental neurons developed by Pinsky and Rinzel [Pinsky, P. F., & Rinzel, J. (1994). Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. Journal of Computational Neuroscience, 1, 39-60]. To study associative memory, we connect a pool of cells by a structured connectivity matrix. The connection weights are shaped by simple Hebbian coincidence learning using a set of spatially sparse patterns. We study the neuronal activity processes following an external stimulation of a stored memory. In two series of simulation experiments, we explore the effect of different classes of external input, tonic and flashed stimulation. With tonic stimulation, the addressed memory is an attractor of the network dynamics. The memory is displayed rhythmically, coded by phase-locked bursts or regular spikes. The participating neurons have rhythmic activity in the gamma-frequency range (30-80 Hz). If the input is switched from one memory to another, the network activity can follow this change within one or two gamma cycles. Unlike similar models in the literature, we studied the range of high memory capacity (on the order of 0.1 bit/synapse), comparable to optimally tuned formal associative networks. We explored the robustness of efficient retrieval while varying the memory load, the excitation/inhibition parameters, and background activity. A stimulation pulse applied to the identical simulation network can push away ongoing network activity and trigger a phase-locked association event within one gamma period. Unlike under tonic stimulation, the memories are not attractors. After one association process, the network activity moves to other states. Applying pulses addressing different memories in close succession, one can switch through the space of memory patterns. The readout speed can be increased up to the

  6. Integrated microfluidic platforms for investigating neuronal networks

    NASA Astrophysics Data System (ADS)

    Kim, Hyung Joon

    (multielectrode array) or nanowire electrode array to study electrophysiology in neuronal networks. Also, "diode-like" microgrooves that control the number of neuronal processes are embedded in this platform. Chapter 6 concludes with a possible future direction of this work. Interfacing micro/nanotechnology with primary neuron culture would open many doors in fundamental neuroscience research and also biomedical innovation.

  7. Collective Dynamics for Heterogeneous Networks of Theta Neurons

    NASA Astrophysics Data System (ADS)

    Luke, Tanushree

    Collective behavior in neural networks has often been used as an indicator of communication between different brain areas. These collective synchronization and desynchronization patterns are also considered an important feature in understanding normal and abnormal brain function. To understand the emergence of these collective patterns, I create an analytic model that identifies all such macroscopic steady-states attainable by a network of Type-I neurons. This network, whose basic unit is the model "theta" neuron, contains a mixture of excitable and spiking neurons coupled via a smooth pulse-like synapse. Applying the Ott-Antonsen reduction method in the thermodynamic limit, I obtain a low-dimensional evolution equation that describes the asymptotic dynamics of the macroscopic mean field of the network. This model can be used as the basis in understanding more complicated neuronal networks when additional dynamical features are included. From this reduced dynamical equation for the mean field, I show that the network exhibits three collective attracting steady-states. The first two are equilibrium states that both reflect partial synchronization in the network, whereas the third is a limit cycle in which the degree of network synchronization oscillates in time. In addition to a comprehensive identification of all possible attracting macro-states, this analytic model permits a complete bifurcation analysis of the collective behavior of the network with respect to three key network features: the degree of excitability of the neurons, the heterogeneity of the population, and the overall coupling strength. The network typically tends towards the two macroscopic equilibrium states when the neuron's intrinsic dynamics and the network interactions reinforce each other. In contrast, the limit cycle state, bifurcations, and multistability tend to occur when there is competition between these network features. I also outline here an extension of the above model where the
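
    The theta neuron named as the network's basic unit obeys the standard form d(theta)/dt = (1 - cos theta) + (1 + cos theta)(eta + I); the minimal pulse-coupled simulation below is a generic illustration of such a network, with the pulse shape, coupling strength, and heterogeneity chosen arbitrarily rather than taken from the study.

```python
# Pulse-coupled network of theta neurons (generic illustration).
import numpy as np

rng = np.random.default_rng(3)
N, dt, steps = 200, 1e-3, 20000
k = 1.5                                        # overall coupling strength
eta = rng.standard_cauchy(N) * 0.05 - 0.02     # mix of excitable (<0) and spiking (>0) cells
theta = rng.uniform(-np.pi, np.pi, N)

for _ in range(steps):
    pulse = (1.0 - np.cos(theta)) ** 2         # smooth pulse, peaked at the firing phase
    I = k * pulse.mean()                       # mean-field synaptic drive
    theta = theta + dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * (eta + I))

# Kuramoto-style order parameter as a crude measure of the final synchrony level
print(f"|z| = {abs(np.exp(1j * theta).mean()):.3f}")
```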

  8. Estimated hydrogeological parameters by artificial neurons network

    NASA Astrophysics Data System (ADS)

    Lin, H.; Chen, C.; Tan, Y.; Ke, K.

    2009-12-01

    In recent years, many approaches have been developed that use an artificial neural network (ANN) model coupled with the Theis analytical solution to estimate the effective hydrological parameters of homogeneous, isotropic porous media, such as the Lin and Chen approach [Lin and Chen, 2006] (called the ANN approach hereafter) and the PC-ANN approach [Samani et al., 2008]. These methods assume a full superimposition of the type curve and the observed drawdown, and use the first time-drawdown data point as a match point to obtain a fine approximation of the effective parameters. However, using the first time-drawdown data or the early-time drawdown data is not always correct for estimating the hydrological parameters, especially for heterogeneous and anisotropic aquifers. Therefore, this paper corrects the concept of the superimposed plot by modifying the ANN approach and the PC-ANN approach, coupled with the Papadopoulos analytical solution, to estimate the transmissivities and storage coefficient of anisotropic, heterogeneous aquifers. The ANN model is trained with 4000 training sets of the well function, and tested with 1000 sets and 300 sets of synthetic time-drawdown data generated from homogeneous and heterogeneous parameters, respectively. In-situ observation data, the time-drawdown at station Shi-Chou of the Chihuahua River alluvial fan, Taiwan, are further adopted to test the applicability and reliability of the proposed methods and to compare them with the straight-line method and the type-curve method. Results suggest that both of the modified methods perform better than the original ones. Using late-time drawdown to optimize the effective parameters is shown to be better than using early-time drawdown. Additionally, results indicate that the modified ANN approach is better than the modified PC-ANN approach in terms of precision, while the efficiency of the modified PC-ANN approach is approximately three times better than that of the modified ANN approach.

  9. Hidden Neuronal Correlations in Cultured Networks

    NASA Astrophysics Data System (ADS)

    Segev, Ronen; Baruchi, Itay; Hulata, Eyal; Ben-Jacob, Eshel

    2004-03-01

    Utilization of a clustering algorithm on neuronal spatiotemporal correlation matrices recorded during spontaneous activity of in vitro networks revealed the existence of hidden correlations: the sequence of synchronized bursting events (SBEs) is composed of statistically distinguishable subgroups, each with its own distinct pattern of interneuron spatiotemporal correlations. These findings hint that each of the SBE subgroups can serve as a template for coding, storage, and retrieval of specific information.

  10. Spike Code Flow in Cultured Neuronal Networks.

    PubMed

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies over the electrode array to observe the flow of the codes "1101" and "1011," typical pseudorandom sequences that we often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggests that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves, as well as a means of evaluating information flow in the neuronal network.

  11. Micropatterning Facilitates the Long-Term Growth and Analysis of iPSC-Derived Individual Human Neurons and Neuronal Networks.

    PubMed

    Burbulla, Lena F; Beaumont, Kristin G; Mrksich, Milan; Krainc, Dimitri

    2016-08-01

    The discovery of induced pluripotent stem cells (iPSCs) and their application to patient-specific disease models offers new opportunities for studying the pathophysiology of neurological disorders. However, current methods for culturing iPSC-derived neuronal cells result in clustering of neurons, which precludes the analysis of individual neurons and defined neuronal networks. To address this challenge, cultures of human neurons on micropatterned surfaces are developed that promote neuronal survival over extended periods of time. This approach facilitates studies of neuronal development, cellular trafficking, and related mechanisms that require assessment of individual neurons and specific network connections. Importantly, micropatterns support the long-term stability of cultured neurons, which enables time-dependent analysis of cellular processes in living neurons. The approach described in this paper allows mechanistic studies of human neurons, both in terms of normal neuronal development and function, as well as time-dependent pathological processes, and provides a platform for testing of new therapeutics in neuropsychiatric disorders.

  12. Transition to Chaos in Random Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Kadmon, Jonathan; Sompolinsky, Haim

    2015-10-01

    Firing patterns in the central nervous system often exhibit strong temporal irregularity and considerable heterogeneity in time-averaged response properties. Previous studies suggested that these properties are the outcome of the intrinsic chaotic dynamics of the neural circuits. Indeed, simplified rate-based neuronal networks with synaptic connections drawn from Gaussian distribution and sigmoidal nonlinearity are known to exhibit chaotic dynamics when the synaptic gain (i.e., connection variance) is sufficiently large. In the limit of an infinitely large network, there is a sharp transition from a fixed point to chaos, as the synaptic gain reaches a critical value. Near the onset, chaotic fluctuations are slow, analogous to the ubiquitous, slow irregular fluctuations observed in the firing rates of many cortical circuits. However, the existence of a transition from a fixed point to chaos in neuronal circuit models with more realistic architectures and firing dynamics has not been established. In this work, we investigate rate-based dynamics of neuronal circuits composed of several subpopulations with randomly diluted connections. Nonzero connections are either positive for excitatory neurons or negative for inhibitory ones, while single neuron output is strictly positive with output rates rising as a power law above threshold, in line with known constraints in many biological systems. Using dynamic mean field theory, we find the phase diagram depicting the regimes of stable fixed-point, unstable-dynamic, and chaotic-rate fluctuations. We focus on the latter and characterize the properties of systems near this transition. We show that dilute excitatory-inhibitory architectures exhibit the same onset to chaos as the single population with Gaussian connectivity. In these architectures, the large mean excitatory and inhibitory inputs dynamically balance each other, amplifying the effect of the residual fluctuations. Importantly, the existence of a transition to chaos
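
    The classical result referenced above can be reproduced with a few lines of simulation: in the rate model tau dx_i/dt = -x_i + sum_j J_ij tanh(x_j), with J_ij drawn i.i.d. Gaussian with variance g^2/N, the activity decays to a fixed point for g < 1 and becomes chaotic for g > 1. The sketch below simulates that standard Gaussian/sigmoidal model only; it is not the diluted excitatory-inhibitory architecture analyzed in the paper, and N, tau, dt, and the readout are illustrative choices.

```python
# Standard random rate network: fixed point below the critical gain, chaos above.
import numpy as np

def final_rate_spread(g, N=500, tau=1.0, dt=0.05, steps=3000, seed=4):
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))   # Gaussian couplings, variance g^2/N
    x = rng.normal(0.0, 0.5, N)
    for _ in range(steps):
        x = x + (dt / tau) * (-x + J @ np.tanh(x))
    return float(np.std(x))    # ~0 at the fixed point, order one in the chaotic regime

for g in (0.5, 0.9, 1.5, 2.0):
    print(f"g = {g}: spread of final rates = {final_rate_spread(g):.3f}")
```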

  13. Recording axonal conduction to evaluate the integration of pluripotent cell-derived neurons into a neuronal network.

    PubMed

    Shimba, Kenta; Sakai, Koji; Takayama, Yuzo; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-10-01

    Stem cell transplantation is a promising therapy to treat neurodegenerative disorders, and a number of in vitro models have been developed for studying interactions between grafted neurons and the host neuronal network to promote drug discovery. However, methods capable of evaluating the process by which stem cells integrate into the host neuronal network are lacking. In this study, we applied an axonal conduction-based analysis to a co-culture study of primary and differentiated neurons. Mouse cortical neurons and neuronal cells differentiated from P19 embryonal carcinoma cells, a model for early neural differentiation of pluripotent stem cells, were co-cultured in a microfabricated device. The somata of these cells were separated by the co-culture device, but their axons were able to elongate through microtunnels and then form synaptic contacts. Propagating action potentials were recorded from these axons by microelectrodes embedded at the bottom of the microtunnels and sorted into clusters representing individual axons. While the number of axons of cortical neurons increased until 14 days in vitro and then decreased, those of P19 neurons increased throughout the culture period. Network burst analysis showed that P19 neurons participated in approximately 80% of the bursting activity after 14 days in vitro. Interestingly, the axonal conduction delay of P19 neurons was significantly greater than that of cortical neurons, suggesting that there are some physiological differences in their axons. These results suggest that our method is feasible to evaluate the process by which stem cell-derived neurons integrate into a host neuronal network.

  14. Synchronization properties of heterogeneous neuronal networks with mixed excitability type.

    PubMed

    Leone, Michael J; Schurter, Brandon N; Letson, Benjamin; Booth, Victoria; Zochowski, Michal; Fink, Christian G

    2015-03-01

    We study the synchronization of neuronal networks with dynamical heterogeneity, showing that network structures with the same propensity for synchronization (as quantified by master stability function analysis) may develop dramatically different synchronization properties when heterogeneity is introduced with respect to neuronal excitability type. Specifically, we investigate networks composed of neurons with different types of phase response curves (PRCs), which characterize how oscillating neurons respond to excitatory perturbations. Neurons exhibiting type 1 PRC respond exclusively with phase advances, while neurons exhibiting type 2 PRC respond with either phase delays or phase advances, depending on when the perturbation occurs. We find that Watts-Strogatz small world networks transition to synchronization gradually as the proportion of type 2 neurons increases, whereas scale-free networks may transition gradually or rapidly, depending upon local correlations between node degree and excitability type. Random placement of type 2 neurons results in gradual transition to synchronization, whereas placement of type 2 neurons as hubs leads to a much more rapid transition, showing that type 2 hub cells easily "hijack" neuronal networks to synchronization. These results underscore the fact that the degree of synchronization observed in neuronal networks is determined by a complex interplay between network structure and the dynamical properties of individual neurons, indicating that efforts to recover structural connectivity from dynamical correlations must in general take both factors into account.
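
    The distinction between the two excitability classes can be pictured with canonical phase-response-curve shapes: a type 1 PRC is non-negative, so perturbations only advance the phase, whereas a type 2 PRC changes sign, producing delays early in the cycle and advances later. The forms below are generic textbook stand-ins, not the PRCs computed from the neuron models used in the paper.

```python
# Canonical PRC shapes for the two excitability types (illustrative only).
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 9)   # phase of the perturbation within the cycle
prc_type1 = 1.0 - np.cos(phi)            # >= 0 everywhere: phase advances only
prc_type2 = -np.sin(phi)                 # sign change: delays early, advances late

for p, z1, z2 in zip(phi, prc_type1, prc_type2):
    print(f"phase {p:4.2f}: type 1 {z1:+.2f}   type 2 {z2:+.2f}")
```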

  15. Emergent Functional Properties of Neuronal Networks with Controlled Topology

    PubMed Central

    Marconi, Emanuele; Nieus, Thierry; Maccione, Alessandro; Valente, Pierluigi; Simi, Alessandro; Messa, Mirko; Dante, Silvia; Baldelli, Pietro; Berdondini, Luca; Benfenati, Fabio

    2012-01-01

    The interplay between anatomical connectivity and dynamics in neural networks plays a key role in the functional properties of the brain and in the associated connectivity changes induced by neural diseases. However, a detailed experimental investigation of this interplay at both cellular and population scales in the living brain is limited by accessibility. Alternatively, to investigate the basic operational principles with morphological, electrophysiological and computational methods, the activity emerging from large in vitro networks of primary neurons organized with imposed topologies can be studied. Here, we validated the use of a new bio-printing approach, which effectively maintains the topology of hippocampal cultures in vitro and investigated, by patch-clamp and MEA electrophysiology, the emerging functional properties of these grid-confined networks. In spite of differences in the organization of physical connectivity, our bio-patterned grid networks retained the key properties of synaptic transmission, short-term plasticity and overall network activity with respect to random networks. Interestingly, the imposed grid topology resulted in a reinforcement of functional connections along orthogonal directions, shorter connectivity links and a greatly increased spiking probability in response to focal stimulation. These results clearly demonstrate that reliable functional studies can nowadays be performed on large neuronal networks in the presence of sustained changes in the physical network connectivity. PMID:22493706

  16. Hierarchical networks, power laws, and neuronal avalanches.

    PubMed

    Friedman, Eric J; Landsberg, Adam S

    2013-03-01

    We show that in networks with a hierarchical architecture, critical dynamical behaviors can emerge even when the underlying dynamical processes are not critical. This finding provides explicit insight into current studies of the brain's neuronal network showing power-law avalanches in neural recordings, and provides a theoretical justification of recent numerical findings. Our analysis shows how the hierarchical organization of a network can itself lead to power-law distributions of avalanche sizes and durations, scaling laws between anomalous exponents, and universal functions, even in the absence of self-organized criticality or critical points. This hierarchy-induced phenomenon is independent of, though can potentially operate in conjunction with, standard dynamical mechanisms for generating power laws.

  17. Irregular behavior in an excitatory-inhibitory neuronal network

    NASA Astrophysics Data System (ADS)

    Park, Choongseok; Terman, David

    2010-06-01

    Excitatory-inhibitory networks arise in many regions throughout the central nervous system and display complex spatiotemporal firing patterns. These neuronal activity patterns (of individual neurons and/or the whole network) are closely related to the functional status of the system and differ between normal and pathological states. For example, neurons within the basal ganglia, a group of subcortical nuclei that are responsible for the generation of movement, display a variety of dynamic behaviors such as correlated oscillatory activity and irregular, uncorrelated spiking. Neither the origins of these firing patterns nor the mechanisms that underlie the patterns are well understood. We consider a biophysical model of an excitatory-inhibitory network in the basal ganglia and explore how specific biophysical properties of the network contribute to the generation of irregular spiking. We use geometric dynamical systems and singular perturbation methods to systematically reduce the model to a simpler set of equations, which is suitable for analysis. The results specify the dependence on the strengths of synaptic connections and the intrinsic firing properties of the cells in the irregular regime when applied to the subthalamopallidal network of the basal ganglia.

  18. Results on a binding neuron model and their implications for modified hourglass model for neuronal network.

    PubMed

    Arunachalam, Viswanathan; Akhavan-Tabatabaei, Raha; Lopez, Cristina

    2013-01-01

    Classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. Vidybida (2008), in a refreshing departure, proposed models for binding neurons in which the trace of an input is remembered only for a finite, fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper explicitly develops several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
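
    A minimal Monte Carlo sketch of a binding neuron in the spirit described above: each input impulse is held in memory for a fixed time tau and then forgotten, and the neuron fires once a threshold number of impulses coexist. The threshold, memory time and Poisson input rate below are illustrative assumptions, not values from the paper.

```python
# Binding-neuron sketch: inputs are remembered for a fixed time tau; the
# neuron fires as soon as n0 impulses are simultaneously held in memory.
import numpy as np

rng = np.random.default_rng(1)
rate, tau, n0 = 50.0, 0.010, 3       # input rate (Hz), memory (s), threshold

def first_firing_time(t_max=10.0):
    """Time of the first output spike, or None if it does not occur by t_max."""
    t, stored = 0.0, []              # arrival times currently held in memory
    while t < t_max:
        t += rng.exponential(1.0 / rate)             # next Poisson input arrival
        stored = [s for s in stored if t - s < tau]  # drop forgotten inputs
        stored.append(t)
        if len(stored) >= n0:        # enough coincident inputs -> fire
            return t
    return None

samples = [first_firing_time() for _ in range(20000)]
times = np.array([s for s in samples if s is not None])
print(f"P(fire within 10 s) = {len(times) / len(samples):.3f}")
print(f"mean first-firing time = {times.mean() * 1e3:.1f} ms")
```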

  19. Method Accelerates Training Of Some Neural Networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.

    1992-01-01

    Three-layer networks are trained faster provided two conditions are satisfied: the numbers of neurons in the layers are such that the majority of the work is done in the synaptic connections between the input and hidden layers, and the number of neurons in the input layer is at least as great as the number of training pairs of input and output vectors. The method is based on a modified version of back-propagation.
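
    The following hedged sketch sets up a three-layer network that satisfies the two stated conditions (most weights between the input and hidden layers; at least as many input neurons as training pairs) and trains it with plain batch back-propagation; the specific acceleration scheme of the NASA brief is not reproduced.

```python
# Three-layer network with (i) the bulk of the weights between input and
# hidden layers and (ii) at least as many input units as training pairs,
# trained with ordinary batch back-propagation and a quadratic loss.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_in, n_hid, n_out = 8, 12, 20, 2   # n_in >= n_pairs; n_in*n_hid >> n_hid*n_out
X = rng.random((n_pairs, n_in))              # training inputs
T = rng.random((n_pairs, n_out))             # training targets

W1 = rng.normal(0, 0.5, (n_in, n_hid))       # input -> hidden weights (the bulk)
W2 = rng.normal(0, 0.5, (n_hid, n_out))      # hidden -> output weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5
for epoch in range(2000):
    H = sigmoid(X @ W1)                      # hidden activations
    Y = sigmoid(H @ W2)                      # network outputs
    err = Y - T
    # Back-propagate the error through the two weight layers.
    dY = err * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ dY / n_pairs
    W1 -= lr * X.T @ dH / n_pairs

print("final mean squared error:", float(np.mean(err ** 2)))
```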

  20. Computational properties of networks of synchronous groups of spiking neurons.

    PubMed

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
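
    The unit-to-group correspondence described above reduces to simple bookkeeping, sketched below with illustrative numbers: an artificial unit's activation is the fraction of synchronously firing neurons in its group, and an artificial weight is the product of interconnection density, presynaptic group size and postsynaptic potential height.

```python
# Bookkeeping sketch of the unit-to-group correspondence; all values illustrative.
from dataclasses import dataclass

@dataclass
class GroupConnection:
    density: float       # probability that a given pre/post neuron pair is connected
    group_size: int      # number of neurons in the presynaptic group
    psp_height: float    # postsynaptic potential height (arbitrary units)

    def ann_weight(self) -> float:
        """Equivalent artificial-network weight for this group-to-group link."""
        return self.density * self.group_size * self.psp_height

def unit_activation(n_active: int, group_size: int) -> float:
    """ANN activation = fraction of neurons firing synchronously in the group."""
    return n_active / group_size

conn = GroupConnection(density=0.1, group_size=200, psp_height=0.05)
print("equivalent ANN weight:", conn.ann_weight())        # 0.1 * 200 * 0.05 = 1.0
print("activation of a group with 120/200 neurons firing:",
      unit_activation(120, 200))
```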

  1. Long Term Synaptic Plasticity and Learning in Neuronal Networks.

    DTIC Science & Technology

    1987-09-14

    [Fragment of a scanned report documentation page.] Title: Long Term Synaptic Plasticity and Learning in Neuronal Networks. Cited presentation: "Analysis of Simple Neuronal Networks" (2nd Annual Symposium on Networks in Brain and Computer Architecture, North Texas State University, Denton, TX).

  2. Inferring network dynamics and neuron properties from population recordings.

    PubMed

    Linaro, Daniele; Storace, Marco; Mattia, Maurizio

    2011-01-01

    Understanding the computational capabilities of the nervous system means to "identify" its emergent multiscale dynamics. For this purpose, we propose a novel model-driven identification procedure and apply it to sparsely connected populations of excitatory integrate-and-fire neurons with spike frequency adaptation (SFA). Our method does not characterize the system from its microscopic elements in a bottom-up fashion, and does not resort to any linearization. We investigate networks as a whole, inferring their properties from the response dynamics of the instantaneous discharge rate to brief and aspecific supra-threshold stimulations. While several available methods assume generic expressions for the system as a black box, we adopt a mean-field theory for the evolution of the network transparently parameterized by identified elements (such as dynamic timescales), which are in turn non-trivially related to single-neuron properties. In particular, from the elicited transient responses, the input-output gain function of the neurons in the network is extracted and direct links to the microscopic level are made available: indeed, we show how to extract the decay time constant of the SFA, the absolute refractory period and the average synaptic efficacy. In addition and contrary to previous attempts, our method captures the system dynamics across bifurcations separating qualitatively different dynamical regimes. The robustness and the generality of the methodology is tested on controlled simulations, reporting a good agreement between theoretically expected and identified values. The assumptions behind the underlying theoretical framework make the method readily applicable to biological preparations like cultured neuron networks and in vitro brain slices.

  3. Detecting effective connectivity in networks of coupled neuronal oscillators.

    PubMed

    Boykin, Erin R; Khargonekar, Pramod P; Carney, Paul R; Ogle, William O; Talathi, Sachin S

    2012-06-01

    The application of data-driven time series analysis techniques such as Granger causality, partial directed coherence and phase dynamics modeling to estimate effective connectivity in brain networks has recently gained significant prominence in the neuroscience community. While these techniques have been useful in determining causal interactions among different regions of brain networks, a thorough analysis of the comparative accuracy and robustness of these methods in identifying patterns of effective connectivity among brain networks is still lacking. In this paper, we systematically address this issue within the context of simple networks of coupled spiking neurons. Specifically, we develop a method to assess the ability of various effective connectivity measures to accurately determine the true effective connectivity of a given neuronal network. Our method is based on decision tree classifiers which are trained using several time series features that can be observed solely from experimentally recorded data. We show that the classifiers constructed in this work provide a general framework for determining whether a particular effective connectivity measure is likely to produce incorrect results when applied to a dataset.
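
    A hedged sketch of the classifier idea: train a decision tree on features computable from recorded time series to predict whether a connectivity estimate is likely to be wrong. The feature set and labels below are hypothetical placeholders, not those used in the paper; only the scikit-learn calls are standard.

```python
# Decision-tree classifier over placeholder time-series features (e.g. mean
# rate, cross-correlation peak, signal-to-noise ratio, recording length),
# labelled by whether a connectivity measure failed on that dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_datasets = 400
X = rng.random((n_datasets, 4))                       # placeholder features
# Placeholder labels: 1 if the connectivity estimate was wrong on that dataset.
y = ((X[:, 2] < 0.3) ^ (rng.random(n_datasets) < 0.1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("feature importances:", clf.feature_importances_)
```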

  4. Critical behavior in networks of real neurons

    NASA Astrophysics Data System (ADS)

    Tkacik, Gasper

    2014-03-01

    The patterns of joint activity in a population of retinal ganglion cells encode the complete information about the visual world, and thus place limits on what could be learned about the environment by the brain. We analyze the recorded simultaneous activity of more than a hundred such neurons from an interacting population responding to naturalistic stimuli, at the single spike level, by constructing accurate maximum entropy models for the distribution of network activity states. This construction - essentially an "inverse spin glass" - reveals strong frustration in the pairwise couplings between the neurons that results in a rugged energy landscape with many local extrema; strong collective interactions in subgroups of neurons despite weak individual pairwise correlations; and a joint distribution of activity that has an extremely wide dynamic range characterized by a Zipf-like power law, strong deviations from "typicality," and a number of signatures of critical behavior. We hypothesize that this tuning to a critical operating point might be a dynamic property of the system and suggest experiments to test this hypothesis.
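
    A minimal "inverse Ising" sketch of the pairwise maximum entropy construction, fitting fields and couplings to the measured means and pairwise correlations of a small binary population by exact gradient ascent. Exact enumeration is only feasible for a handful of neurons; the recordings above (over a hundred cells) require approximate methods, and the surrogate data here are placeholders.

```python
# Pairwise maximum entropy fit: adjust fields h and couplings J so the model
# reproduces the data's mean activities and pairwise correlations.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 5000
data = (rng.random((T, N)) < 0.2).astype(float)       # surrogate binary spike words

states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
mean_d = data.mean(axis=0)                            # <s_i> from data
corr_d = data.T @ data / T                            # <s_i s_j> from data

h = np.zeros(N)
J = np.zeros((N, N))                                  # kept upper-triangular
for step in range(3000):                              # gradient ascent on log-likelihood
    E = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E); p /= p.sum()
    mean_m = p @ states                               # <s_i> under the model
    corr_m = states.T @ (states * p[:, None])         # <s_i s_j> under the model
    h += 0.1 * (mean_d - mean_m)
    J += 0.1 * np.triu(corr_d - corr_m, k=1)

print("max mismatch in means:", float(np.abs(mean_d - mean_m).max()))
print("max mismatch in pairwise terms:", float(np.abs(np.triu(corr_d - corr_m, 1)).max()))
```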

  5. Implementing Signature Neural Networks with Spiking Neurons

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  6. Implementing Signature Neural Networks with Spiking Neurons.

    PubMed

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  7. Negative dielectrophoretic force assisted construction of ordered neuronal networks on cell positioning bioelectronic chips.

    PubMed

    Yu, Zhe; Xiang, Guangxin; Pan, Liangbin; Huang, Lihua; Yu, Zhongyao; Xing, Wanli; Cheng, Jing

    2004-12-01

    Developing new methods and technologies in order to pattern neurons into regular networks is of utmost scientific interest in the field of neurological research. An efficient method here is developed for trapping neurons and constructing ordered neuronal networks on bioelectronic chips by using arrayed negative dielectrophoretic (DEP) forces. A special bioelectronic chip with well defined positioning electrode arrays was designed and fabricated on silicon substrate. When a high frequency AC signal was applied, the cell positioning bioelectronic chip (CPBC) is able to provide a well-defined non-uniform electric field, and thus generate negative DEP forces. The parameters, such as size of positioning electrode, conductivity of working solution, amplitude and frequency of power signal and cell concentration, were investigated to optimize the performance of the CPBC. When the neuron suspension was added onto the energized bioelectronic chip, the neurons were immediately trapped and quickly formed the predetermined pattern. Neurons may adhere and then be cultured directly on the CPBC, and show good neuron viability and neurite development. The formation of the ordered neuronal networks after two-week culture demonstrates that negative dielectrophoretic force assisted construction of ordered neuronal networks is effective, and it could be used to assist in monitoring functional activities of neuronal networks.

  8. Serotonin modulation of cortical neurons and networks.

    PubMed

    Celada, Pau; Puig, M Victoria; Artigas, Francesc

    2013-01-01

    The serotonergic pathways originating in the dorsal and median raphe nuclei (DR and MnR, respectively) are critically involved in cortical function. Serotonin (5-HT), acting on postsynaptic and presynaptic receptors, is involved in cognition, mood, impulse control and motor functions by (1) modulating the activity of different neuronal types, and (2) varying the release of other neurotransmitters, such as glutamate, GABA, acetylcholine and dopamine. Also, 5-HT seems to play an important role in cortical development. Of all cortical regions, the frontal lobe is the area most enriched in serotonergic axons and 5-HT receptors. 5-HT and selective receptor agonists modulate the excitability of cortical neurons and their discharge rate through the activation of several receptor subtypes, of which the 5-HT1A, 5-HT1B, 5-HT2A, and 5-HT3 subtypes play a major role. Little is known, however, on the role of other excitatory receptors moderately expressed in cortical areas, such as 5-HT2C, 5-HT4, 5-HT6, and 5-HT7. In vitro and in vivo studies suggest that 5-HT1A and 5-HT2A receptors are key players and exert opposite effects on the activity of pyramidal neurons in the medial prefrontal cortex (mPFC). The activation of 5-HT1A receptors in mPFC hyperpolarizes pyramidal neurons whereas that of 5-HT2A receptors results in neuronal depolarization, reduction of the afterhyperpolarization and increase of excitatory postsynaptic currents (EPSCs) and of discharge rate. 5-HT can also stimulate excitatory (5-HT2A and 5-HT3) and inhibitory (5-HT1A) receptors in GABA interneurons to modulate synaptic GABA inputs onto pyramidal neurons. Likewise, the pharmacological manipulation of various 5-HT receptors alters oscillatory activity in PFC, suggesting that 5-HT is also involved in the control of cortical network activity. A better understanding of the actions of 5-HT in PFC may help to develop treatments for mood and cognitive disorders associated with an abnormal function of the frontal lobe.

  9. Serotonin modulation of cortical neurons and networks

    PubMed Central

    Celada, Pau; Puig, M. Victoria; Artigas, Francesc

    2013-01-01

    The serotonergic pathways originating in the dorsal and median raphe nuclei (DR and MnR, respectively) are critically involved in cortical function. Serotonin (5-HT), acting on postsynaptic and presynaptic receptors, is involved in cognition, mood, impulse control and motor functions by (1) modulating the activity of different neuronal types, and (2) varying the release of other neurotransmitters, such as glutamate, GABA, acetylcholine and dopamine. Also, 5-HT seems to play an important role in cortical development. Of all cortical regions, the frontal lobe is the area most enriched in serotonergic axons and 5-HT receptors. 5-HT and selective receptor agonists modulate the excitability of cortical neurons and their discharge rate through the activation of several receptor subtypes, of which the 5-HT1A, 5-HT1B, 5-HT2A, and 5-HT3 subtypes play a major role. Little is known, however, on the role of other excitatory receptors moderately expressed in cortical areas, such as 5-HT2C, 5-HT4, 5-HT6, and 5-HT7. In vitro and in vivo studies suggest that 5-HT1A and 5-HT2A receptors are key players and exert opposite effects on the activity of pyramidal neurons in the medial prefrontal cortex (mPFC). The activation of 5-HT1A receptors in mPFC hyperpolarizes pyramidal neurons whereas that of 5-HT2A receptors results in neuronal depolarization, reduction of the afterhyperpolarization and increase of excitatory postsynaptic currents (EPSCs) and of discharge rate. 5-HT can also stimulate excitatory (5-HT2A and 5-HT3) and inhibitory (5-HT1A) receptors in GABA interneurons to modulate synaptic GABA inputs onto pyramidal neurons. Likewise, the pharmacological manipulation of various 5-HT receptors alters oscillatory activity in PFC, suggesting that 5-HT is also involved in the control of cortical network activity. A better understanding of the actions of 5-HT in PFC may help to develop treatments for mood and cognitive disorders associated with an abnormal function of the frontal lobe

  10. Integrated workflows for spiking neuronal network simulations.

    PubMed

    Antolík, Ján; Davison, Andrew P

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages.
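
    Mozaik builds on PyNN, so a rough sense of what it automates can be conveyed with a bare PyNN-style sketch of model, recording and run specification (assuming a PyNN 0.9-era interface and an installed NEST back-end; this is not Mozaik's own API, and call names may differ across versions).

```python
# Bare PyNN-style specification of a small excitatory/inhibitory model with
# Poisson drive and spike recording; Mozaik wraps this kind of specification,
# plus stimulation, analysis and visualization, into one declarative workflow.
import pyNN.nest as sim

sim.setup(timestep=0.1)                                   # ms

exc = sim.Population(80, sim.IF_cond_exp(), label="exc")
inh = sim.Population(20, sim.IF_cond_exp(), label="inh")
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=100.0), label="drive")

sim.Projection(noise, exc, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.05, delay=1.0))
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0))
sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0),
               receptor_type="inhibitory")

exc.record("spikes")                                      # analogous to a recording config
sim.run(1000.0)                                           # ms

spikes = exc.get_data().segments[0].spiketrains           # Neo data structures
print("mean rate (Hz):", sum(len(st) for st in spikes) / len(spikes))
sim.end()
```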

  11. Human embryonic stem cell-derived neuronal cells form spontaneously active neuronal networks in vitro.

    PubMed

    Heikkilä, Teemu J; Ylä-Outinen, Laura; Tanskanen, Jarno M A; Lappalainen, Riikka S; Skottman, Heli; Suuronen, Riitta; Mikkonen, Jarno E; Hyttinen, Jari A K; Narkilahti, Susanna

    2009-07-01

    The production of functional human embryonic stem cell (hESC)-derived neuronal cells is critical for the application of hESCs in treating neurodegenerative disorders. To study the potential functionality of hESC-derived neurons, we cultured and monitored the development of hESC-derived neuronal networks on microelectrode arrays. Immunocytochemical studies revealed that these networks were positive for the neuronal marker proteins beta-tubulin(III) and microtubule-associated protein 2 (MAP-2). The hESC-derived neuronal networks were spontaneously active and exhibited a multitude of electrical impulse firing patterns. Synchronous bursts of electrical activity similar to those reported for hippocampal neurons and rodent embryonic stem cell-derived neuronal networks were recorded from the differentiated cultures until up to 4 months. The dependence of the observed neuronal network activity on sodium ion channels was examined using tetrodotoxin (TTX). Antagonists for the glutamate receptors NMDA [D(-)-2-amino-5-phosphonopentanoic acid] and AMPA/kainate [6-cyano-7-nitroquinoxaline-2,3-dione], and for GABAA receptors [(-)-bicuculline methiodide] modulated the spontaneous electrical activity, indicating that pharmacologically susceptible neuronal networks with functional synapses had been generated. The findings indicate that hESC-derived neuronal cells can generate spontaneously active networks with synchronous communication in vitro, and are therefore suitable for use in developmental and drug screening studies, as well as for regenerative medicine.

  12. Synchronization in neuronal oscillator networks with input heterogeneity and arbitrary network structure

    NASA Astrophysics Data System (ADS)

    Davison, Elizabeth; Dey, Biswadip; Leonard, Naomi

    Mathematical studies of synchronization in networks of neuronal oscillators offer insight into neuronal ensemble behavior in the brain. Systematic means to understand how network structure and external input affect synchronization in network models have the potential to improve methods for treating synchronization-related neurological disorders such as epilepsy and Parkinson's disease. To elucidate the complex relationships between network structure, external input, and synchronization, we investigate synchronous firing patterns in arbitrary networks of neuronal oscillators coupled through gap junctions with heterogeneous external inputs. We first apply a passivity-based Lyapunov analysis to undirected networks of homogeneous FitzHugh-Nagumo (FN) oscillators with homogeneous inputs and derive a sufficient condition on coupling strength that guarantees complete synchronization. In biologically relevant regimes, we employ Gronwall's inequality to obtain a bound tighter than those previously reported. We extend both analyses to a homogeneous FN network with heterogeneous inputs and show how cluster synchronization emerges under conditions on the symmetry of the coupling matrix and external inputs. Our results can be generalized to any network of semi-passive oscillators.
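
    A numerical sketch in the spirit of the setup above: FitzHugh-Nagumo oscillators coupled diffusively (gap-junction-like) through a graph Laplacian, with mildly heterogeneous inputs, whose post-transient spread of membrane variables indicates how close the network is to complete synchronization. The coupling graph and parameter values are illustrative; the analytical sufficient conditions are not reproduced.

```python
# Diffusively coupled FitzHugh-Nagumo oscillators with heterogeneous inputs,
# integrated with forward Euler; the spread of v across nodes measures synchrony.
import numpy as np

rng = np.random.default_rng(0)
n, eps, a, b = 10, 0.08, 0.7, 0.8
I = 0.5 + 0.05 * rng.random(n)               # heterogeneous external inputs
A = (rng.random((n, n)) < 0.4).astype(float) # random undirected coupling graph
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
k = 0.5                                      # gap-junction coupling strength

v = rng.normal(0, 1, n)                      # membrane-potential-like variable
w = rng.normal(0, 1, n)                      # recovery variable
dt, steps = 0.01, 100_000
spread = []
for step in range(steps):
    dv = v - v**3 / 3 - w + I - k * (L @ v)  # diffusive coupling acts on v
    dw = eps * (v + a - b * w)
    v += dt * dv
    w += dt * dw
    if step > steps // 2:                    # measure after transients
        spread.append(v.max() - v.min())

print("mean post-transient spread of v across the network:", float(np.mean(spread)))
```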

  13. Communication through Resonance in Spiking Neuronal Networks

    PubMed Central

    Frégnac, Yves; Aertsen, Ad; Kumar, Arvind

    2014-01-01

    The cortex processes stimuli through a distributed network of specialized brain areas. This processing requires mechanisms that can route neuronal activity across weakly connected cortical regions. Routing models proposed thus far are either limited to propagation of spiking activity across strongly connected networks or require distinct mechanisms that create local oscillations and establish their coherence between distant cortical areas. Here, we propose a novel mechanism which explains how synchronous spiking activity propagates across weakly connected brain areas supported by oscillations. In our model, oscillatory activity unleashes network resonance that amplifies feeble synchronous signals and promotes their propagation along weak connections (“communication through resonance”). The emergence of coherent oscillations is a natural consequence of synchronous activity propagation and therefore the assumption of different mechanisms that create oscillations and provide coherence is not necessary. Moreover, the phase-locking of oscillations is a side effect of communication rather than its requirement. Finally, we show how the state of ongoing activity could affect the communication through resonance and propose that modulations of the ongoing activity state could influence information processing in distributed cortical networks. PMID:25165853

  14. Collective stochastic coherence in recurrent neuronal networks

    NASA Astrophysics Data System (ADS)

    Sancristóbal, Belén; Rebollo, Beatriz; Boada, Pol; Sanchez-Vives, Maria V.; Garcia-Ojalvo, Jordi

    2016-09-01

    Recurrent networks of dynamic elements frequently exhibit emergent collective oscillations, which can show substantial regularity even when the individual elements are considerably noisy. How noise-induced dynamics at the local level coexists with regular oscillations at the global level is still unclear. Here we show that a combination of stochastic recurrence-based initiation with deterministic refractoriness in an excitable network can reconcile these two features, leading to maximum collective coherence for an intermediate noise level. We report this behaviour in the slow oscillation regime exhibited by a cerebral cortex network under dynamical conditions resembling slow-wave sleep and anaesthesia. Computational analysis of a biologically realistic network model reveals that an intermediate level of background noise leads to quasi-regular dynamics. We verify this prediction experimentally in cortical slices subject to varying amounts of extracellular potassium, which modulates neuronal excitability and thus synaptic noise. The model also predicts that this effectively regular state should exhibit noise-induced memory of the spatial propagation profile of the collective oscillations, which is also verified experimentally. Taken together, these results allow us to construe the high regularity observed experimentally in the brain as an instance of collective stochastic coherence.

  15. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures.

    PubMed

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

    Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
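
    A minimal sketch of the spectral characterization used above: estimate the power spectral density of a population-averaged calcium fluorescence trace with Welch's method. The synthetic trace (sparse network bursts convolved with a slow calcium decay, plus noise) merely stands in for recorded data.

```python
# Welch power-spectral-density estimate of a surrogate calcium fluorescence trace.
import numpy as np
from scipy.signal import welch

fs = 20.0                                      # imaging frame rate (Hz), assumed
t = np.arange(0, 600.0, 1.0 / fs)              # 10 minutes of recording
rng = np.random.default_rng(0)

# Surrogate fluorescence: network bursts every ~5 s on average, convolved with
# a ~1.5 s calcium decay kernel, plus measurement noise.
bursts = (rng.random(t.size) < 0.2 / fs).astype(float)
kernel = np.exp(-np.arange(0, 5.0, 1.0 / fs) / 1.5)
trace = np.convolve(bursts, kernel)[: t.size] + 0.05 * rng.normal(size=t.size)

freqs, psd = welch(trace, fs=fs, nperseg=1024)
peak = freqs[np.argmax(psd[1:]) + 1]           # ignore the DC bin
print(f"frequency of peak power: {peak:.2f} Hz")
```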

  16. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures

    PubMed Central

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

    Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks. PMID:24385953

  17. Developmental time windows for axon growth influence neuronal network topology.

    PubMed

    Lim, Sol; Kaiser, Marcus

    2015-04-01

    Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on types of neurons and cortical layers. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately one neuron after another (serial, i.e., no overlaps in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: Neurons that started axon growth early on in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths while neurons demonstrated more homogeneous connectivity patterns for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks opening up the possibility to a posteriori estimate developmental mechanisms based on network properties of a developed network.

  18. Effect of Transcranial Magnetic Stimulation on Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Unsal, Ahmet; Hadimani, Ravi; Jiles, David

    2013-03-01

    The human brain contains around 100 billion nerve cells controlling our day-to-day activities. Consequently, brain disorders often result in impairments such as paralysis, loss of coordination and seizure. It has been said that 1 in 5 Americans suffer some diagnosable mental disorder. There is an urgent need to understand the disorders, prevent them and, if possible, develop permanent cures for them. As a result, a significant amount of research activity is being directed towards brain research. Transcranial Magnetic Stimulation (TMS) is a promising tool for diagnosing and treating brain disorders. It is a non-invasive treatment method that produces a current flow in the brain which excites the neurons. Even though TMS has been verified to have advantageous effects on various brain-related disorders, there have not been enough studies on the impact of TMS on cells. In this study, we are investigating the electrophysiological effects of TMS on one-dimensional neuronal cultures grown in a circular pathway. Electrical currents are induced in the neuronal networks depending on the directionality of the applied field. This aids in understanding how neuronal networks react under TMS treatment.

  19. Modelling small-patterned neuronal networks coupled to microelectrode arrays

    NASA Astrophysics Data System (ADS)

    Massobrio, Paolo; Martinoia, Sergio

    2008-09-01

    Cultured neurons coupled to planar substrates which exhibit 'well-defined' two-dimensional network architectures can provide valuable insights into cell-to-cell communication, network dynamics versus topology, and basic mechanisms of synaptic plasticity and learning. In the literature several approaches were presented to drive neuronal growth, such as surface modification by silane chemistry, photolithographic techniques, microcontact printing, microfluidic channel flow patterning, microdrop patterning, etc. This work presents a computational model fit for reproducing and explaining the dynamics exhibited by small-patterned neuronal networks coupled to microelectrode arrays (MEAs). The model is based on the concept of meta-neuron, i.e., a small spatially confined number of actual neurons which perform single macroscopic functions. Each meta-neuron is characterized by a detailed morphology, and the membrane channels are modelled by simple Hodgkin-Huxley and passive kinetics. The two main findings that emerge from the simulations can be summarized as follows: (i) the increasing complexity of meta-neuron morphology reflects the variations of the network dynamics as a function of network development; (ii) the dynamics displayed by the patterned neuronal networks considered can be explained by hypothesizing the presence of several short- and a few long-term distance interactions among small assemblies of neurons (i.e., meta-neurons).

  20. Modelling small-patterned neuronal networks coupled to microelectrode arrays.

    PubMed

    Massobrio, Paolo; Martinoia, Sergio

    2008-09-01

    Cultured neurons coupled to planar substrates which exhibit 'well-defined' two-dimensional network architectures can provide valuable insights into cell-to-cell communication, network dynamics versus topology, and basic mechanisms of synaptic plasticity and learning. In the literature several approaches were presented to drive neuronal growth, such as surface modification by silane chemistry, photolithographic techniques, microcontact printing, microfluidic channel flow patterning, microdrop patterning, etc. This work presents a computational model fit for reproducing and explaining the dynamics exhibited by small-patterned neuronal networks coupled to microelectrode arrays (MEAs). The model is based on the concept of meta-neuron, i.e., a small spatially confined number of actual neurons which perform single macroscopic functions. Each meta-neuron is characterized by a detailed morphology, and the membrane channels are modelled by simple Hodgkin-Huxley and passive kinetics. The two main findings that emerge from the simulations can be summarized as follows: (i) the increasing complexity of meta-neuron morphology reflects the variations of the network dynamics as a function of network development; (ii) the dynamics displayed by the patterned neuronal networks considered can be explained by hypothesizing the presence of several short- and a few long-term distance interactions among small assemblies of neurons (i.e., meta-neurons).

  1. Identifying Controlling Nodes in Neuronal Networks in Different Scales

    PubMed Central

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2012-01-01

    Recent studies have detected hubs in neuronal networks using degree, betweenness centrality, motif and synchronization and revealed the importance of hubs in their structural and functional roles. In addition, the analysis of complex networks at different scales is widely used in the physics community. This can provide detailed insights into the intrinsic properties of networks. In this study, we focus on the identification of controlling regions in cortical networks of cats’ brain at microscopic, mesoscopic and macroscopic scales, based on single-objective evolutionary computation methods. The problem is investigated by considering two measures of controllability separately. The impact of the number of driver nodes on controllability is revealed and the properties of controlling nodes are shown in a statistical way. Our results show that the statistical properties of the controlling nodes display a concave or convex shape with an increase of the allowed number of controlling nodes, revealing a transition in choosing driver nodes from the areas with a large degree to the areas with a low degree. Interestingly, the community Auditory in cats’ brain, which has sparse connections with other communities, plays an important role in controlling the neuronal networks. PMID:22848475

  2. Identifying controlling nodes in neuronal networks in different scales.

    PubMed

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2012-01-01

    Recent studies have detected hubs in neuronal networks using degree, betweenness centrality, motif and synchronization and revealed the importance of hubs in their structural and functional roles. In addition, the analysis of complex networks at different scales is widely used in the physics community. This can provide detailed insights into the intrinsic properties of networks. In this study, we focus on the identification of controlling regions in cortical networks of cats' brain at microscopic, mesoscopic and macroscopic scales, based on single-objective evolutionary computation methods. The problem is investigated by considering two measures of controllability separately. The impact of the number of driver nodes on controllability is revealed and the properties of controlling nodes are shown in a statistical way. Our results show that the statistical properties of the controlling nodes display a concave or convex shape with an increase of the allowed number of controlling nodes, revealing a transition in choosing driver nodes from the areas with a large degree to the areas with a low degree. Interestingly, the community Auditory in cats' brain, which has sparse connections with other communities, plays an important role in controlling the neuronal networks.

  3. Synchronization in hybrid neuronal networks of the hippocampal formation.

    PubMed

    Netoff, Theoden I; Banks, Matthew I; Dorval, Alan D; Acker, Corey D; Haas, Julie S; Kopell, Nancy; White, John A

    2005-03-01

    Understanding the mechanistic bases of neuronal synchronization is a current challenge in quantitative neuroscience. We studied this problem in two putative cellular pacemakers of the mammalian hippocampal theta rhythm: glutamatergic stellate cells (SCs) of the medial entorhinal cortex and GABAergic oriens-lacunosum-molecular (O-LM) interneurons of hippocampal region CA1. We used two experimental methods. First, we measured changes in spike timing induced by artificial synaptic inputs applied to individual neurons. We then measured responses of free-running hybrid neuronal networks, consisting of biological neurons coupled (via dynamic clamp) to biological or virtual counterparts. Results from the single-cell experiments predicted network behaviors well and are compatible with previous model-based predictions of how specific membrane mechanisms give rise to empirically measured synchronization behavior. Both cell types phase lock stably when connected via homogeneous excitatory-excitatory (E-E) or inhibitory-inhibitory (I-I) connections. Phase-locked firing is consistently synchronous for either cell type with E-E connections and nearly anti-synchronous with I-I connections. With heterogeneous connections (e.g., excitatory-inhibitory, as might be expected if members of a given population had heterogeneous connections involving intermediate interneurons), networks often settled into phase locking that was either stable or unstable, depending on the order of firing of the two cells in the hybrid network. Our results imply that excitatory SCs, but not inhibitory O-LM interneurons, are capable of synchronizing in phase via monosynaptic mutual connections of the biologically appropriate polarity. Results are largely independent of synaptic strength and synaptic kinetics, implying that our conclusions are robust and largely unaffected by synaptic plasticity.

  4. Hypothalamic leptin-neurotensin-hypocretin neuronal networks in zebrafish.

    PubMed

    Levitas-Djerbi, Talia; Yelin-Bekerman, Laura; Lerer-Goldshtein, Tali; Appelbaum, Lior

    2015-04-01

    Neurotensin (NTS) is a 13 amino acid neuropeptide that is expressed in the hypothalamus. In mammals, NTS-producing neurons that express leptin receptor (LepRb) regulate the function of hypocretin/orexin (HCRT) and dopamine neurons. Thus, the hypothalamic leptin-NTS-HCRT neuronal network orchestrates key homeostatic output, including sleep, feeding, and reward. However, the intricate mechanisms of the circuitry and the unique role of NTS-expressing neurons remain unclear. We studied the NTS neuronal networks in zebrafish and cloned the genes encoding the NTS neuropeptide and receptor (NTSR). Similar to mammals, the ligand is expressed primarily in the hypothalamus, while the receptor is expressed widely throughout the brain in zebrafish. A portion of hypothalamic nts-expressing neurons are inhibitory and some coexpress leptin receptor (lepR1). As in mammals, NTS and HCRT neurons are localized adjacently in the hypothalamus. To track the development and axonal projection of NTS neurons, the NTS promoter was isolated. Transgenesis and double labeling of NTS and HCRT neurons showed that NTS axons project toward HCRT neurons, some of which express ntsr. Moreover, another target of NTS neurons is ntsr-expressing dopaminergeric neurons. These findings suggest structural circuitry between leptin, NTS, and hypocretinergic or dopaminergic neurons and establish the zebrafish as a model to study the role of these neuronal circuits in the regulation of feeding, sleep, and reward.

  5. Leader neurons in leaky integrate and fire neural network simulations.

    PubMed

    Zbinden, Cyrille

    2011-10-01

    In this paper, we highlight the topological properties of leader neurons whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively network spike) more often than we expect by chance considering its mean firing rate. This means that leader neurons have some burst triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate neural 2D networks. To build our simulations, we choose the leaky integrate and fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model has got stable leader neurons in the burst population that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Except for these two first properties, the conditions required for a neuron to be a leader neuron are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend of the properties required for a neuron to be a leader neuron. Our main finding is: A leader neuron sends signals to many excitatory neurons as well as to few inhibitory neurons and a leader neuron receives only signals from few other excitatory neurons. Our linear analysis exhibits five essential properties of leader neurons each with different relative importance. This means that considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of

  6. Automated Neuron Tracing Methods: An Updated Account.

    PubMed

    Acciai, Ludovica; Soda, Paolo; Iannello, Giulio

    2016-10-01

    The reconstruction of neuron morphology allows to investigate how the brain works, which is one of the foremost challenges in neuroscience. This process aims at extracting the neuronal structures from microscopic imaging data. The great advances in microscopic technologies have made a huge amount of data available at the micro-, or even lower, resolution where manual inspection is time consuming, prone to error and utterly impractical. This has motivated the development of methods to automatically trace the neuronal structures, a task also known as neuron tracing. This paper surveys the latest neuron tracing methods available in the scientific literature as well as a selection of significant older papers to better place these proposals into context. They are categorized into global processing, local processing and meta-algorithm approaches. Furthermore, we point out the algorithmic components used to design each method and we report information on the datasets and the performance metrics used.

  7. Information diversity in structure and dynamics of simulated neuronal networks.

    PubMed

    Mäki-Marttunen, Tuomo; Aćimović, Jugoslava; Nykter, Matti; Kesseli, Juha; Ruohonen, Keijo; Yli-Harja, Olli; Linne, Marja-Leena

    2011-01-01

    Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and similar analysis is done on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviors are analyzed. For the simulation of the network activity the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and somewhere in between in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.
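
    The normalized compression distance used above can be sketched in a few lines with a general-purpose compressor; NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with C(.) the compressed length. The surrogate spike trains below are placeholders, and zlib stands in for whatever compressor the study used.

```python
# Normalized compression distance between binarized spike-train strings.
import zlib
import numpy as np

def compressed_len(b: bytes) -> int:
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

rng = np.random.default_rng(0)
# Surrogate binary spike trains: a bursty train and an independent random one.
bursty = np.repeat(rng.random(200) < 0.3, 10).astype(np.uint8)
random_train = (rng.random(2000) < 0.3).astype(np.uint8)

print("NCD(bursty, bursty copy):", round(ncd(bursty.tobytes(), bursty.tobytes()), 3))
print("NCD(bursty, random):     ", round(ncd(bursty.tobytes(), random_train.tobytes()), 3))
```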

  8. Nanostructured superhydrophobic substrates trigger the development of 3D neuronal networks.

    PubMed

    Limongi, Tania; Cesca, Fabrizia; Gentile, Francesco; Marotta, Roberto; Ruffilli, Roberta; Barberis, Andrea; Dal Maschio, Marco; Petrini, Enrica Maria; Santoriello, Stefania; Benfenati, Fabio; Di Fabrizio, Enzo

    2013-02-11

    The generation of 3D networks of primary neurons is a big challenge in neuroscience. Here, a novel method is presented for a 3D neuronal culture on superhydrophobic (SH) substrates. How nano-patterned SH devices stimulate neurons to build 3D networks is investigated. Scanning electron microscopy and confocal imaging show that soon after plating neurites adhere to the nanopatterned pillar sidewalls and they are subsequently pulled between pillars in a suspended position. These neurons display an enhanced survival rate compared to standard cultures and develop mature networks with physiological excitability. These findings underline the importance of using nanostructured SH surfaces for directing 3D neuronal growth, as well as for the design of biomaterials for neuronal regeneration. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. The role of dimensionality in neuronal network dynamics.

    PubMed

    Ulloa Severino, Francesco Paolo; Ban, Jelena; Song, Qin; Tang, Mingliang; Bianconi, Ginestra; Cheng, Guosheng; Torre, Vincent

    2016-07-11

    Recent results from network theory show that complexity affects several dynamical properties of networks that favor synchronization. Here we show that synchronization in 2D and 3D neuronal networks is significantly different. Using dissociated hippocampal neurons we compared properties of cultures grown on flat 2D substrates with those formed on 3D graphene foam scaffolds. Both 2D and 3D cultures had a comparable glia-to-neuron ratio and percentage of GABAergic inhibitory neurons. 3D cultures, because of their dimensionality, have many connections among distant neurons leading to small-world networks and their characteristic dynamics. After one week, calcium imaging revealed moderately synchronous activity in 2D networks, but the degree of synchrony of 3D networks was higher and had two regimes: a highly synchronized (HS) and a moderately synchronized (MS) regime. The HS regime was never observed in 2D networks. During the MS regime, neuronal assemblies in synchrony changed with time as observed in mammalian brains. After two weeks, the degree of synchrony in 3D networks decreased, as observed in vivo. These results show that dimensionality determines properties of neuronal networks and that several features of brain dynamics are a consequence of its 3D topology.

  10. The role of dimensionality in neuronal network dynamics

    PubMed Central

    Ulloa Severino, Francesco Paolo; Ban, Jelena; Song, Qin; Tang, Mingliang; Bianconi, Ginestra; Cheng, Guosheng; Torre, Vincent

    2016-01-01

    Recent results from network theory show that complexity affects several dynamical properties of networks that favor synchronization. Here we show that synchronization in 2D and 3D neuronal networks is significantly different. Using dissociated hippocampal neurons we compared properties of cultures grown on flat 2D substrates with those formed on 3D graphene foam scaffolds. Both 2D and 3D cultures had a comparable glia-to-neuron ratio and percentage of GABAergic inhibitory neurons. 3D cultures, because of their dimensionality, have many connections among distant neurons leading to small-world networks and their characteristic dynamics. After one week, calcium imaging revealed moderately synchronous activity in 2D networks, but the degree of synchrony of 3D networks was higher and had two regimes: a highly synchronized (HS) and a moderately synchronized (MS) regime. The HS regime was never observed in 2D networks. During the MS regime, neuronal assemblies in synchrony changed with time as observed in mammalian brains. After two weeks, the degree of synchrony in 3D networks decreased, as observed in vivo. These results show that dimensionality determines properties of neuronal networks and that several features of brain dynamics are a consequence of its 3D topology. PMID:27404281

  11. Stimulus-dependent synchronization in delayed-coupled neuronal networks

    PubMed Central

    Esfahani, Zahra G.; Gollo, Leonardo L.; Valizadeh, Alireza

    2016-01-01

    Time delay is a general feature of all interactions. Although the effects of delayed interaction are often neglected when the intrinsic dynamics is much slower than the coupling delay, they can be crucial otherwise. We show that delayed coupled neuronal networks support transitions between synchronous and asynchronous states when the level of input to the network changes. The level of input determines the oscillation period of neurons and hence whether time-delayed connections are synchronizing or desynchronizing. We find that synchronizing connections lead to synchronous dynamics, whereas desynchronizing connections lead to out-of-phase oscillations in network motifs and to frustrated states with asynchronous dynamics in large networks. Since the impact of a neuronal network on downstream neurons increases when spikes are synchronous, networks with delayed connections can serve as gatekeeper layers mediating the firing transfer to other regions. This mechanism can regulate the opening and closing of communicating channels between cortical layers on demand. PMID:27001428
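
    A toy illustration of the mechanism, reduced to two delay-coupled phase oscillators (a simplification of the spiking model in the paper): with the delay held fixed, shortening the oscillation period, as stronger input would, flips the locked state from anti-phase toward in-phase. Parameters are illustrative.

```python
# Two delay-coupled phase oscillators: the locked phase lag depends on the
# delay relative to the oscillation period.
import numpy as np

def locked_phase_difference(period, delay, k=0.3, dt=0.002, t_max=100.0):
    """Simulate two delay-coupled phase oscillators and return the final phase lag."""
    omega = 2 * np.pi / period
    d = int(round(delay / dt))
    steps = int(t_max / dt)
    theta = np.zeros((steps + 1, 2))
    theta[: d + 1, 1] = 0.5                    # small initial offset in the history
    for n in range(d, steps):
        past = theta[n - d]                    # delayed phases of the partners
        theta[n + 1, 0] = theta[n, 0] + dt * (omega + k * np.sin(past[1] - theta[n, 0]))
        theta[n + 1, 1] = theta[n, 1] + dt * (omega + k * np.sin(past[0] - theta[n, 1]))
    lag = (theta[-1, 0] - theta[-1, 1]) % (2 * np.pi)
    return min(lag, 2 * np.pi - lag)

for period in (0.5, 1.0):                      # shorter period ~ stronger drive
    print(f"period {period:.1f}, delay 0.5 -> phase lag "
          f"{locked_phase_difference(period, 0.5):.2f} rad")
```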

  12. Stimulus-dependent synchronization in delayed-coupled neuronal networks.

    PubMed

    Esfahani, Zahra G; Gollo, Leonardo L; Valizadeh, Alireza

    2016-03-22

    Time delay is a general feature of all interactions. Although the effects of delayed interaction are often neglected when the intrinsic dynamics is much slower than the coupling delay, they can be crucial otherwise. We show that delayed coupled neuronal networks support transitions between synchronous and asynchronous states when the level of input to the network changes. The level of input determines the oscillation period of neurons and hence whether time-delayed connections are synchronizing or desynchronizing. We find that synchronizing connections lead to synchronous dynamics, whereas desynchronizing connections lead to out-of-phase oscillations in network motifs and to frustrated states with asynchronous dynamics in large networks. Since the impact of a neuronal network on downstream neurons increases when spikes are synchronous, networks with delayed connections can serve as gatekeeper layers mediating the firing transfer to other regions. This mechanism can regulate the opening and closing of communicating channels between cortical layers on demand.

  13. Cell specific electrodes for neuronal network reconstruction and monitoring.

    PubMed

    Bendali, Amel; Bouguelia, Sihem; Roupioz, Yoann; Forster, Valérie; Mailley, Pascal; Benosman, Ryad; Livache, Thierry; Sahel, José-Alain; Picaud, Serge

    2014-07-07

    Direct interfacing of neurons with electronic devices has been investigated for both prosthetic and neuro-computing applications. In vitro neuronal networks provide great tools not only for improving neuroprostheses but also to take advantage of their computing abilities. However, it is often difficult to organize neuronal networks according to specific cell distributions. Our aim was to develop a cell-type specific immobilization of neurons on individual electrodes to produce organized in vitro neuronal networks on multi-electrode arrays (MEAs). We demonstrate the selective capture of retinal neurons on antibody functionalized surfaces following the formation of self-assembled monolayers from protein-thiol conjugates by simple contact and protein-polypyrrole deposits by electrochemical functionalization. This neuronal selection was achieved on gold for either cone photoreceptors or retinal ganglion neurons using a PNA lectin or a Thy1 antibody, respectively. Anti-fouling of un-functionalized gold surfaces was optimized to increase the capture efficiencies. The technique was extended to electrode arrays by addressing electropolymerization of pyrrole monomers and pyrrole-protein conjugates to active electrodes. Retinal ganglion cell recording on the array further demonstrated the integrity of these neurons following their selection on polypyrrole-coated electrodes. Therefore, this protein-polypyrrole electrodeposition could provide a new approach to generate organized in vitro neuronal networks.

  14. On The Use of Dynamic Bayesian Networks in Reconstructing Functional Neuronal Networks from Spike Train Ensembles

    PubMed Central

    Eldawlatly, Seif; Zhou, Yang; Jin, Rong; Oweiss, Karim G.

    2009-01-01

    Coordination among cortical neurons is believed to be a key element in mediating many high-level cortical processes such as perception, attention, learning and memory formation. Inferring the topology of the neural circuitry underlying this coordination is important to characterize the highly non-linear, time-varying interactions between cortical neurons in the presence of complex stimuli. In this work, we investigate the applicability of Dynamic Bayesian Networks (DBNs) in inferring the effective connectivity between spiking cortical neurons from their observed spike trains. We demonstrate that DBNs can infer the underlying non-linear and time-varying causal interactions between these neurons and can discriminate between mono- and polysynaptic links between them under certain constraints governing their putative connectivity. We analyzed conditionally-Poisson spike train data mimicking spiking activity of cortical networks of small and moderately-large sizes. The performance was assessed and compared to other methods under systematic variations of the network structure to mimic a wide range of responses typically observed in the cortex. Results demonstrate the utility of DBN in inferring the effective connectivity in cortical networks. PMID:19852619

  15. The Hypocretin/Orexin Neuronal Networks in Zebrafish.

    PubMed

    Elbaz, Idan; Levitas-Djerbi, Talia; Appelbaum, Lior

    2016-12-24

    The hypothalamic Hypocretin/Orexin (Hcrt) neurons secrete two Hcrt neuropeptides. These neurons and peptides play a major role in the regulation of feeding, the sleep-wake cycle, reward-seeking, addiction, and stress. Loss of Hcrt neurons causes the sleep disorder narcolepsy. The zebrafish has become an attractive model to study the Hcrt neuronal network because it is a transparent vertebrate that enables simple genetic manipulation, imaging of the structure and function of neuronal circuits in live animals, and high-throughput monitoring of behavioral performance during both day and night. The zebrafish Hcrt network comprises ~16-60 neurons, which, as in mammals, are located in the hypothalamus, widely innervate the brain and spinal cord, and regulate various fundamental behaviors such as feeding, sleep, and wakefulness. Here we review how the zebrafish contributes to the study of the Hcrt neuronal system molecularly, anatomically, physiologically, and pathologically.

  16. Signal propagation through feedforward neuronal networks with different operational modes

    NASA Astrophysics Data System (ADS)

    Li, Jie; Liu, Feng; Xu, Ding; Wang, Wei

    2009-02-01

    How neuronal activity is propagated across multiple layers of neurons is a fundamental issue in neuroscience. Using numerical simulations, we explored how the operational mode of neurons (coincidence detector or temporal integrator) could affect the propagation of rate signals through a 10-layer feedforward network with sparse connectivity. Our study was based on two kinds of neuron models. The Hodgkin-Huxley (HH) neuron can function as a coincidence detector, while the leaky integrate-and-fire (LIF) neuron can act as a temporal integrator. When white noise is afferent to the input layer, rate signals can be stably propagated through both networks, and neurons in deeper layers fire synchronously in the absence of background noise, but the underlying mechanism for the development of synchrony is different. When an aperiodic signal is presented, the network of HH neurons can represent the temporal structure of the signal in firing rate. Meanwhile, synchrony is well developed and is resistant to background noise. In contrast, rate signals are somewhat distorted during the propagation through the network of LIF neurons, and only weak synchrony occurs in deeper layers. That is, coincidence detectors have a performance advantage over temporal integrators in propagating rate signals. Therefore, given weak synaptic conductance and sparse connectivity between layers in both networks, synchrony does greatly subserve the propagation of rate signals with fidelity, and coincidence detection could be of considerable functional significance in cortical processing.
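
    As a rough illustration of the setup described above (not the authors' code), the sketch below propagates noisy input through a sparsely connected feedforward chain of leaky integrate-and-fire neurons; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_per_layer, p_conn = 10, 100, 0.1     # layers, neurons per layer, connection probability
dt, t_sim = 0.1, 200.0                           # time step and duration (ms)
tau_m, v_th, v_reset, w = 20.0, 1.0, 0.0, 0.15   # LIF parameters and synaptic weight (illustrative)

# sparse feedforward connection matrices between consecutive layers
conn = [rng.random((n_per_layer, n_per_layer)) < p_conn for _ in range(n_layers - 1)]

v = np.zeros((n_layers, n_per_layer))
spike_counts = np.zeros(n_layers)
for _ in range(int(t_sim / dt)):
    spikes = v >= v_th
    spike_counts += spikes.sum(axis=1)
    v[spikes] = v_reset
    # leaky integration; only the input layer receives the noisy external drive
    i_ext = np.zeros((n_layers, n_per_layer))
    i_ext[0] = 1.2 + 0.5 * rng.standard_normal(n_per_layer)
    v += dt / tau_m * (-v + i_ext)
    # presynaptic spikes from layer l-1 cause instantaneous voltage jumps in layer l
    for l in range(1, n_layers):
        v[l] += w * (conn[l - 1] @ spikes[l - 1].astype(float))

print("spikes per layer:", spike_counts.astype(int))
```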

  17. Burst analysis tool for developing neuronal networks exhibiting highly varying action potential dynamics.

    PubMed

    Kapucu, Fikret E; Tanskanen, Jarno M A; Mikkonen, Jarno E; Ylä-Outinen, Laura; Narkilahti, Susanna; Hyttinen, Jari A K

    2012-01-01

    In this paper we propose a firing-statistics-based neuronal network burst detection algorithm for neuronal networks exhibiting highly variable action potential dynamics. Electrical activity of neuronal networks is generally analyzed by the occurrences of spikes and bursts both in time and space. Commonly accepted analysis tools employ burst detection algorithms based on predefined criteria. However, maturing neuronal networks, such as those originating from human embryonic stem cells (hESCs), exhibit highly variable network structure and time-varying dynamics. To explore the developing burst/spike activities of such networks, we propose a burst detection algorithm which utilizes the firing statistics based on interspike interval (ISI) histograms. Moreover, the algorithm calculates ISI thresholds for burst spikes as well as for pre-burst spikes and burst tails by evaluating the cumulative moving average (CMA) and skewness of the ISI histogram. Because of the adaptive nature of the proposed algorithm, its analysis power is not limited by the type of neuronal cell network at hand. We demonstrate the functionality of our algorithm with two different types of microelectrode array (MEA) data recorded from spontaneously active hESC-derived neuronal cell networks. The same data were also analyzed by two commonly employed burst detection algorithms, and the differences in burst detection results are illustrated. The results demonstrate that our method is both adaptive to the firing statistics of the network and yields successful burst detection from the data. In conclusion, the proposed method is a potential tool for analyzing hESC-derived neuronal cell networks and thus can be utilized in studies aiming to understand the development and functioning of human neuronal networks and as an analysis tool for in vitro drug screening and neurotoxicity assays.
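
    A simplified reading of the histogram-based thresholding described above can be sketched as follows; the bin width, the toy spike train, and the fixed decay factor alpha are illustrative stand-ins rather than the published parameter choices.

```python
import numpy as np
from scipy.stats import skew

def cma_burst_threshold(spike_times, bin_ms=1.0, max_isi_ms=1000.0, alpha=0.5):
    """ISI threshold (ms) for burst spikes from the cumulative moving average (CMA)
    of the ISI histogram. alpha is illustrative; the published method adapts it to
    the skewness of the histogram."""
    isis = np.diff(np.sort(spike_times))
    edges = np.arange(0.0, max_isi_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(isis, bins=edges)
    cma = np.cumsum(counts) / np.arange(1, counts.size + 1)
    peak = int(np.argmax(cma))
    print("ISI histogram skewness:", round(float(skew(counts)), 2))
    # first bin after the CMA peak where the CMA has decayed to alpha * its maximum
    below = np.flatnonzero(cma[peak:] <= alpha * cma[peak])
    idx = peak + (int(below[0]) if below.size else counts.size - 1 - peak)
    return edges[idx + 1]

rng = np.random.default_rng(1)
# toy spike train: short within-burst ISIs (~5 ms) mixed with long pauses (~500 ms)
isis = np.where(rng.random(2000) < 0.8,
                rng.exponential(5.0, 2000), rng.exponential(500.0, 2000))
spikes = np.cumsum(isis)
print("burst ISI threshold (ms):", round(cma_burst_threshold(spikes), 1))
```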

  18. Neuronal pathway finding: from neurons to initial neural networks.

    PubMed

    Roscigno, Cecelia I

    2004-10-01

    Neuronal pathway finding is crucial for structured cellular organization and development of neural circuits within the nervous system. Neuronal pathway finding within the visual system has been extensively studied and therefore is used as a model to review existing knowledge regarding concepts of this developmental process. General principles of neuron pathway finding throughout the nervous system exist. Comprehension of these concepts guides neuroscience nurses in gaining an understanding of the developmental course of action, the implications of different anomalies, as well as the theoretical basis and nursing implications of some provocative new therapies being proposed to treat neurodegenerative diseases and neurologic injuries. These therapies have limitations in light of current ethical, developmental, and delivery modes and what is known about the development of neuronal pathways.

  19. Management of synchronized network activity by highly active neurons

    NASA Astrophysics Data System (ADS)

    Shein, Mark; Volman, Vladislav; Raichman, Nadav; Hanein, Yael; Ben-Jacob, Eshel

    2008-09-01

    Increasing evidence supports the idea that spontaneous brain activity may have an important functional role. Cultured neuronal networks provide a suitable model system to search for the mechanisms by which neuronal spontaneous activity is maintained and regulated. This activity is marked by synchronized bursting events (SBEs)—short time windows (hundreds of milliseconds) of rapid neuronal firing separated by long quiescent periods (seconds). However, there exists a special subset of rapidly firing neurons whose activity also persists between SBEs. It has been proposed that these highly active (HA) neurons play an important role in the management (i.e. establishment, maintenance and regulation) of the synchronized network activity. Here, we studied the dynamical properties and the functional role of HA neurons in homogeneous and engineered networks, during early network development, upon recovery from chemical inhibition and in response to electrical stimulations. We found that their sequences of inter-spike intervals (ISI) exhibit long time correlations and a unimodal distribution. During the network's development and under intense inhibition, the observed activity follows a transition period during which mostly HA neurons are active. Studying networks with engineered geometry, we found that HA neurons are precursors (the first to fire) of the spontaneous SBEs and are more responsive to electrical stimulations.

  20. Transition of spatiotemporal patterns in neuronal networks with chemical synapses

    NASA Astrophysics Data System (ADS)

    Wang, Rong; Li, Jiajia; Du, Mengmeng; Lei, Jinzhi; Wu, Ying

    2016-11-01

    In the mammalian neocortex, plane waves, spiral waves, and irregular waves appear alternately. In this paper, we study the transition of spatiotemporal patterns in neuronal networks in which neurons are coupled via two types of chemical synapses: fast excitatory synapses and fast inhibitory synapses. Our results indicate that fast excitatory synaptic connections induce regular spatiotemporal patterns more readily than fast inhibitory synaptic connections, and the mechanism is discussed through bifurcation analysis of a single neuron. We introduce the permutation entropy as a measure of network firing complexity to study the mechanisms of formation and transition of spatiotemporal patterns. Our calculations show that the spatiotemporal pattern transitions are closely connected to a sudden decrease in the firing complexity of neuronal networks, and that neuronal networks with fast excitatory synapses have higher firing complexity than those with fast inhibitory synapses.
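
    Permutation entropy, the complexity measure used above, is computed from the distribution of ordinal patterns in a time series. A minimal sketch (the embedding order and the test signal are illustrative):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series (0 = regular, 1 = maximally complex)."""
    x = np.asarray(x, dtype=float)
    n = x.size - (order - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        key = tuple(np.argsort(window))          # ordinal pattern of the embedded vector
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / log(factorial(order)))

rng = np.random.default_rng(0)
rate = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
print("PE of a noisy oscillation:", round(permutation_entropy(rate, order=4), 3))
print("PE of white noise:       ", round(permutation_entropy(rng.standard_normal(2000), order=4), 3))
```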

  1. Real-time tracking of neuronal network structure using data assimilation

    NASA Astrophysics Data System (ADS)

    Hamilton, Franz; Berry, Tyrus; Peixoto, Nathalia; Sauer, Timothy

    2013-11-01

    A nonlinear data assimilation technique is applied to determine and track effective connections between ensembles of cultured spinal cord neurons measured with multielectrode arrays. The method is statistical, depending only on confidence intervals, and requiring no form of arbitrary thresholding. In addition, the method updates connection strengths sequentially, enabling real-time tracking of nonstationary networks. The ensemble Kalman filter is used with a generic spiking neuron model to estimate connection strengths as well as other system parameters to deal with model mismatch. The method is validated on noisy synthetic data from Hodgkin-Huxley model neurons before being used to find network connections in the neural culture recordings.
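
    The sequential update at the heart of such a scheme can be illustrated on a toy problem: an ensemble Kalman filter tracking a single coupling weight in a one-dimensional rate model. This is a hedged sketch of the generic EnKF recursion, not the authors' spiking-neuron implementation; the model, noise levels, and ensemble size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
w_true, r = 1.4, 0.3                 # hidden coupling weight to track, true rate state
obs_sd, proc_sd = 0.05, 0.02
n_ens, T = 200, 300

# augmented ensemble state: column 0 = rate estimate, column 1 = weight estimate
ens = np.column_stack([rng.normal(0.3, 0.1, n_ens), rng.normal(0.5, 0.5, n_ens)])

for t in range(T):
    # truth evolves and is observed with noise
    r = np.tanh(w_true * r) + 0.1 + proc_sd * rng.standard_normal()
    y = r + obs_sd * rng.standard_normal()
    # forecast: each member uses its own weight estimate; small jitter keeps spread alive
    ens[:, 0] = np.tanh(ens[:, 1] * ens[:, 0]) + 0.1 + proc_sd * rng.standard_normal(n_ens)
    ens[:, 1] += 0.001 * rng.standard_normal(n_ens)
    # analysis: observation operator picks out the rate (first component)
    y_pert = y + obs_sd * rng.standard_normal(n_ens)
    X = ens - ens.mean(axis=0)
    P_xy = X.T @ X[:, 0] / (n_ens - 1)                 # state / predicted-obs covariance
    P_yy = X[:, 0] @ X[:, 0] / (n_ens - 1) + obs_sd**2
    K = P_xy / P_yy                                    # Kalman gain
    ens += np.outer(y_pert - ens[:, 0], K)

print("estimated weight:", round(float(ens[:, 1].mean()), 3), " true:", w_true)
```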

  2. Estimating network parameters from combined dynamics of firing rate and irregularity of single neurons.

    PubMed

    Hamaguchi, Kosuke; Riehle, Alexa; Brunel, Nicolas

    2011-01-01

    High firing irregularity is a hallmark of cortical neurons in vivo, and modeling studies suggest a balance of excitation and inhibition is necessary to explain this high irregularity. Such a balance must be generated, at least partly, from local interconnected networks of excitatory and inhibitory neurons, but the details of the local network structure are largely unknown. The dynamics of the neural activity depends on the local network structure; this in turn suggests the possibility of estimating network structure from the dynamics of the firing statistics. Here we report a new method to estimate properties of the local cortical network from the instantaneous firing rate and irregularity (CV(2)) under the assumption that recorded neurons are a part of a randomly connected sparse network. The firing irregularity, measured in monkey motor cortex, exhibits two features; many neurons show relatively stable firing irregularity in time and across different task conditions; the time-averaged CV(2) is widely distributed from quasi-regular to irregular (CV(2) = 0.3-1.0). For each recorded neuron, we estimate the three parameters of a local network [balance of local excitation-inhibition, number of recurrent connections per neuron, and excitatory postsynaptic potential (EPSP) size] that best describe the dynamics of the measured firing rates and irregularities. Our analysis shows that optimal parameter sets form a two-dimensional manifold in the three-dimensional parameter space that is confined for most of the neurons to the inhibition-dominated region. High irregularity neurons tend to be more strongly connected to the local network, either in terms of larger EPSP and inhibitory PSP size or larger number of recurrent connections, compared with the low irregularity neurons, for a given excitatory/inhibitory balance. Incorporating either synaptic short-term depression or conductance-based synapses leads many low CV(2) neurons to move to the excitation-dominated region as
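
    The irregularity measure CV2 compares each pair of adjacent interspike intervals; a minimal sketch of its computation (the two toy spike trains are illustrative):

```python
import numpy as np

def cv2(spike_times):
    """Mean CV2 of a spike train: 2|ISI_{i+1} - ISI_i| / (ISI_{i+1} + ISI_i), averaged over i."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    if isi.size < 2:
        return np.nan
    return float(np.mean(2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1])))

rng = np.random.default_rng(0)
poisson_train = np.cumsum(rng.exponential(100.0, 1000))            # irregular: CV2 close to 1
regular_train = np.arange(1000) * 100.0 + rng.normal(0, 2, 1000)   # quasi-regular: CV2 close to 0
print("CV2 irregular:", round(cv2(poisson_train), 2), " CV2 regular:", round(cv2(regular_train), 2))
```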

  3. Network of hypothalamic neurons that control appetite.

    PubMed

    Sohn, Jong-Woo

    2015-04-01

    The central nervous system (CNS) controls food intake and energy expenditure via tight coordination among multiple neuronal populations. Specifically, two distinct neuronal populations exist in the arcuate nucleus of the hypothalamus (ARH): the anorexigenic (appetite-suppressing) pro-opiomelanocortin (POMC) neurons and the orexigenic (appetite-increasing) neuropeptide Y (NPY)/agouti-related peptide (AgRP) neurons. The coordinated regulation of the neuronal circuit involving these neurons is essential in properly maintaining energy balance, and any disturbance therein may result in hyperphagia/obesity or hypophagia/starvation. Thus, adequate knowledge of POMC and NPY/AgRP neuron physiology is mandatory to understand the pathophysiology of obesity and related metabolic diseases. This review will discuss the history and recent updates on the POMC and NPY/AgRP neuronal circuits, as well as the general anorexigenic and orexigenic circuits in the CNS.

  4. Organization of network properties of cells in local and distributed neuronal networks of the brain of cats.

    PubMed

    Merzhanova, G Kh; Berg, A I

    1992-01-01

    The network properties of neurons of the visual and motor cortex and of the lateral nucleus of the hypothalamus were investigated on the basis of identified interneuronal interactions, using the cross-correlation method of analysis, in cats with developed alimentary conditioned instrumental reflexes to light. Differences in the network properties of cortical neurons in local versus distributed neuronal networks were demonstrated, namely: the predominance of divergent over convergent properties for large cells in local networks, and the leveling out of these relationships in distributed networks. The neurons of the lateral nucleus of the hypothalamus had an equal representation of convergent and divergent properties in the organization of both local and distributed networks. The network properties of neurons of the cortical and subcortical structures were manifested in background activity, following the development of conditioned reflexes, and during extinction. Only the small cells of the visual cortex were functionally dependent and changed the relationship of network properties in local networks during the extinction of conditioned reflexes.

  5. Effect of the heterogeneous neuron and information transmission delay on stochastic resonance of neuronal networks

    NASA Astrophysics Data System (ADS)

    Wang, Qingyun; Zhang, Honghui; Chen, Guanrong

    2012-12-01

    We study the effects of neuron heterogeneity and information transmission delay on stochastic resonance in scale-free neuronal networks. For this purpose, we introduce heterogeneity into the neuron with the highest degree. It is shown that in the absence of delay, an intermediate noise level can optimally assist spike firings of collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate values of αh, which controls the heterogeneity. Maxima of the stochastic resonance measure are enhanced as αh increases, which implies that heterogeneity can improve stochastic resonance. However, once αh exceeds a certain large value, no obvious stochastic resonance can be observed. If information transmission delay is introduced into the neuronal networks, stochastic resonance is dramatically affected. In particular, a suitably tuned information transmission delay can induce multiple stochastic resonance, manifested as well-expressed maxima in the stochastic resonance measure appearing at every multiple of one half of the subthreshold stimulus period. Furthermore, we observe that stochastic resonance at odd multiples of one half of the subthreshold stimulus period is subharmonic, as opposed to the case of even multiples. More interestingly, multiple stochastic resonance can also be improved by a suitably heterogeneous neuron. The presented results provide insight into the effects of neuron heterogeneity and information transmission delay in realistic neuronal networks.
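
    The stochastic resonance measure referred to above is commonly the Fourier coefficient of the response at the subthreshold stimulus frequency. The sketch below applies it to a noisy leaky integrate-and-fire neuron driven by a subthreshold sinusoid; it is a generic illustration, not the scale-free network model of the paper, and all parameters are illustrative.

```python
import numpy as np

def sr_measure(x, t, omega):
    """Fourier-coefficient measure Q of a response x(t) at the driving frequency omega."""
    q_sin = 2.0 * np.mean(x * np.sin(omega * t))
    q_cos = 2.0 * np.mean(x * np.cos(omega * t))
    return float(np.hypot(q_sin, q_cos))

rng = np.random.default_rng(0)
dt, t_end = 0.1, 2000.0                  # ms
t = np.arange(0.0, t_end, dt)
omega = 2.0 * np.pi / 100.0              # subthreshold stimulus period of 100 ms
stim = 0.8 + 0.15 * np.sin(omega * t)    # peak drive 0.95 stays below the threshold of 1.0
tau_m, v_th = 20.0, 1.0

# sweep noise intensities; stochastic resonance shows up as a maximum of Q at an intermediate level
for sigma in (0.02, 0.1, 0.5):
    v, spikes = 0.0, np.zeros(t.size)
    for i in range(t.size):
        v += dt / tau_m * (stim[i] - v) + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            v, spikes[i] = 0.0, 1.0 / dt     # delta-like spike in the rate signal
    print(f"sigma={sigma:>4}: Q = {sr_measure(spikes, t, omega):.4f}")
```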

  6. Effect of the heterogeneous neuron and information transmission delay on stochastic resonance of neuronal networks.

    PubMed

    Wang, Qingyun; Zhang, Honghui; Chen, Guanrong

    2012-12-01

    We study the effect of heterogeneous neuron and information transmission delay on stochastic resonance of scale-free neuronal networks. For this purpose, we introduce the heterogeneity to the specified neuron with the highest degree. It is shown that in the absence of delay, an intermediate noise level can optimally assist spike firings of collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate α(h), which plays a heterogeneous role. Maxima of stochastic resonance measure are enhanced as α(h) increases, which implies that the heterogeneity can improve stochastic resonance. However, as α(h) is beyond a certain large value, no obvious stochastic resonance can be observed. If the information transmission delay is introduced to neuronal networks, stochastic resonance is dramatically affected. In particular, the tuned information transmission delay can induce multiple stochastic resonance, which can be manifested as well-expressed maximum in the measure for stochastic resonance, appearing every multiple of one half of the subthreshold stimulus period. Furthermore, we can observe that stochastic resonance at odd multiple of one half of the subthreshold stimulus period is subharmonic, as opposed to the case of even multiple of one half of the subthreshold stimulus period. More interestingly, multiple stochastic resonance can also be improved by the suitable heterogeneous neuron. Presented results can provide good insights into the understanding of the heterogeneous neuron and information transmission delay on realistic neuronal networks.

  7. Pyramidal Neurons Are Not Generalizable Building Blocks of Cortical Networks

    PubMed Central

    Luebke, Jennifer I.

    2017-01-01

    A key challenge in cortical neuroscience is to gain a comprehensive understanding of how pyramidal neuron heterogeneity across different areas and species underlies the functional specialization of individual neurons, networks, and areas. Comparative studies have been important in this endeavor, providing data relevant to the question of which of the many inherent properties of individual pyramidal neurons are necessary and sufficient for species-specific network and areal function. In this mini review, the importance of pyramidal neuron structural properties for signaling are outlined, followed by a summary of our recent work comparing the structural features of mouse (C57/BL6 strain) and rhesus monkey layer 3 (L3) pyramidal neurons in primary visual and frontal association cortices and their implications for neuronal and areal function. Based on these and other published data, L3 pyramidal neurons plausibly might be considered broadly “generalizable” from one area to another in the mouse neocortex due to their many similarities, but major differences in the properties of these neurons in diverse areas in the rhesus monkey neocortex rule this out in the primate. Further, fundamental differences in the dendritic topology of mouse and rhesus monkey pyramidal neurons highlight the implausibility of straightforward scaling and/or extrapolation from mouse to primate neurons and cortical networks. PMID:28326020

  8. Small is beautiful: models of small neuronal networks

    PubMed Central

    Lamb, Damon G; Calabrese, Ronald L

    2013-01-01

    Modeling has contributed a great deal to our understanding of how individual neurons and neuronal networks function. In this review, we focus on models of the small neuronal networks of invertebrates, especially rhythmically active CPG networks. Models have elucidated many aspects of these networks, from identifying key interacting membrane properties to pointing out gaps in our understanding, for example missing neurons. Even the complex CPGs of vertebrates, such as those that underlie respiration, have been reduced to small network models to great effect. Modeling of these networks spans from simplified models, which are amenable to mathematical analyses, to very complicated biophysical models. Some researchers have now adopted a population approach, where they generate and analyze many related models that differ in a few to several judiciously chosen free parameters; often these parameters show variability across animals and thus justify the approach. Models of small neuronal networks will continue to expand and refine our understanding of how neuronal networks in all animals program motor output, process sensory information and learn. PMID:22364687

  9. Small is beautiful: models of small neuronal networks.

    PubMed

    Lamb, Damon G; Calabrese, Ronald L

    2012-08-01

    Modeling has contributed a great deal to our understanding of how individual neurons and neuronal networks function. In this review, we focus on models of the small neuronal networks of invertebrates, especially rhythmically active CPG networks. Models have elucidated many aspects of these networks, from identifying key interacting membrane properties to pointing out gaps in our understanding, for example missing neurons. Even the complex CPGs of vertebrates, such as those that underlie respiration, have been reduced to small network models to great effect. Modeling of these networks spans from simplified models, which are amenable to mathematical analyses, to very complicated biophysical models. Some researchers have now adopted a population approach, where they generate and analyze many related models that differ in a few to several judiciously chosen free parameters; often these parameters show variability across animals and thus justify the approach. Models of small neuronal networks will continue to expand and refine our understanding of how neuronal networks in all animals program motor output, process sensory information and learn.

  10. Intermittent synchronization in a network of bursting neurons

    NASA Astrophysics Data System (ADS)

    Park, Choongseok; Rubchinsky, Leonid L.

    2011-09-01

    Synchronized oscillations in networks of inhibitory and excitatory coupled bursting neurons are common in a variety of neural systems from central pattern generators to human brain circuits. One example of the latter is the subcortical network of the basal ganglia, formed by excitatory and inhibitory bursters of the subthalamic nucleus and globus pallidus, involved in motor control and affected in Parkinson's disease. Recent experiments have demonstrated the intermittent nature of the phase-locking of neural activity in this network. Here, we explore one potential mechanism to explain the intermittent phase-locking in a network. We simplify the network to obtain a model of two inhibitory coupled elements and explore its dynamics. We used geometric analysis and singular perturbation methods for dynamical systems to reduce the full model to a simpler set of equations. Mathematical analysis was completed using three slow variables with two different time scales. Intermittently synchronous oscillations are generated by overlapped spiking, which crucially depends on the geometry of the slow phase plane and the interplay between slow variables, as well as the strength of synapses. Two slow variables are responsible for the generation of activity patterns with overlapped spiking, and the other slower variable enhances the robustness of an irregular and intermittent activity pattern. While the analyzed network and the explored mechanism of intermittent synchrony appear to be quite generic, the results of this analysis can be used to trace particular values of biophysical parameters (synaptic strength and parameters of calcium dynamics), which are known to be impacted in Parkinson's disease.

  11. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.

    PubMed

    Lytton, William W; Seidenstein, Alexandra H; Dura-Bernal, Salvador; McDougal, Robert A; Schürmann, Felix; Hines, Michael L

    2016-10-01

    Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of current major research initiatives such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show that simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment.
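
    The layout described here (round-robin gid distribution, gid-based connections, global spike exchange) follows NEURON's documented ParallelContext idiom. Below is a heavily simplified sketch using built-in IntFire1 artificial cells rather than the paper's benchmark networks; cell counts, weights, and delays are illustrative.

```python
# Run with e.g.: mpiexec -n 4 python ring_mpi.py   (requires NEURON built with MPI support)
from neuron import h

h.nrnmpi_init()                                  # initialize MPI before using ParallelContext
pc = h.ParallelContext()
rank, nhost = int(pc.id()), int(pc.nhost())

ncell = 1000
cells, netcons = {}, []
for gid in range(rank, ncell, nhost):            # round-robin distribution of gids over ranks
    pc.set_gid2node(gid, rank)                   # this rank owns this gid
    cell = h.IntFire1()
    cell.tau, cell.refrac = 10, 2
    pc.cell(gid, h.NetCon(cell, None))           # register the cell's spike source under its gid
    cells[gid] = cell

# ring connectivity: each cell receives from its neighbor, which may live on another rank
for gid, cell in cells.items():
    nc = pc.gid_connect((gid + 1) % ncell, cell)
    nc.weight[0], nc.delay = 1.1, 1.0            # suprathreshold weight keeps the ring active
    netcons.append(nc)

# kick the ring once from gid 0
if 0 in cells:
    stim = h.NetStim()
    stim.number, stim.start = 1, 1.0
    kick = h.NetCon(stim, cells[0])
    kick.weight[0], kick.delay = 1.1, 0.0
    netcons.append(kick)

tvec, idvec = h.Vector(), h.Vector()
pc.spike_record(-1, tvec, idvec)                 # record all spikes generated on this rank

pc.set_maxstep(10)                               # maximum interval between MPI spike exchanges (ms)
h.finitialize()
pc.psolve(100)                                   # integrate to 100 ms with spike exchange
print(f"rank {rank}: {int(tvec.size())} spikes")
pc.barrier()
pc.done()
h.quit()
```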

  12. On the Dynamics of the Spontaneous Activity in Neuronal Networks

    PubMed Central

    Bonifazi, Paolo; Ruaro, Maria Elisabetta; Torre, Vincent

    2007-01-01

    Most neuronal networks, even in the absence of external stimuli, produce spontaneous bursts of spikes separated by periods of reduced activity. The origin and functional role of these neuronal events are still unclear. The present work shows that the spontaneous activity of two very different networks, intact leech ganglia and dissociated cultures of rat hippocampal neurons, share several features. Indeed, in both networks: i) the inter-spike intervals distribution of the spontaneous firing of single neurons is either regular or periodic or bursting, with the fraction of bursting neurons depending on the network activity; ii) bursts of spontaneous spikes have the same broad distributions of size and duration; iii) the degree of correlated activity increases with the bin width, and the power spectrum of the network firing rate has a 1/f behavior at low frequencies, indicating the existence of long-range temporal correlations; iv) the activity of excitatory synaptic pathways mediated by NMDA receptors is necessary for the onset of the long-range correlations and for the presence of large bursts; v) blockage of inhibitory synaptic pathways mediated by GABAA receptors causes instead an increase in the correlation among neurons and leads to a burst distribution composed only of very small and very large bursts. These results suggest that the spontaneous electrical activity in neuronal networks with different architectures and functions can have very similar properties and common dynamics. PMID:17502919

  13. Emergence of Slow-Switching Assemblies in Structured Neuronal Networks.

    PubMed

    Schaub, Michael T; Billeh, Yazan N; Anastassiou, Costas A; Koch, Christof; Barahona, Mauricio

    2015-07-01

    Unraveling the interplay between connectivity and spatio-temporal dynamics in neuronal networks is a key step to advance our understanding of neuronal information processing. Here we investigate how particular features of network connectivity underpin the propensity of neural networks to generate slow-switching assembly (SSA) dynamics, i.e., sustained epochs of increased firing within assemblies of neurons which transition slowly between different assemblies throughout the network. We show that the emergence of SSA activity is linked to spectral properties of the asymmetric synaptic weight matrix. In particular, the leading eigenvalues that dictate the slow dynamics exhibit a gap with respect to the bulk of the spectrum, and the associated Schur vectors exhibit a measure of block-localization on groups of neurons, thus resulting in coherent dynamical activity on those groups. Through simple rate models, we gain analytical understanding of the origin and importance of the spectral gap, and use these insights to develop new network topologies with alternative connectivity paradigms which also display SSA activity. Specifically, SSA dynamics involving excitatory and inhibitory neurons can be achieved by modifying the connectivity patterns between both types of neurons. We also show that SSA activity can occur at multiple timescales reflecting a hierarchy in the connectivity, and demonstrate the emergence of SSA in small-world like networks. Our work provides a step towards understanding how network structure (uncovered through advancements in neuroanatomy and connectomics) can impact on spatio-temporal neural activity and constrain the resulting dynamics.
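
    The spectral signature described above (a few leading eigenvalues separated by a gap from the bulk, with modes localized on groups of neurons) can be checked on a toy weight matrix. The paper examines Schur vectors of the nonnormal weight matrix; for brevity the sketch below inspects leading eigenvectors, which show the same block localization in this simple example. Block sizes, densities, and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_blocks = 300, 3
block = np.repeat(np.arange(n_blocks), n // n_blocks)

# asymmetric weight matrix: denser / stronger connections within assemblies than between them
W = np.zeros((n, n))
same = block[:, None] == block[None, :]
W[same] = (rng.random(same.sum()) < 0.30) * rng.normal(0.12, 0.02, same.sum())
W[~same] = (rng.random((~same).sum()) < 0.05) * rng.normal(0.04, 0.01, (~same).sum())
np.fill_diagonal(W, 0.0)

vals, vecs = np.linalg.eig(W)
order = np.argsort(-vals.real)
print("leading eigenvalues:", np.round(vals.real[order[:5]], 2))   # a few outliers, then the bulk

v = np.abs(vecs[:, order[1]])                 # second leading mode
for b in range(n_blocks):                     # localization: most mass sits on a subset of blocks
    print(f"block {b}: mean |v_i| = {v[block == b].mean():.3f}")
```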

  14. Emergence of Slow-Switching Assemblies in Structured Neuronal Networks

    PubMed Central

    Schaub, Michael T.; Billeh, Yazan N.; Anastassiou, Costas A.; Koch, Christof; Barahona, Mauricio

    2015-01-01

    Unraveling the interplay between connectivity and spatio-temporal dynamics in neuronal networks is a key step to advance our understanding of neuronal information processing. Here we investigate how particular features of network connectivity underpin the propensity of neural networks to generate slow-switching assembly (SSA) dynamics, i.e., sustained epochs of increased firing within assemblies of neurons which transition slowly between different assemblies throughout the network. We show that the emergence of SSA activity is linked to spectral properties of the asymmetric synaptic weight matrix. In particular, the leading eigenvalues that dictate the slow dynamics exhibit a gap with respect to the bulk of the spectrum, and the associated Schur vectors exhibit a measure of block-localization on groups of neurons, thus resulting in coherent dynamical activity on those groups. Through simple rate models, we gain analytical understanding of the origin and importance of the spectral gap, and use these insights to develop new network topologies with alternative connectivity paradigms which also display SSA activity. Specifically, SSA dynamics involving excitatory and inhibitory neurons can be achieved by modifying the connectivity patterns between both types of neurons. We also show that SSA activity can occur at multiple timescales reflecting a hierarchy in the connectivity, and demonstrate the emergence of SSA in small-world like networks. Our work provides a step towards understanding how network structure (uncovered through advancements in neuroanatomy and connectomics) can impact on spatio-temporal neural activity and constrain the resulting dynamics. PMID:26176664

  15. Biological Investigations of Adaptive Networks: Neuronal Control of Conditioned Responses

    DTIC Science & Technology

    1989-07-01

    Biological Investigations of Adaptive Networks: Neuronal Control of... based on mathematical models and computer simulation. Recordings were done from single brain stem neurons in awake, behaving animals for the purpose... single-unit recordings from awake behaving animals were developed. The relationship between single neurons' dynamic behavior and the CR in terms of

  16. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    Neuronal avalanche is a spontaneous neuronal activity which obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of network structure, which is finally self-organized to be active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics are propagated as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and also could be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
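
    Avalanche statistics of this kind are typically extracted by binning population spikes and measuring runs of consecutive occupied bins. The sketch below uses a surrogate spike train and a textbook maximum-likelihood exponent estimate; it is not the authors' analysis code, and the surrogate data will not follow a -3/2 power law.

```python
import numpy as np

def avalanche_sizes(spike_times, bin_ms):
    """Sizes (total spikes) of avalanches: runs of non-empty bins bounded by empty bins."""
    edges = np.arange(0.0, spike_times.max() + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins=edges)
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

def powerlaw_exponent(sizes, s_min=1):
    """Continuous-approximation MLE for P(s) ~ s^-alpha (Clauset-style, illustrative)."""
    s = sizes[sizes >= s_min].astype(float)
    return 1.0 + s.size / np.sum(np.log(s / (s_min - 0.5)))

rng = np.random.default_rng(0)
# surrogate population spike train; real sizes would come from MEA or simulated network data
spikes = np.sort(rng.uniform(0, 60_000, 30_000))     # 30k spikes over 60 s (ms units)
sizes = avalanche_sizes(spikes, bin_ms=4.0)
print("n avalanches:", sizes.size, " MLE exponent:", round(powerlaw_exponent(sizes), 2))
```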

  17. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Small, Michael

    2012-06-01

    Neuronal avalanche is a spontaneous neuronal activity which obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of network structure, which is finally self-organized to be active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics are propagated as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and also could be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.

  18. Unraveling a locomotor network, many neurons at a time.

    PubMed

    Brownstone, Robert M; Stifani, Nicolas

    2015-04-08

    In this issue of Neuron, Bruno et al. (2015) use large-scale recordings in Aplysia, and apply novel dimensionality-reduction techniques to define dynamical building blocks involved in locomotor behavior. These techniques open new avenues to the study of neuronal networks.

  19. Invariant imbedding and a matrix integral equation of neuronal networks.

    NASA Technical Reports Server (NTRS)

    Kalaba, R.; Ruspini, E. H.

    1971-01-01

    A matrix Fredholm integral equation of neuronal networks is transformed into a Cauchy system suited for numerical and analytical studies. A special case is discussed, and a connection with the classical renewal integral equation of stochastic point processes is presented.

  20. Delay-induced locking in bursting neuronal networks

    NASA Astrophysics Data System (ADS)

    Zhu, Jinjie; Liu, Xianbin

    2017-08-01

    In this paper, the collective behaviors of ring-structured bursting neuronal networks with electrical couplings and distance-dependent delays are studied. Each neuron is modeled by the Hindmarsh-Rose neuron. Through changing the time delays between connected neurons, different spatiotemporal patterns are obtained. These patterns can be explained by calculating the ratios between the bursting period and the delay, which exhibit clear locking relations. The persistence and breakdown of these lockings are investigated via bifurcation analysis. In particular, the bursting death phenomenon is observed for large coupling strengths and small time delays, which is in fact the result of partial amplitude death in the fast subsystem. These results indicate that the collective behaviors of bursting neurons critically depend on the bifurcation structure of individual ones, and thus the variety of bifurcation types for bursting neurons may create diverse behaviors in similar neuronal ensembles.
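
    For reference, the Hindmarsh-Rose model used for each node can be written down and integrated directly. The sketch below simulates a single neuron with the standard bursting parameter set; the electrical coupling and distance-dependent delay terms studied in the paper are omitted.

```python
import numpy as np

def hindmarsh_rose(T=2000.0, dt=0.01, I=3.2):
    """Euler integration of a single Hindmarsh-Rose neuron in its bursting regime."""
    a, b, c, d = 1.0, 3.0, 1.0, 5.0          # fast-subsystem parameters
    r, s, x_r = 0.006, 4.0, -1.6             # slow adaptation parameters
    n = int(T / dt)
    x, y, z = -1.6, 0.0, 2.0
    xs = np.empty(n)
    for i in range(n):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_r) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

v = hindmarsh_rose()
# crude spike count from upward threshold crossings of the membrane variable
crossings = np.flatnonzero((v[:-1] < 0.0) & (v[1:] >= 0.0))
print("spikes in the simulated window:", crossings.size)
```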

  1. Qualitative-Modeling-Based Silicon Neurons and Their Networks.

    PubMed

    Kohno, Takashi; Sekikawa, Munehisa; Li, Jing; Nanami, Takuya; Aihara, Kazuyuki

    2016-01-01

    The ionic conductance models of neuronal cells can finely reproduce a wide variety of complex neuronal activities. However, the complexity of these models has prompted the development of qualitative neuron models. They are described by differential equations with a reduced number of variables and their low-dimensional polynomials, which retain the core mathematical structures. Such simple models form the foundation of a bottom-up approach in computational and theoretical neuroscience. We proposed a qualitative-modeling-based approach for designing silicon neuron circuits, in which the mathematical structures in the polynomial-based qualitative models are reproduced by differential equations with silicon-native expressions. This approach can realize low-power-consuming circuits that can be configured to realize various classes of neuronal cells. In this article, our qualitative-modeling-based silicon neuron circuits for analog and digital implementations are quickly reviewed. One of our CMOS analog silicon neuron circuits can realize a variety of neuronal activities with a power consumption less than 72 nW. The square-wave bursting mode of this circuit is explained. Another circuit can realize Class I and II neuronal activities with about 3 nW. Our digital silicon neuron circuit can also realize these classes. An auto-associative memory realized on an all-to-all connected network of these silicon neurons is also reviewed, in which the neuron class plays important roles in its performance.

  2. Qualitative-Modeling-Based Silicon Neurons and Their Networks

    PubMed Central

    Kohno, Takashi; Sekikawa, Munehisa; Li, Jing; Nanami, Takuya; Aihara, Kazuyuki

    2016-01-01

    The ionic conductance models of neuronal cells can finely reproduce a wide variety of complex neuronal activities. However, the complexity of these models has prompted the development of qualitative neuron models. They are described by differential equations with a reduced number of variables and their low-dimensional polynomials, which retain the core mathematical structures. Such simple models form the foundation of a bottom-up approach in computational and theoretical neuroscience. We proposed a qualitative-modeling-based approach for designing silicon neuron circuits, in which the mathematical structures in the polynomial-based qualitative models are reproduced by differential equations with silicon-native expressions. This approach can realize low-power-consuming circuits that can be configured to realize various classes of neuronal cells. In this article, our qualitative-modeling-based silicon neuron circuits for analog and digital implementations are quickly reviewed. One of our CMOS analog silicon neuron circuits can realize a variety of neuronal activities with a power consumption less than 72 nW. The square-wave bursting mode of this circuit is explained. Another circuit can realize Class I and II neuronal activities with about 3 nW. Our digital silicon neuron circuit can also realize these classes. An auto-associative memory realized on an all-to-all connected network of these silicon neurons is also reviewed, in which the neuron class plays important roles in its performance. PMID:27378842

  3. Complete classification of the macroscopic behavior of a heterogeneous network of theta neurons.

    PubMed

    Luke, Tanushree B; Barreto, Ernest; So, Paul

    2013-12-01

    We design and analyze the dynamics of a large network of theta neurons, which are idealized type I neurons. The network is heterogeneous in that it includes both inherently spiking and excitable neurons. The coupling is global, via pulselike synapses of adjustable sharpness. Using recently developed analytical methods, we identify all possible asymptotic states that can be exhibited by a mean field variable that captures the network's macroscopic state. These consist of two equilibrium states that reflect partial synchronization in the network and a limit cycle state in which the degree of network synchronization oscillates in time. Our approach also permits a complete bifurcation analysis, which we carry out with respect to parameters that capture the degree of excitability of the neurons, the heterogeneity in the population, and the coupling strength (which can be excitatory or inhibitory). We find that the network typically tends toward the two macroscopic equilibrium states when the neuron's intrinsic dynamics and the network interactions reinforce one another. In contrast, the limit cycle state, bifurcations, and multistability tend to occur when there is competition among these network features. Finally, we show that our results are exhibited by finite network realizations of reasonable size.
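
    A minimal sketch of a globally coupled, heterogeneous theta-neuron network of the kind analyzed here, with a Kuramoto-style order parameter as the macroscopic variable; the pulse shape, coupling strength, and excitability distribution are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 1000, 0.01, 50.0
# Lorentzian-distributed excitabilities: mixture of excitable (eta < 0) and spiking (eta > 0) cells;
# tails are clipped only to keep the crude Euler step well behaved
eta = np.clip(0.2 + 0.3 * rng.standard_cauchy(N), -5.0, 5.0)
k, sharp = 1.0, 2                              # coupling strength and synaptic pulse sharpness

theta = rng.uniform(-np.pi, np.pi, N)
order = []
for _ in range(int(T / dt)):
    pulse = np.mean((1.0 - np.cos(theta)) ** sharp)                # pulselike synaptic output
    dtheta = (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * (eta + k * pulse)
    theta += dt * dtheta
    order.append(np.abs(np.mean(np.exp(1j * theta))))              # macroscopic synchrony measure

print("final degree of synchrony |z| =", round(order[-1], 3))
```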

  4. Library-based numerical reduction of the Hodgkin-Huxley neuron for network simulation.

    PubMed

    Sun, Yi; Zhou, Douglas; Rangan, Aaditya V; Cai, David

    2009-12-01

    We present an efficient library-based numerical method for simulating the Hodgkin-Huxley (HH) neuronal networks. The key components in our numerical method involve (i) a pre-computed high resolution data library which contains typical neuronal trajectories (i.e., the time-courses of membrane potential and gating variables) during the interval of an action potential (spike), thus allowing us to avoid resolving the spikes in detail and to use large numerical time steps for evolving the HH neuron equations; (ii) an algorithm of spike-spike corrections within the groups of strongly coupled neurons to account for spike-spike interactions in a single large time step. By using the library method, we can evolve the HH networks using time steps one order of magnitude larger than the typical time steps used for resolving the trajectories without the library, while achieving comparable resolution in statistical quantifications of the network activity, such as average firing rate, interspike interval distribution, power spectra of voltage traces. Moreover, our large time steps using the library method can break the stability requirement of standard methods (such as Runge-Kutta (RK) methods) for the original dynamics. We compare our library-based method with RK methods, and find that our method can capture very well phase-locked, synchronous, and chaotic dynamics of HH neuronal networks. It is important to point out that, in essence, our library-based HH neuron solver can be viewed as a numerical reduction of the HH neuron to an integrate-and-fire (I&F) neuronal representation that does not sacrifice the gating dynamics (as normally done in the analytical reduction to an I&F neuron).

  5. Effect of methylprednisolone on mammalian neuronal networks in vitro.

    PubMed

    Wittstock, Matthias; Rommer, Paulus S; Schiffmann, Florian; Jügelt, Konstantin; Stüwe, Simone; Benecke, Reiner; Schiffmann, Dietmar; Zettl, Uwe K

    2015-01-01

    Glucocorticosteroids (GCS) are widely used for the treatment of neurological diseases, e.g. multiple sclerosis. High levels of GCS are toxic to the central nervous system and can produce adverse effects. The effect of methylprednisolone (MP) on mammalian neuronal networks was studied in vitro. We demonstrate a dose-dependent excitatory effect of MP on cultured neuronal networks, followed by a shut-down of electrical activity using the microelectrode array technique.

  6. Mapping Generative Models onto a Network of Digital Spiking Neurons.

    PubMed

    Pedroni, Bruno U; Das, Srinjoy; Arthur, John V; Merolla, Paul A; Jackson, Bryan L; Modha, Dharmendra S; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2016-05-18

    Stochastic neural networks such as Restricted Boltzmann Machines (RBMs) have been successfully used in applications ranging from speech recognition to image classification, and are particularly interesting because of their potential for generative tasks. Inference and learning in these algorithms use a Markov Chain Monte Carlo procedure called Gibbs sampling, where a logistic function forms the kernel of this sampler. On the other side of the spectrum, neuromorphic systems have shown great promise for low-power and parallelized cognitive computing, but lack well-suited applications and automation procedures. In this work, we propose a systematic method for bridging the RBM algorithm and digital neuromorphic systems, with a generative pattern completion task as proof of concept. For this, we first propose a method of producing the Gibbs sampler using bio-inspired digital noisy integrate-and-fire neurons. Next, we describe the process of mapping generative RBMs trained offline onto the IBM TrueNorth neurosynaptic processor, a low-power digital neuromorphic VLSI substrate. Mapping these algorithms onto neuromorphic hardware presents unique challenges in network connectivity and weight and bias quantization, which, in turn, require architectural and design strategies for the physical realization. Generative performance is analyzed to validate the neuromorphic requirements and to best select the neuron parameters for the model. Lastly, we describe a design automation procedure which achieves optimal resource usage, accounting for the novel hardware adaptations. This work represents the first implementation of generative RBM inference on a neuromorphic VLSI substrate.
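
    The Gibbs sampling kernel being mapped to spiking neurons is the standard block update of an RBM. The plain numpy sketch below runs one sampling chain with stand-in (untrained) weights; on the neuromorphic substrate it is the logistic sampling step that gets approximated by noisy integrate-and-fire neurons, per the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 64, 32
W = 0.1 * rng.standard_normal((n_visible, n_hidden))   # stand-in for weights trained offline
b = np.zeros(n_visible)                                 # visible biases
c = np.zeros(n_hidden)                                  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One block Gibbs update: sample hidden given visible, then visible given hidden."""
    h = (rng.random(n_hidden) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random(n_visible) < sigmoid(W @ h + b)).astype(float)
    return v, h

# pattern completion: clamp the known half of the visible units and resample the rest
target = (rng.random(n_visible) < 0.5).astype(float)
v = target.copy()
v[n_visible // 2:] = rng.random(n_visible // 2) < 0.5   # corrupt the second half
clamp = np.arange(n_visible // 2)
for _ in range(200):
    v, h = gibbs_step(v)
    v[clamp] = target[clamp]                             # keep the clamped units fixed
print("overlap with target:", float(np.mean(v == target)))
```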

  7. Mapping Generative Models onto a Network of Digital Spiking Neurons.

    PubMed

    Pedroni, Bruno U; Das, Srinjoy; Arthur, John V; Merolla, Paul A; Jackson, Bryan L; Modha, Dharmendra S; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2016-08-01

    Stochastic neural networks such as Restricted Boltzmann Machines (RBMs) have been successfully used in applications ranging from speech recognition to image classification, and are particularly interesting because of their potential for generative tasks. Inference and learning in these algorithms use a Markov Chain Monte Carlo procedure called Gibbs sampling, where a logistic function forms the kernel of this sampler. On the other side of the spectrum, neuromorphic systems have shown great promise for low-power and parallelized cognitive computing, but lack well-suited applications and automation procedures. In this work, we propose a systematic method for bridging the RBM algorithm and digital neuromorphic systems, with a generative pattern completion task as proof of concept. For this, we first propose a method of producing the Gibbs sampler using bio-inspired digital noisy integrate-and-fire neurons. Next, we describe the process of mapping generative RBMs trained offline onto the IBM TrueNorth neurosynaptic processor, a low-power digital neuromorphic VLSI substrate. Mapping these algorithms onto neuromorphic hardware presents unique challenges in network connectivity and weight and bias quantization, which, in turn, require architectural and design strategies for the physical realization. Generative performance is analyzed to validate the neuromorphic requirements and to best select the neuron parameters for the model. Lastly, we describe a design automation procedure which achieves optimal resource usage, accounting for the novel hardware adaptations. This work represents the first implementation of generative RBM inference on a neuromorphic VLSI substrate.

  8. An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons.

    PubMed

    Li, Jing; Katori, Yuichi; Kohno, Takashi

    2012-01-01

    This paper presents a digital silicon neuronal network which simulates the nervous system of living creatures and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network with 256 fully connected neurons is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish the task of data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases as the similarity between the input and the stored pattern increases. Synchronization of neurons is observed when successful stored pattern retrieval occurs.

  9. An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons

    PubMed Central

    Li, Jing; Katori, Yuichi; Kohno, Takashi

    2012-01-01

    This paper presents a digital silicon neuronal network which simulates the nervous system of living creatures and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network with 256 fully connected neurons is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish the task of data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases as the similarity between the input and the stored pattern increases. Synchronization of neurons is observed when successful stored pattern retrieval occurs. PMID:23269911

  10. Synergy and redundancy in timescale dependent multiplex networks of hippocampal and cortical neurons

    NASA Astrophysics Data System (ADS)

    Timme, Nicholas; Ito, Shinya; Myroshnychenko, Maxym; Yeh, Fang-Chin; Hiolski, Emma; Litke, Alan; Beggs, John

    2015-03-01

    Understanding the types of computations small groups of neurons perform is of great importance in neuroscience. To investigate these computations, we used tools from information theory (transfer entropy and the partial information decomposition) to study information processing in time scale dependent effective connectivity networks (i.e. multiplex neural networks). These networks were derived from the spiking activity of thousands of neurons recorded from 60 cortico-hippocampal slice cultures using a high density 512-electrode array with 60 μm inter-electrode spacing and 50 μs temporal resolution. To the best of our knowledge, this preparation and recording method represents a combination of the number of recorded neurons and temporal and spatial recording resolutions that is not currently available in any in vivo recording system. We found that neurons that received many connections tended not to process as much information as neurons that received few connections, but neurons that sent out many connections tended to process more information than neurons that sent out few connections. Also, for slow interactions, we found that neurons that were physically distant tended to participate in more interesting computations than neurons that were more proximally located. NSF Grants 090813 (JMB), 1058291 (JMB), and IIS-0904413 (A.L.).
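
    Transfer entropy between binarized spike trains, the first tool mentioned above, can be estimated from joint pattern counts. A minimal sketch with a one-step history follows; real analyses use longer histories, multiple delays, and significance testing, and the two toy spike trains below are invented for illustration.

```python
import numpy as np

def transfer_entropy(source, target):
    """TE (bits) from source to target for binary time series, one-step history."""
    x_next, x_past, y_past = target[1:], target[:-1], source[:-1]
    te = 0.0
    for xn in (0, 1):
        for xp in (0, 1):
            for yp in (0, 1):
                p_xyz = np.mean((x_next == xn) & (x_past == xp) & (y_past == yp))
                if p_xyz == 0:
                    continue
                p_cond_full = p_xyz / np.mean((x_past == xp) & (y_past == yp))
                p_cond_self = (np.mean((x_next == xn) & (x_past == xp))
                               / np.mean(x_past == xp))
                te += p_xyz * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
src = (rng.random(20_000) < 0.2).astype(int)
driven = np.roll(src, 1)                                           # target follows source with a 1-step lag
driven = np.where(rng.random(20_000) < 0.1, 1 - driven, driven)    # plus some noise
print("TE source->target:", round(transfer_entropy(src, driven), 3))
print("TE target->source:", round(transfer_entropy(driven, src), 3))
```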

  11. Manifestation of function-follow-form in cultured neuronal networks.

    PubMed

    Volman, Vladislav; Baruchi, Itay; Ben-Jacob, Eshel

    2005-06-01

    We expose hidden function-follow-form schemata in the recorded activity of cultured neuronal networks by comparing the activity with simulation results of a new modeling approach. Cultured networks grown from an arbitrary mixture of neuron and glia cells in the absence of external stimulations and chemical cues spontaneously form networks of different sizes (from 50 to several millions of neurons) that exhibit non-arbitrary complex spatio-temporal patterns of activity. The latter is marked by formation of a sequence of synchronized bursting events (SBEs)--short time windows (approximately 200 ms) of rapid neuron firing, separated by longer time intervals (seconds) of sporadic neuron firing. The new dynamical synapse and soma (DSS) model, used here, has been successful in generating sequences of SBEs with the same statistical scaling properties (over six time decades) as those of the small networks. Large networks generate statistically distinct sub-groups of SBEs, each with its own characteristic pattern of neuronal firing ('fingerprint'). This special function (activity) motif has been proposed to emanate from a structural (form) motif--self-organization of the large networks into a fabric of overlapping sub-networks of about 1 mm in size. Here we test this function-follow-form idea by investigating the influence of the connectivity architecture of a model network (form) on the structure of its spontaneous activity (function). We show that a repertoire of possible activity states similar to the observed ones can be generated by networks with proper underlying architecture. For example, networks composed of two overlapping sub-networks exhibit distinct types of SBEs, each with its own characteristic pattern of neuron activity that starts at a specific sub-network. We further show that it is possible to regulate the temporal appearance of the different sub-groups of SBEs by an additional non-synaptic current fed into the soma of the modeled neurons. The ability to

  12. GABAergic hub neurons orchestrate synchrony in developing hippocampal networks.

    PubMed

    Bonifazi, P; Goldin, M; Picardo, M A; Jorquera, I; Cattani, A; Bianconi, G; Represa, A; Ben-Ari, Y; Cossart, R

    2009-12-04

    Brain function operates through the coordinated activation of neuronal assemblies. Graph theory predicts that scale-free topologies, which include "hubs" (superconnected nodes), are an effective design to orchestrate synchronization. Whether hubs are present in neuronal assemblies and coordinate network activity remains unknown. Using network dynamics imaging, online reconstruction of functional connectivity, and targeted whole-cell recordings in rats and mice, we found that developing hippocampal networks follow a scale-free topology, and we demonstrated the existence of functional hubs. Perturbation of a single hub influenced the entire network dynamics. Morphophysiological analysis revealed that hub cells are a subpopulation of gamma-aminobutyric acid-releasing (GABAergic) interneurons possessing widespread axonal arborizations. These findings establish a central role for GABAergic interneurons in shaping developing networks and help provide a conceptual framework for studying neuronal synchrony.

  13. Developing neuronal networks: self-organized criticality predicts the future.

    PubMed

    Pu, Jiangbo; Gong, Hui; Li, Xiangning; Luo, Qingming

    2013-01-01

    Self-organized criticality emerging in neural activity is one of the key concepts used to describe the formation and function of developing neuronal networks. The relationship between critical dynamics and neural development is both theoretically and experimentally appealing. However, whereas it is well known that cortical networks exhibit a rich repertoire of activity patterns at different stages during in vitro maturation, how dynamical activity patterns evolve over the entire course of neural development still remains unclear. Here we show that a series of metastable network states emerged in the developing and "aging" process of hippocampal networks cultured from dissociated rat neurons. The unidirectional sequence of state transitions could only be observed in networks showing power-law scaling of distributed neuronal avalanches. Our data suggest that self-organized criticality may guide spontaneous activity into a sequential succession of homeostatically-regulated transient patterns during development, which may help to predict the course of neural development at early ages.
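
    A minimal sketch of the avalanche statistics underlying such analyses is given below: avalanches are read off as runs of non-empty time bins, and the size distribution is inspected on log-log axes. The Poisson surrogate data are an assumption for illustration and are not actually critical; a real criticality analysis would use maximum-likelihood power-law fits and compare the exponent against the -3/2 benchmark.

```python
import numpy as np

def avalanche_sizes(binned_counts):
    """An avalanche is a run of non-empty bins bounded by empty bins;
    its size is the total spike count within the run."""
    sizes, current = [], 0
    for c in binned_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

rng = np.random.default_rng(1)
counts = rng.poisson(0.9, size=200_000)      # toy stand-in for binned network activity
sizes = avalanche_sizes(counts)
vals, freq = np.unique(sizes, return_counts=True)
slope = np.polyfit(np.log10(vals), np.log10(freq / freq.sum()), 1)[0]
print(f"{len(sizes)} avalanches detected; log-log slope ~ {slope:.2f}")
```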

  14. Cluster synchronization in networks of neurons with chemical synapses

    SciTech Connect

    Juang, Jonq; Liang, Yu-Hao

    2014-03-15

    In this work, we study the cluster synchronization of chemically coupled, generally formulated networks that are allowed to be nonidentical. A sufficient condition for the existence of stably synchronous clusters is derived. Specifically, we only need to check the stability of the origins of m decoupled linear systems, where m is the number of subpopulations. Examples of nonidentical networks are provided, such as Hindmarsh-Rose (HR) neurons with various choices of parameters in different subpopulations, or HR neurons in one subpopulation and FitzHugh-Nagumo neurons in the other subpopulation. An explicit threshold for the coupling strength that guarantees stable cluster synchronization can be obtained.

  15. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks

    PubMed Central

    Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370

  16. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks.

    PubMed

    Burroni, Javier; Taylor, P; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications.
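
    A minimal sketch of the energy-pool idea (not the published SNEP model itself) is shown below: a leaky integrate-and-fire neuron may only spike when a local resource pool holds enough energy, the pool is depleted per spike and regenerates slowly, and firing becomes energy-limited. All parameter names and values are illustrative assumptions.

```python
import numpy as np

dt, T = 0.1, 2000.0                     # ms
tau_m, v_th, v_reset = 20.0, 1.0, 0.0   # leaky integrate-and-fire parameters
e_max, e_cost, tau_e = 5.0, 1.0, 300.0  # energy pool: capacity, cost per spike, recovery
I_ext = 0.08                            # constant drive (suprathreshold without energy limits)

v, e, spikes = 0.0, e_max, []
for step in range(int(T / dt)):
    t = step * dt
    v += dt * (-v / tau_m + I_ext)
    e += dt * (e_max - e) / tau_e       # pool regenerates toward its capacity
    if v >= v_th and e >= e_cost:       # spiking requires available energy
        spikes.append(t)
        v = v_reset
        e -= e_cost
    elif v >= v_th:                     # energy-starved: held at threshold, no spike
        v = v_th

isis = np.diff(spikes)
print(f"{len(spikes)} spikes, mean ISI {isis.mean():.1f} ms "
      "(the pool throttles firing well below the energy-free rate)")
```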

  17. Ordering spatiotemporal chaos in complex thermosensitive neuron networks

    NASA Astrophysics Data System (ADS)

    Gong, Yubing; Xu, Bo; Xu, Qiang; Yang, Chuanlu; Ren, Tingqi; Hou, Zhonghuai; Xin, Houwen

    2006-04-01

    We have studied the effect of random long-range connections in chaotic thermosensitive neuron networks with each neuron being capable of exhibiting diverse bursting behaviors, and found stochastic synchronization and optimal spatiotemporal patterns. For a given coupling strength, the chaotic burst-firings of the neurons become more and more synchronized as the number of random connections (or randomness) is increased, and the most pronounced spatiotemporal pattern appears at an optimal randomness. As the coupling strength is increased, the optimal randomness shifts towards smaller values. This result shows that random long-range connections can tame the chaos in these neural networks and enable the neurons to reach synchronization more effectively. Since the model studied can be used to account for hypothalamic neurons of dogfish, catfish, etc., this result may reflect the significant role of random connections in transferring biological information.

  18. Small-world networks in neuronal populations: a computational perspective.

    PubMed

    Zippo, Antonio G; Gelsomino, Giuliana; Van Duin, Pieter; Nencini, Sara; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E M

    2013-08-01

    The analysis of the brain in terms of integrated neural networks may offer insights on the reciprocal relation between structure and information processing. Even with inherent technical limits, many studies acknowledge neuron spatial arrangements and communication modes as key factors. In this perspective, we investigated the functional organization of neuronal networks by explicitly assuming a specific functional topology, the small-world network. We developed two different computational approaches. Firstly, we asked whether neuronal populations actually express small-world properties during a definite task, such as a learning task. For this purpose we developed the Inductive Conceptual Network (ICN), which is a hierarchical bio-inspired spiking network, capable of learning invariant patterns by using variable-order Markov models implemented in its nodes. As a result, we actually observed small-world topologies during learning in the ICN. Speculating that the expression of small-world networks is not solely related to learning tasks, we then built a de facto network assuming that the information processing in the brain may occur through functional small-world topologies. In this de facto network, synchronous spikes reflected functional small-world network dependencies. In order to verify the consistency of the assumption, we tested the null-hypothesis by replacing the small-world networks with random networks. As a result, only small world networks exhibited functional biomimetic characteristics such as timing and rate codes, conventional coding strategies and neuronal avalanches, which are cascades of bursting activities with a power-law distribution. Our results suggest that small-world functional configurations are liable to underpin brain information processing at neuronal level.
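
    The small-world property assumed in such studies is commonly diagnosed by comparing clustering and characteristic path length against a size-matched random graph; a minimal sketch with networkx is given below, with graph size and rewiring probability chosen purely for illustration.

```python
import networkx as nx

def giant_path_length(g):
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(giant)

n, k, p = 200, 8, 0.1
sw = nx.watts_strogatz_graph(n, k, p, seed=0)               # clustered ring + shortcuts
rnd = nx.gnm_random_graph(n, sw.number_of_edges(), seed=0)  # size-matched random graph

for name, g in [("small-world", sw), ("random", rnd)]:
    print(f"{name:12s} clustering C = {nx.average_clustering(g):.3f}   "
          f"path length L = {giant_path_length(g):.3f}")
```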

  19. Reciprocal cholinergic and GABAergic modulation of the small ventrolateral pacemaker neurons of Drosophila's circadian clock neuron network.

    PubMed

    Lelito, Katherine R; Shafer, Orie T

    2012-04-01

    The relatively simple clock neuron network of Drosophila is a valuable model system for the neuronal basis of circadian timekeeping. Unfortunately, many key neuronal classes of this network are inaccessible to electrophysiological analysis. We have therefore adopted the use of genetically encoded sensors to address the physiology of the fly's circadian clock network. Using genetically encoded Ca(2+) and cAMP sensors, we have investigated the physiological responses of two specific classes of clock neuron, the large and small ventrolateral neurons (l- and s-LN(v)s), to two neurotransmitters implicated in their modulation: acetylcholine (ACh) and γ-aminobutyric acid (GABA). Live imaging of l-LN(v) cAMP and Ca(2+) dynamics in response to cholinergic agonist and GABA application were well aligned with published electrophysiological data, indicating that our sensors were capable of faithfully reporting acute physiological responses to these transmitters within single adult clock neuron soma. We extended these live imaging methods to s-LN(v)s, critical neuronal pacemakers whose physiological properties in the adult brain are largely unknown. Our s-LN(v) experiments revealed the predicted excitatory responses to bath-applied cholinergic agonists and the predicted inhibitory effects of GABA and established that the antagonism of ACh and GABA extends to their effects on cAMP signaling. These data support recently published but physiologically untested models of s-LN(v) modulation and lead to the prediction that cholinergic and GABAergic inputs to s-LN(v)s will have opposing effects on the phase and/or period of the molecular clock within these critical pacemaker neurons.

  20. Two networks of electrically coupled inhibitory neurons in neocortex

    NASA Astrophysics Data System (ADS)

    Gibson, Jay R.; Beierlein, Michael; Connors, Barry W.

    1999-11-01

    Inhibitory interneurons are critical to sensory transformations, plasticity and synchronous activity in the neocortex. There are many types of inhibitory neurons, but their synaptic organization is poorly understood. Here we describe two functionally distinct inhibitory networks comprising either fast-spiking (FS) or low-threshold spiking (LTS) neurons. Paired-cell recordings showed that inhibitory neurons of the same type were strongly interconnected by electrical synapses, but electrical synapses between different inhibitory cell types were rare. The electrical synapses were strong enough to synchronize spikes in coupled interneurons. Inhibitory chemical synapses were also common between FS cells, and between FS and LTS cells, but LTS cells rarely inhibited one another. Thalamocortical synapses, which convey sensory information to the cortex, specifically and strongly excited only the FS cell network. The electrical and chemical synaptic connections of different types of inhibitory neurons are specific, and may allow each inhibitory network to function independently.

  1. Network activity of mirror neurons depends on experience.

    PubMed

    Ushakov, Vadim L; Kartashov, Sergey I; Zavyalova, Victoria V; Bezverhiy, Denis D; Posichanyuk, Vladimir I; Terentev, Vasliliy N; Anokhin, Konstantin V

    2013-03-01

    In this work, we investigated how the network activity of mirror neuron systems in the animal brain depends on experience (whether or not the animal has itself performed the observed actions). The mirror neuron network of C57/BL6 mice was studied in an observation task in which the mice watched demonstrator mice swimming in a Morris water maze. Mirror neuron system activity was found in the motor cortices M1 and M2, the cingulate cortex, and the hippocampus, both in groups of mice with prior swimming experience and in groups without it. We conclude that mirror neuron systems may enable the formation of new functional network systems and the acquisition of new knowledge through observation by animals in non-specific tasks.

  2. Effects of extracellular potassium diffusion on electrically coupled neuron networks

    NASA Astrophysics Data System (ADS)

    Wu, Xing-Xing; Shuai, Jianwei

    2015-02-01

    Potassium accumulation and diffusion during neuronal epileptiform activity have been observed experimentally, and potassium lateral diffusion has been suggested to play an important role in nonsynaptic neuron networks. We adopt a hippocampal CA1 pyramidal neuron network in a zero-calcium condition to better understand the influence of extracellular potassium dynamics on the stimulus-induced activity. The potassium concentration in the interstitial space for each neuron is regulated by potassium currents, Na+-K+ pumps, glial buffering, and ion diffusion. In addition to potassium diffusion, nearby neurons are also coupled through gap junctions. Our results reveal that the latency of the first spike responding to stimulus monotonically decreases with increasing gap-junction conductance but is insensitive to potassium diffusive coupling. The duration of network oscillations shows a bell-like shape with increasing potassium diffusive coupling at weak gap-junction coupling. For modest electrical coupling, there is an optimal K+ diffusion strength, at which the flow of potassium ions among the network neurons appropriately modulates interstitial potassium concentrations in a degree that provides the most favorable environment for the generation and continuance of the action potential waves in the network.

  3. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    PubMed

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not

  4. Orientation Selectivity in Inhibition-Dominated Networks of Spiking Neurons: Effect of Single Neuron Properties and Network Dynamics

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not
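
    A minimal sketch of the mean-firing-rate picture is given below: the rectified-linear rate equation r = [Wr + h]_+ is solved by damped fixed-point iteration for an inhibition-dominated random network driven by a weakly orientation-tuned input, and the tuned modulation of input and output is compared. Network size, weights, and tuning depth are illustrative assumptions rather than the paper's parameters, and the amplification seen in a single small random instance is modest and noisy.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_exc = 400, 320
w = np.zeros((n, n))
w[rng.random((n, n)) < 0.1] = 0.04           # sparse random excitatory weights
w[:, n_exc:] *= -8.0                         # inhibitory columns dominate
np.fill_diagonal(w, 0.0)

theta = np.linspace(0.0, np.pi, n, endpoint=False)   # preferred orientations
stim = np.pi / 3
h = 10.0 * (1.0 + 0.1 * np.cos(2 * (theta - stim)))  # weakly tuned (10%) feedforward input

r = np.zeros(n)
for _ in range(3000):                        # damped fixed-point iteration of r = [Wr + h]_+
    r = 0.8 * r + 0.2 * np.maximum(w @ r + h, 0.0)

def tuned_modulation(x):
    """Relative amplitude of the orientation-tuned (second-harmonic) component."""
    return 2.0 * abs(np.mean(x * np.exp(-2j * theta))) / np.mean(x)

print("input modulation :", round(tuned_modulation(h), 3))
print("output modulation:", round(tuned_modulation(r), 3))
```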

  5. Spiking neural networks for cortical neuronal spike train decoding.

    PubMed

    Fang, Huijuan; Wang, Yongji; He, Jiping

    2010-04-01

    Recent investigation of cortical coding and computation indicates that temporal coding is probably a more biologically plausible scheme used by neurons than the rate coding used commonly in most published work. We propose and demonstrate in this letter that spiking neural networks (SNN), consisting of spiking neurons that propagate information by the timing of spikes, are a better alternative to the coding scheme based on spike frequency (histogram) alone. The SNN model analyzes cortical neural spike trains directly, without losing temporal information, to generate more reliable motor commands for cortically controlled prosthetics. In this letter, we compared the temporal pattern classification result from the SNN approach with results generated from firing-rate-based approaches: conventional artificial neural networks, support vector machines, and linear regression. The results show that the SNN algorithm can achieve higher classification accuracy and identify the spiking activity related to movement control earlier than the other methods. Both are desirable characteristics for fast neural information processing and reliable control command pattern recognition for neuroprosthetic applications.

  6. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  7. Constructing Neuronal Network Models in Massively Parallel Environments

    PubMed Central

    Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808

  8. The phases of small networks of chemical reactors and neurons

    PubMed

    Schinor; Schneider

    2000-07-15

    We present an experimental study of the phase relationships observed in small reactor networks consisting of two and three continuous flow stirred tank reactors. In the three-reactor network one chemical oscillator is coupled to two other reactors in parallel in analogy to a small neural net. Each reactor contains an identical reaction mixture of the excitable Belousov-Zhabotinsky reaction which is characterized by its bifurcation diagram, where the electrical current is the bifurcation parameter. Coupling between the reactors is electrical via Pt-working electrodes and it can be either repulsive (inhibitory) or attractive (excitatory). An external electrical stimulus is applied to all three reactors in the form of an asymmetric electrical current pulse which sweeps across the bifurcation diagram. As a consequence, all three reactors oscillate with characteristic oscillation patterns or remain silent in analogy to the firing of neurons. The observed phase behavior depends on the type of coupling in a complex way. This situation is analogous to the in vivo measurements on single neurons (local neurons and projection neurons) performed by G. Laurent and co-workers on the olfactory system of the locust. We propose a simple neural network similar to the reactor network using the Hodgkin-Huxley model to simulate the action potentials of the coupled single neurons. Analogies between the reactor network and the neural network are discussed.

  9. Analysis and application of neuronal network controllability and observability

    NASA Astrophysics Data System (ADS)

    Su, Fei; Wang, Jiang; Li, Huiyan; Deng, Bin; Yu, Haitao; Liu, Chen

    2017-02-01

    Controllability and observability analyses are important prerequisites for designing suitable neural control strategies, as they can help lower the effort required to control and observe the system dynamics. First, 3-neuron motifs including the excitatory motif, the inhibitory motif, and the mixed motif are constructed to investigate the effects of single-neuron and synaptic dynamics on network controllability (observability). Simulation results demonstrate that for networks with the same topological structure, the controllability (observability) of a node always changes if the properties of the neurons and the synaptic coupling strengths vary. Moreover, inhibitory networks are more controllable (observable) than excitatory networks when the coupling strengths are the same. Then, the numerically determined controllability results of 3-neuron excitatory motifs are generalized to the desynchronization control of a modular motif network. The control energy and a neuronal synchrony measure are used as indexes to quantify the controllability of each node in the modular network. The best driver node obtained in this way is the same as the one deduced from motif analysis.

  10. Mild hypoxia affects synaptic connectivity in cultured neuronal networks.

    PubMed

    Hofmeijer, Jeannette; Mulder, Alex T B; Farinha, Ana C; van Putten, Michel J A M; le Feber, Joost

    2014-04-04

    Eighty percent of patients with chronic mild cerebral ischemia/hypoxia resulting from chronic heart failure or pulmonary disease have cognitive impairment. Overt structural neuronal damage is lacking and the precise cause of neuronal damage is unclear. As almost half of the cerebral energy consumption is used for synaptic transmission, and synaptic failure is the first abrupt consequence of acute complete anoxia, synaptic dysfunction is a candidate mechanism for the cognitive deterioration in chronic mild ischemia/hypoxia. Because measurement of synaptic functioning in patients is problematic, we use cultured networks of cortical neurons from newborn rats, grown over a multi-electrode array, as a model system. These were exposed to partial hypoxia (partial oxygen pressure lowered from 150 Torr to 40-50 Torr) for 3 (n=14) or 6 (n=8) hours. Synaptic functioning was assessed before, during, and after hypoxia by assessment of spontaneous network activity, functional connectivity, and synaptically driven network responses to electrical stimulation. Action potential heights and shapes and non-synaptic stimulus responses were used as measures of individual neuronal integrity. During 3 and 6 h of hypoxia, there was a statistically significant decrease of spontaneous network activity, functional connectivity, and synaptically driven network responses, whereas direct responses and action potentials remained unchanged. These changes were largely reversible. Our results indicate that in cultured neuronal networks, partial hypoxia for 3 or 6 h causes isolated disturbances of synaptic connectivity.

  11. Interplay between excitability type and distributions of neuronal connectivity determines neuronal network synchronization.

    PubMed

    Mofakham, Sima; Fink, Christian G; Booth, Victoria; Zochowski, Michal R

    2016-10-01

    While the interplay between neuronal excitability properties and global properties of network topology is known to affect network propensity for synchronization, it is not clear how detailed characteristics of these properties affect spatiotemporal pattern formation. Here we study mixed networks, composed of neurons having type I and/or type II phase response curves, with varying distributions of local and random connections and show that not only average network properties, but also the connectivity distribution statistics, significantly affect network synchrony. Namely, we study networks with fixed networkwide properties, but vary the number of random connections that nodes project. We show that varying node excitability (type I vs type II) influences network synchrony most dramatically for systems with long-tailed distributions of the number of random connections per node. This indicates that a cluster of even a few highly rewired cells with a high propensity for synchronization can alter the degree of synchrony in the network as a whole. We show this effect generally on a network of coupled Kuramoto oscillators and investigate the impact of this effect more thoroughly in pulse-coupled networks of biophysical neurons.

  12. Interplay between excitability type and distributions of neuronal connectivity determines neuronal network synchronization

    NASA Astrophysics Data System (ADS)

    Mofakham, Sima; Fink, Christian G.; Booth, Victoria; Zochowski, Michal R.

    2016-10-01

    While the interplay between neuronal excitability properties and global properties of network topology is known to affect network propensity for synchronization, it is not clear how detailed characteristics of these properties affect spatiotemporal pattern formation. Here we study mixed networks, composed of neurons having type I and/or type II phase response curves, with varying distributions of local and random connections and show that not only average network properties, but also the connectivity distribution statistics, significantly affect network synchrony. Namely, we study networks with fixed networkwide properties, but vary the number of random connections that nodes project. We show that varying node excitability (type I vs type II) influences network synchrony most dramatically for systems with long-tailed distributions of the number of random connections per node. This indicates that a cluster of even a few highly rewired cells with a high propensity for synchronization can alter the degree of synchrony in the network as a whole. We show this effect generally on a network of coupled Kuramoto oscillators and investigate the impact of this effect more thoroughly in pulse-coupled networks of biophysical neurons.
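
    A minimal sketch of the Kuramoto-oscillator version of this setup is given below: oscillators on a ring with local coupling receive a long-tailed number of random shortcuts per node, and synchrony is quantified by the Kuramoto order parameter. The shortcut distribution, coupling strength, and frequency spread are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k_local, coupling, dt, steps = 200, 2, 0.6, 0.05, 4000

# long-tailed number of random shortcuts per node (capped so choice() stays valid)
shortcuts = np.minimum(rng.pareto(1.5, n).astype(int), 20)

adj = np.zeros((n, n), dtype=bool)
for i in range(n):
    for d in range(1, k_local + 1):                       # local ring links
        adj[i, (i + d) % n] = adj[i, (i - d) % n] = True
    if shortcuts[i]:
        targets = rng.choice(n, size=int(shortcuts[i]), replace=False)
        adj[i, targets] = adj[targets, i] = True          # random long-range links
np.fill_diagonal(adj, False)

omega = rng.normal(0.0, 0.2, n)                           # intrinsic frequencies
theta = rng.uniform(0.0, 2 * np.pi, n)
deg = adj.sum(axis=1)
for _ in range(steps):
    diff = theta[None, :] - theta[:, None]                # theta_j - theta_i
    theta = theta + dt * (omega + coupling * (adj * np.sin(diff)).sum(axis=1) / deg)

r = abs(np.mean(np.exp(1j * theta)))                      # Kuramoto order parameter
print(f"max shortcuts at a single node: {shortcuts.max()}, order parameter r = {r:.3f}")
```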

  13. Nicotinic modulation of neuronal networks: from receptors to cognition.

    PubMed

    Mansvelder, Huibert D; van Aerde, Karlijn I; Couey, Jonathan J; Brussaard, Arjen B

    2006-03-01

    Nicotine affects many aspects of human cognition, including attention and memory. Activation of nicotinic acetylcholine receptors (nAChRs) in neuronal networks modulates activity and information processing during cognitive tasks, which can be observed in electroencephalograms (EEGs) and functional magnetic resonance imaging studies. In this review, we will address aspects of nAChR functioning as well as synaptic and cellular modulation important for nicotinic impact on neuronal networks that ultimately underlie its effects on cognition. Although we will focus on general mechanisms, an emphasis will be put on attention behavior and nicotinic modulation of prefrontal cortex. In addition, we will discuss how nicotinic effects at the neuronal level could be related to its effects on the cognitive level through the study of electrical oscillations as observed in EEGs and brain slices. Very little is known about mechanisms of how nAChR activation leads to a modification of electrical oscillation frequencies in EEGs. The results of studies using pharmacological interventions and transgenic animals implicate some nAChR types in aspects of cognition, but neuronal mechanisms are only poorly understood. We are only beginning to understand how nAChR distribution in neuronal networks impacts network functioning. Unveiling receptor and neuronal mechanisms important for nicotinic modulation of cognition will be instrumental for treatments of human disorders in which cholinergic signaling has been implicated, such as schizophrenia, attention deficit/hyperactivity disorder, and addiction.

  14. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
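
    The sampling idea can be illustrated with a much simpler, reversible sampler than the non-reversible construction used in the paper: stochastic binary units with sigmoidal firing probabilities performing Gibbs sampling of a small Boltzmann distribution, whose sampled state frequencies can be checked against the exact probabilities. Couplings, biases, and the number of sweeps are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
w = rng.normal(0, 0.8, (n, n))
w = (w + w.T) / 2.0                          # symmetric couplings
np.fill_diagonal(w, 0.0)
b = rng.normal(0, 0.5, n)                    # biases

def energy(z):
    return -0.5 * z @ w @ z - b @ z

sweeps = 20000
z = rng.integers(0, 2, n).astype(float)
counts = {}
for _ in range(sweeps):
    for i in range(n):                       # Gibbs update of unit i
        p_on = 1.0 / (1.0 + np.exp(-(w[i] @ z + b[i])))
        z[i] = float(rng.random() < p_on)
    key = tuple(int(v) for v in z)
    counts[key] = counts.get(key, 0) + 1

# compare sampled frequencies with the exact Boltzmann probabilities
states = [np.array([(s >> k) & 1 for k in range(n)], float) for s in range(2 ** n)]
logp = np.array([-energy(s) for s in states])
p_exact = np.exp(logp - logp.max()); p_exact /= p_exact.sum()
for idx in np.argsort(p_exact)[-3:][::-1]:   # three most probable patterns
    key = tuple(int(v) for v in states[idx])
    print(key, "exact", round(float(p_exact[idx]), 3),
          "sampled", round(counts.get(key, 0) / sweeps, 3))
```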

  15. Modeling the network dynamics of pulse-coupled neurons

    NASA Astrophysics Data System (ADS)

    Chandra, Sarthak; Hathcock, David; Crain, Kimberly; Antonsen, Thomas M.; Girvan, Michelle; Ott, Edward

    2017-03-01

    We derive a mean-field approximation for the macroscopic dynamics of large networks of pulse-coupled theta neurons in order to study the effects of different network degree distributions and degree correlations (assortativity). Using the ansatz of Ott and Antonsen [Chaos 18, 037113 (2008)], we obtain a reduced system of ordinary differential equations describing the mean-field dynamics, with significantly lower dimensionality compared with the complete set of dynamical equations for the system. We find that, for sufficiently large networks and degrees, the dynamical behavior of the reduced system agrees well with that of the full network. This dimensional reduction allows for an efficient characterization of system phase transitions and attractors. For networks with tightly peaked degree distributions, the macroscopic behavior closely resembles that of fully connected networks previously studied by others. In contrast, networks with highly skewed degree distributions exhibit different macroscopic dynamics due to the emergence of degree dependent behavior of different oscillators. For nonassortative networks (i.e., networks without degree correlations), we observe the presence of a synchronously firing phase that can be suppressed by the presence of either assortativity or disassortativity in the network. We show that the results derived here can be used to analyze the effects of network topology on macroscopic behavior in neuronal networks in a computationally efficient fashion.
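
    As a sketch of the kind of system the reduction describes, the code below directly integrates a globally pulse-coupled theta-neuron network with Lorentzian-distributed drive and tracks the complex Kuramoto order parameter, which is the macroscopic variable the reduced (Ott-Antonsen) equations evolve. The pulse shape, coupling strength, and parameter values are illustrative assumptions rather than the paper's exact setup, and the reduced equations themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, dt, steps = 2000, 2.0, 0.001, 20000
eta = np.clip(0.2 + 0.1 * rng.standard_cauchy(n), -10.0, 10.0)  # Lorentzian drive (clipped tails)
theta = rng.uniform(-np.pi, np.pi, n)

for step in range(steps):
    pulse = np.mean((1.0 - np.cos(theta)) ** 2)     # smooth synaptic pulse, maximal near firing (theta = pi)
    dtheta = (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * (eta + k * pulse)
    theta = np.mod(theta + dt * dtheta + np.pi, 2.0 * np.pi) - np.pi

z = np.mean(np.exp(1j * theta))                     # complex Kuramoto order parameter
print(f"|z| = {abs(z):.3f} (degree of macroscopic synchrony at the end of the run)")
```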

  16. PAN hollow fiber membranes elicit functional hippocampal neuronal network.

    PubMed

    Morelli, Sabrina; Piscioneri, Antonella; Salerno, Simona; Tasselli, Franco; Di Vito, Anna; Giusi, Giuseppina; Canonaco, Marcello; Drioli, Enrico; De Bartolo, Loredana

    2012-01-01

    This study focuses on the development of an advanced in vitro biohybrid culture model system based on the use of hollow fibre membranes (HFMs) and hippocampal neurons in order to promote the formation of a high density neuronal network. Polyacrylonitrile (PAN) and modified polyetheretherketone (PEEK-WC) membranes were prepared in hollow fibre configuration. The morphological and metabolic behaviour of hippocampal neurons cultured on PAN HF membranes were compared with those cultured on PEEK-WC HF. The differences of cell behaviour between HFMs were evidenced by the morphometric analysis in terms of axon length and also by the investigation of metabolic activity in terms of neurotrophin secretion. These findings suggested that PAN HFMs induced the in vitro reconstruction of very highly functional and complex neuronal networks. Thus, these biomaterials could potentially be used for the in vitro realization of a functional hippocampal tissue analogue for the study of neurobiological functions and/or neurodegenerative diseases.

  17. Thermodynamics and signatures of criticality in a network of neurons.

    PubMed

    Tkačik, Gašper; Mora, Thierry; Marre, Olivier; Amodei, Dario; Palmer, Stephanie E; Berry, Michael J; Bialek, William

    2015-09-15

    The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but, with more spikes, the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N = 160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as N increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. We suggest further tests of criticality, and give a brief discussion of its functional significance.
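
    The probability-versus-numerosity trade-off described in the opening sentences can be made concrete with a toy calculation: for a small population of independent, sparsely firing model neurons, assign each binary pattern an energy E = -ln P(pattern) and count how many patterns fall in each energy bin (a microcanonical entropy). The firing probabilities below are assumptions for illustration; the paper instead builds the pattern distribution from retinal data and model-based analyses.

```python
import numpy as np

n = 12
p_fire = np.full(n, 0.1)                          # sparse, independent firing (assumed)
patterns = ((np.arange(2 ** n)[:, None] >> np.arange(n)) & 1).astype(float)
log_p = patterns @ np.log(p_fire) + (1 - patterns) @ np.log(1 - p_fire)
energy = -log_p                                   # E(pattern) = -ln P(pattern)

bins = np.linspace(energy.min(), energy.max(), 15)
idx = np.digitize(energy, bins)
for k in np.unique(idx):
    members = energy[idx == k]
    # S(E) ~ ln(number of patterns sharing this energy bin)
    print(f"E ~ {members.mean():6.2f}   S = ln(count) = {np.log(len(members)):5.2f}")
```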

  18. Anticipated synchronization in neuronal network motifs

    NASA Astrophysics Data System (ADS)

    Matias, F. S.; Gollo, L. L.; Carelli, P. V.; Copelli, M.; Mirasso, C. R.

    2013-01-01

    Two identical dynamical systems coupled unidirectionally (in a so called master-slave configuration) exhibit anticipated synchronization (AS) if the one which receives the coupling (the slave) also receives a negative delayed self-feedback. In oscillatory neuronal systems AS is characterized by a phase-locking with negative time delay τ between the spikes of the master and of the slave (slave fires before the master), while in the usual delayed synchronization (DS) regime τ is positive (slave fires after the master). A 3-neuron motif in which the slave self-feedback is replaced by a feedback loop mediated by an interneuron can exhibits both AS and DS regimes. Here we show that AS is robust in the presence of noise in a 3 Hodgkin-Huxley type neuronal motif. We also show that AS is stable for large values of τ in a chain of connected slaves-interneurons.

  19. Extracting functionally feedforward networks from a population of spiking neurons

    PubMed Central

    Vincent, Kathleen; Tauskela, Joseph S.; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABAA receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV/DNQX, consistent
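
    A minimal sketch of a Schur-based feedforward index is shown below: the connectivity matrix is brought to upper-triangular Schur form, and the norm of its strictly upper-triangular (feedforward) part is compared with the norm of its diagonal (eigenvalue, recurrent) part. The specific index and the test matrices are illustrative assumptions, not the paper's exact FFC measure.

```python
import numpy as np
from scipy.linalg import schur

def feedforward_strength(w):
    t, _ = schur(w, output="complex")        # w = Q T Q*, T upper triangular
    ff = np.linalg.norm(np.triu(t, k=1))     # strictly upper-triangular (feedforward) part
    rec = np.linalg.norm(np.diag(t))         # eigenvalue (recurrent) part
    return float(ff / (ff + rec))

rng = np.random.default_rng(6)
n = 60
chain = np.diag(np.ones(n - 1), k=-1)                 # a pure feedforward chain
rand = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))      # dense random recurrent matrix
print("feedforward chain:", round(feedforward_strength(chain), 3))   # -> 1.0
print("random recurrent :", round(feedforward_strength(rand), 3))    # intermediate
```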

  20. Self-organization and neuronal avalanches in networks of dissociated cortical neurons.

    PubMed

    Pasquale, V; Massobrio, P; Bologna, L L; Chiappalone, M; Martinoia, S

    2008-06-02

    Dissociated cortical neurons from rat embryos cultured onto micro-electrode arrays exhibit characteristic patterns of electrophysiological activity, ranging from isolated spikes in the first days of development to highly synchronized bursts after 3-4 weeks in vitro. In this work we analyzed these features by considering the approach proposed by the self-organized criticality theory: we found that networks of dissociated cortical neurons also generate spontaneous events of spreading activity, previously observed in cortical slices, in the form of neuronal avalanches. Choosing an appropriate time scale of observation to detect such neuronal avalanches, we studied the dynamics by considering the spontaneous activity during acute recordings in mature cultures and following the development of the network. We observed different behaviors, i.e. sub-critical, critical or super-critical distributions of avalanche sizes and durations, depending on both the age and the development of cultures. In order to clarify this variability, neuronal avalanches were correlated with other statistical parameters describing the global activity of the network. Criticality was found in correspondence to medium synchronization among bursts and high ratio between bursting and spiking activity. Then, the action of specific drugs affecting global bursting dynamics (i.e. acetylcholine and bicuculline) was investigated to confirm the correlation between criticality and regulated balance between synchronization and variability in the bursting activity. Finally, a computational model of neuronal network was developed in order to interpret the experimental results and understand which parameters (e.g. connectivity, excitability) influence the distribution of avalanches. In summary, cortical neurons preserve their capability to self-organize in an effective network even when dissociated and cultured in vitro. The distribution of avalanche features seems to be critical in those cultures displaying

  1. Short-term memory in networks of dissociated cortical neurons.

    PubMed

    Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J

    2013-01-30

    Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.

  2. Locking induced by distance-dependent delay in neuronal networks.

    PubMed

    Zhu, Jinjie; Liu, Xianbin

    2016-11-01

    In the present paper, the locking phenomenon induced by distance-dependent delay in ring-structured neuronal networks is investigated, wherein each neuron is modeled as a FitzHugh-Nagumo neuron. By increasing the element time delay, different spatiotemporal patterns are observed. By calculating the interspike interval and its ratio to the delay between nearest neurons, it is found that these patterns are in fact lockings between the spiking period and the distance-dependent delay of the connected neurons. The lockings can also be revealed by the mean time lag of the neurons and in different connection topologies. Furthermore, the influences of the network size and the coupling strength are investigated, wherein the former seems to play a negligible role in these locking patterns; in contrast, too small coupling strengths will blur the boundaries of different patterns and too large ones may destroy the high-ratio locking patterns. Finally, one may predict the locking order, which determines the order in which the patterns emerge in the networks.

  3. Locking induced by distance-dependent delay in neuronal networks

    NASA Astrophysics Data System (ADS)

    Zhu, Jinjie; Liu, Xianbin

    2016-11-01

    In the present paper, the locking phenomenon induced by distance-dependent delay in ring-structured neuronal networks is investigated, wherein each neuron is modeled as a FitzHugh-Nagumo neuron. By increasing the element time delay, different spatiotemporal patterns are observed. By calculating the interspike interval and its ratio to the delay between nearest neurons, it is found that these patterns are in fact lockings between the spiking period and the distance-dependent delay of the connected neurons. The lockings can also be revealed by the mean time lag of the neurons and in different connection topologies. Furthermore, the influences of the network size and the coupling strength are investigated, wherein the former seems to play a negligible role in these locking patterns; in contrast, too small coupling strengths will blur the boundaries of different patterns and too large ones may destroy the high-ratio locking patterns. Finally, one may predict the locking order, which determines the order in which the patterns emerge in the networks.

  4. Autonomous Optimization of Targeted Stimulation of Neuronal Networks.

    PubMed

    Kumar, Sreedhar S; Wülfing, Jan; Okujeni, Samora; Boedecker, Joschka; Riedmiller, Martin; Egert, Ulrich

    2016-08-01

    Driven by clinical needs and progress in neurotechnology, targeted interaction with neuronal networks is of increasing importance. Yet, the dynamics of interaction between intrinsic ongoing activity in neuronal networks and their response to stimulation is unknown. Nonetheless, electrical stimulation of the brain is increasingly explored as a therapeutic strategy and as a means to artificially inject information into neural circuits. Strategies using regular or event-triggered fixed stimuli discount the influence of ongoing neuronal activity on the stimulation outcome and are therefore not optimal to induce specific responses reliably. Yet, without suitable mechanistic models, it is hardly possible to optimize such interactions, in particular when desired response features are network-dependent and are initially unknown. In this proof-of-principle study, we present an experimental paradigm using reinforcement-learning (RL) to optimize stimulus settings autonomously and evaluate the learned control strategy using phenomenological models. We asked how to (1) capture the interaction of ongoing network activity, electrical stimulation and evoked responses in a quantifiable 'state' to formulate a well-posed control problem, (2) find the optimal state for stimulation, and (3) evaluate the quality of the solution found. Electrical stimulation of generic neuronal networks grown from rat cortical tissue in vitro evoked bursts of action potentials (responses). We show that the dynamic interplay of their magnitudes and the probability to be intercepted by spontaneous events defines a trade-off scenario with a network-specific unique optimal latency maximizing stimulus efficacy. An RL controller was set to find this optimum autonomously. Across networks, stimulation efficacy increased in 90% of the sessions after learning and learned latencies strongly agreed with those predicted from open-loop experiments. Our results show that autonomous techniques can exploit quantitative

  5. Autonomous Optimization of Targeted Stimulation of Neuronal Networks

    PubMed Central

    Kumar, Sreedhar S.; Wülfing, Jan; Okujeni, Samora; Boedecker, Joschka; Riedmiller, Martin

    2016-01-01

    Driven by clinical needs and progress in neurotechnology, targeted interaction with neuronal networks is of increasing importance. Yet, the dynamics of interaction between intrinsic ongoing activity in neuronal networks and their response to stimulation is unknown. Nonetheless, electrical stimulation of the brain is increasingly explored as a therapeutic strategy and as a means to artificially inject information into neural circuits. Strategies using regular or event-triggered fixed stimuli discount the influence of ongoing neuronal activity on the stimulation outcome and are therefore not optimal to induce specific responses reliably. Yet, without suitable mechanistic models, it is hardly possible to optimize such interactions, in particular when desired response features are network-dependent and are initially unknown. In this proof-of-principle study, we present an experimental paradigm using reinforcement-learning (RL) to optimize stimulus settings autonomously and evaluate the learned control strategy using phenomenological models. We asked how to (1) capture the interaction of ongoing network activity, electrical stimulation and evoked responses in a quantifiable ‘state’ to formulate a well-posed control problem, (2) find the optimal state for stimulation, and (3) evaluate the quality of the solution found. Electrical stimulation of generic neuronal networks grown from rat cortical tissue in vitro evoked bursts of action potentials (responses). We show that the dynamic interplay of their magnitudes and the probability to be intercepted by spontaneous events defines a trade-off scenario with a network-specific unique optimal latency maximizing stimulus efficacy. An RL controller was set to find this optimum autonomously. Across networks, stimulation efficacy increased in 90% of the sessions after learning and learned latencies strongly agreed with those predicted from open-loop experiments. Our results show that autonomous techniques can exploit
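
    The closed-loop idea can be sketched with a simple bandit-style learner: it picks a stimulation latency (time since the last spontaneous burst), observes the evoked response size, and updates its value estimates, converging on the latency that balances post-burst recovery against the risk of being pre-empted by the next spontaneous burst. The toy response model and all parameters below are hypothetical stand-ins, not the study's recorded dynamics or its reinforcement-learning controller.

```python
import numpy as np

rng = np.random.default_rng(7)
latencies = np.arange(0.2, 3.1, 0.2)          # candidate latencies after a burst (s)

def evoked_response(latency):
    """Hypothetical network: responses recover with time since the last burst,
    but late stimuli risk being pre-empted by the next spontaneous burst."""
    recovery = 1.0 - np.exp(-latency / 0.6)
    p_preempted = 1.0 - np.exp(-latency / 2.0)
    size = recovery * (1.0 if rng.random() > p_preempted else 0.2)
    return size + rng.normal(0.0, 0.05)

q = np.zeros(len(latencies))                  # value estimate per latency
counts = np.zeros(len(latencies))
eps = 0.1
for trial in range(3000):
    if rng.random() < eps:                    # explore
        a = int(rng.integers(len(latencies)))
    else:                                     # exploit
        a = int(np.argmax(q))
    reward = evoked_response(latencies[a])
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]       # incremental mean update

best = int(np.argmax(q))
print(f"learned latency ~ {latencies[best]:.1f} s, estimated response {q[best]:.2f}")
```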

  6. Energy-efficient neural information processing in individual neurons and neuronal networks.

    PubMed

    Yu, Lianchun; Yu, Yuguo

    2017-11-01

    Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. The factors range from ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.

  7. FPGA implementation of motifs-based neuronal network and synchronization analysis

    NASA Astrophysics Data System (ADS)

    Deng, Bin; Zhu, Zechen; Yang, Shuangming; Wei, Xile; Wang, Jiang; Yu, Haitao

    2016-06-01

    Motifs in complex networks play a crucial role in determining brain functions. In this paper, 13 kinds of motifs are implemented on a Field Programmable Gate Array (FPGA) to investigate the relationships between network properties and motif properties. We use a discretization method and a pipelined architecture to construct various motifs with the Hindmarsh-Rose (HR) neuron as the node model. We also build a small-world network based on these motifs and conduct synchronization analysis of the motifs as well as the constructed network. We find that the synchronization properties of a motif determine those of the motif-based small-world network, which demonstrates the effectiveness of our proposed hardware simulation platform. By imitating vital nuclei in the brain to generate normal discharges, our proposed FPGA-based artificial neuronal networks have the potential to replace injured nuclei and restore brain function in the treatment of Parkinson's disease and epilepsy.
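
    As a point of reference, a fixed-step (Euler) discretization of the Hindmarsh-Rose equations of the kind that typically precedes an FPGA pipeline mapping might look like the sketch below. The parameter values are the standard textbook choices, and the code is a software illustration only, not the paper's hardware design.

```python
# Forward-Euler discretization of the Hindmarsh-Rose neuron (standard parameters).
# In a pipelined FPGA design, each update below would correspond to arithmetic
# stages evaluated once per clock cycle, typically in fixed-point arithmetic.
def hindmarsh_rose_step(x, y, z, I=3.0, dt=0.01,
                        a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x_rest=-1.6):
    dx = y - a * x**3 + b * x**2 - z + I   # membrane-potential-like variable
    dy = c - d * x**2 - y                  # fast recovery variable
    dz = r * (s * (x - x_rest) - z)        # slow adaptation variable (bursting)
    return x + dt * dx, y + dt * dy, z + dt * dz

state = (-1.6, 0.0, 0.0)
trace = []
for _ in range(200000):
    state = hindmarsh_rose_step(*state)
    trace.append(state[0])
print("membrane variable range:", round(min(trace), 2), "to", round(max(trace), 2))
```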

  8. Generative modelling of regulated dynamical behavior in cultured neuronal networks

    NASA Astrophysics Data System (ADS)

    Volman, Vladislav; Baruchi, Itay; Persi, Erez; Ben-Jacob, Eshel

    2004-04-01

    The spontaneous activity of cultured in vitro neuronal networks exhibits rich dynamical behavior. Despite the artificial manner of their construction, the networks' activity includes features which seemingly reflect the action of underlying regulating mechanisms rather than arbitrary causes and effects. Here, we study the cultured networks' dynamical behavior using a generative modelling approach. The idea is to include the minimal generic mechanisms required to capture the non-autonomous features of the behavior, which can be reproduced by computer modelling, and then to identify the additional features of biotic regulation in the observed behavior which are beyond the scope of the model. Our model neurons are composed of a soma described by the two Morris-Lecar dynamical variables (voltage and fraction of open potassium channels), with dynamical synapses described by the Tsodyks-Markram three-variable dynamics. The model neuron satisfies our self-consistency test: when fed with data recorded from a real cultured network, it exhibits dynamical behavior very close to that of the network's “representative” neuron. Specifically, it shows similar statistical scaling properties (approximated by a similar symmetric Lévy distribution with finite mean). A network of such M-L elements spontaneously generates (when weak “structured noise” is added) synchronized bursting events (SBEs) similar to the observed ones. Both the neuronal statistical scaling properties within the bursts and the properties of the SBE time series show generative (a newly discussed concept) agreement with the recorded data. Yet, the model network exhibits a different structure of temporal variations and does not recover the observed hierarchical temporal ordering, unless fed with recorded special neurons (with much higher rates of activity), thus indicating the existence of self-regulation mechanisms. It also implies that the spontaneous activity is not simply noise-induced. Instead, the
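
    For readers unfamiliar with the synapse model mentioned above, the sketch below simulates the standard three-variable Tsodyks-Markram dynamics for a fixed presynaptic spike train; x, y and z track the recovered, active and inactive fractions of synaptic resources. The parameter values and spike times are arbitrary illustrations, not those fitted to the cultured networks.

```python
# Minimal Tsodyks-Markram synapse: each presynaptic spike releases a fraction
# u_se of the recovered resources x into the active pool y, which inactivates
# with time constant tau_in and recovers with time constant tau_rec (all in ms).
def simulate_tm_synapse(spike_times_ms, u_se=0.5, tau_in=3.0, tau_rec=800.0,
                        dt=0.1, t_max=2000.0):
    x, y, z = 1.0, 0.0, 0.0
    spike_steps = {round(t / dt) for t in spike_times_ms}
    active_trace = []
    for step in range(int(t_max / dt)):
        if step in spike_steps:
            released = u_se * x          # resources released by this spike
            x -= released
            y += released
        dy = -y / tau_in                 # active resources inactivate quickly
        dz = y / tau_in - z / tau_rec    # inactive resources recover slowly
        y += dt * dy
        z += dt * dz
        x = 1.0 - y - z                  # conservation of total resources
        active_trace.append(y)
    return active_trace

trace = simulate_tm_synapse([100, 150, 200, 250, 300])   # a short 20 Hz burst
print("peak synaptic activation:", round(max(trace), 3))
```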

  9. Non-Boltzmann Dynamics in Networks of Neurons

    NASA Astrophysics Data System (ADS)

    Crair, Michael Charles

    We present a theory for a network of neurons that communicate via action potentials. Our model balances the need for an accurate, detailed picture of the functioning of neurons with the desire for a simple and tractable description. We view the problem at the mesoscopic level, with an abstract neural state capturing what we assume to be the relevant physical properties of all the ionic and molecular interactions that make up an active cell. We include in our description of the neural state a stochastic component which mimics the intracellular and extracellular commotion in a network of neurons. Because our model is based on a realistic spiking neural network, we can make firm predictions about the behavior of real biological networks of neurons. For instance, we find that attractor dynamics, a general property exhibited by standard models of neural networks, is preserved in our model, but the symmetry which exists in standard models between the 'on' and 'off' neural states is broken in our description by the spike-driven noisy dynamics. These predictions are generally corroborated by the limited experimental evidence available, and we make suggestions for further experiments that would clarify the validity of our description. The spiking properties of neurons also lead us to a model for learning which is based on modifying the temporal form of neural interactions instead of the usual connection strength. This suggests that a network of neurons can reinforce associative behavior by changing the time course of the neural interactions expressed in the synaptic potentials instead of changing the size of the synaptic interactions.

  10. Rich club neurons dominate Information Transfer in local cortical networks

    NASA Astrophysics Data System (ADS)

    Nigam, Sunny; Shimono, Masanori; Sporns, Olaf; Beggs, John

    2015-03-01

    The performance of complex networks depends on how they route their traffic. It is unknown how information is transferred in local cortical networks of hundreds of closely spaced neurons. To address this, it is necessary to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz, simultaneously from up to 700 neurons in slice cultures of mouse somatosensory cortex for 1 hr at a time. We used transfer entropy to quantify directed information transfer (IT) between pairs of neurons. We found an approximately lognormal distribution of firing rates, as reported in vivo. Pairwise information transfer strengths were also nearly lognormally distributed, similar to synaptic strengths. 20% of the neurons accounted for 70% of the total IT coming into, and going out of, the network and were defined as rich nodes. These rich nodes were more densely and strongly connected to each other than expected by chance, forming a rich club. This highly uneven distribution of IT has implications for the efficiency and robustness of local cortical networks, and gives clues to the plastic processes that shape them.
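
    The pairwise measure used above can be illustrated with a minimal estimator for binary (binned) spike trains and a history length of one bin. This is a generic transfer entropy calculation for exposition; the study's actual estimator additionally handled synaptic delays and much longer recordings.

```python
from collections import Counter
from math import log2

def transfer_entropy(src, dst):
    """Transfer entropy (bits) from binary train `src` to `dst`, history length 1."""
    n = len(dst) - 1
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))   # (y_next, y_past, x_past)
    pairs_yy = Counter(zip(dst[1:], dst[:-1]))             # (y_next, y_past)
    pairs_yx = Counter(zip(dst[:-1], src[:-1]))             # (y_past, x_past)
    past_y = Counter(dst[:-1])
    te = 0.0
    for (y1, y0, x0), count in triples.items():
        p_joint = count / n
        p_full = count / pairs_yx[(y0, x0)]          # p(y_next | y_past, x_past)
        p_reduced = pairs_yy[(y1, y0)] / past_y[y0]  # p(y_next | y_past)
        te += p_joint * log2(p_full / p_reduced)
    return te

# toy check: dst copies src with a one-bin delay, so TE(src -> dst) should be large
src = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0] * 50
dst = [0] + src[:-1]
print(round(transfer_entropy(src, dst), 3), round(transfer_entropy(dst, src), 3))
```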

  11. Neocortical networks entrain neuronal circuits in cerebellar cortex

    PubMed Central

    Roš, Hana; Sachdev, Robert N. S.; Yu, Yuguo; Šestan, Nenad; McCormick, David A.

    2011-01-01

    Activity in neocortex is often characterized by synchronized oscillations of neurons and networks, resulting in the generation of a local field potential and electroencephalogram. Do the neuronal networks of the cerebellum also generate synchronized oscillations and are they under the influence of those in the neocortex? Here we show that in the absence of any overt external stimulus, the cerebellar cortex generates a slow oscillation that is correlated with that of the neocortex. Disruption of the neocortical slow oscillation abolishes the cerebellar slow oscillation, whereas blocking cerebellar activity has no overt effect on the neocortex. We provide evidence that the cerebellar slow oscillation results in part from the activation of granule, Golgi, and Purkinje neurons. In particular, we show that granule and Golgi cells discharge trains of single spikes, and Purkinje cells generate complex spikes, during the Up state of the slow oscillation. Purkinje cell simple spiking is weakly related to the cerebellar and neocortical slow oscillation in a minority of cells. Our results indicate that the cerebellum generates rhythmic network activity that can be recorded as an LFP in the anesthetized animal, which is driven by synchronized oscillations of the neocortex. Furthermore, we show that correlations between neocortical and cerebellar LFPs persist in the awake animal, indicating that neocortical circuits modulate cerebellar neurons in a similar fashion in natural behavioral states. Thus, the projection neurons of the neocortex collectively exert a driving and modulatory influence on cerebellar network activity. PMID:19692605

  12. Detection of 5-hydroxytryptamine (5-HT) in vitro using a hippocampal neuronal network-based biosensor with extracellular potential analysis of neurons.

    PubMed

    Hu, Liang; Wang, Qin; Qin, Zhen; Su, Kaiqi; Huang, Liquan; Hu, Ning; Wang, Ping

    2015-04-15

    5-hydroxytryptamine (5-HT) is an important neurotransmitter in regulating emotions and related behaviors in mammals. To detect and monitor 5-HT, effective and convenient methods are needed for the investigation of neuronal networks. In this study, hippocampal neuronal networks (HNNs) endogenously expressing 5-HT receptors were employed as sensing elements to build an in vitro neuronal network-based biosensor. The electrophysiological characteristics were analyzed at both the neuron and network levels. Firing rates and amplitudes were derived from the recorded signals to determine the biosensor response characteristics. The experimental results demonstrate a dose-dependent inhibitory effect of 5-HT on hippocampal neuron activities, indicating the effectiveness of this hybrid biosensor in detecting 5-HT with a response range from 0.01 μmol/L to 10 μmol/L. In addition, cross-correlation analysis of HNN activities suggests that 5-HT can weaken HNN connectivity reversibly, providing greater specificity for this biosensor in detecting 5-HT. Moreover, 5-HT-induced spatiotemporal firing pattern alterations could be monitored at the neuron and network levels simultaneously by this hybrid biosensor in a convenient and direct way. With these merits, this neuronal network-based biosensor promises to be a valuable and practical platform for the study of neurotransmitters in vitro.

  13. Brain extracellular matrix retains connectivity in neuronal networks.

    PubMed

    Bikbaev, Arthur; Frischknecht, Renato; Heine, Martin

    2015-09-29

    The formation and maintenance of connectivity are critically important for the processing and storage of information in neuronal networks. The brain extracellular matrix (ECM) appears during postnatal development and surrounds most neurons in the adult mammalian brain. Importantly, the removal of the ECM was shown to improve plasticity and post-traumatic recovery in the CNS, but little is known about the mechanisms. Here, we investigated the role of the ECM in the regulation of the network activity in dissociated hippocampal cultures grown on microelectrode arrays (MEAs). We found that enzymatic removal of the ECM in mature cultures led to transient enhancement of neuronal activity, but prevented disinhibition-induced hyperexcitability that was evident in age-matched control cultures with intact ECM. Furthermore, the ECM degradation followed by disinhibition strongly affected the network interaction so that it strongly resembled the juvenile pattern seen in naïve developing cultures. Taken together, our results demonstrate that the ECM plays an important role in retention of existing connectivity in mature neuronal networks that can be exerted through synaptic confinement of glutamate. On the other hand, removal of the ECM can play a permissive role in modification of connectivity and adaptive exploration of novel network architecture.

  14. Cultured Neuronal Networks Express Complex Patterns of Activity and Morphological Memory

    NASA Astrophysics Data System (ADS)

    Raichman, Nadav; Rubinsky, Liel; Shein, Mark; Baruchi, Itay; Volman, Vladislav; Ben-Jacob, Eshel

    The following sections are included: * Cultured Neuronal Networks * Recording the Network Activity * Network Engineering * The Formation of Synchronized Bursting Events * The Characterization of the SBEs * Highly-Active Neurons * Function-Form Relations in Cultured Networks * Analyzing the SBEs Motifs * Network Repertoire * Network under Hypothermia * Summary * Acknowledgments * References

  15. Long term behavior of lithographically prepared in vitro neuronal networks.

    PubMed

    Segev, Ronen; Benveniste, Morris; Hulata, Eyal; Cohen, Netta; Palevski, Alexander; Kapon, Eli; Shapira, Yoash; Ben-Jacob, Eshel

    2002-03-18

    We measured the long term spontaneous electrical activity of neuronal networks with different sizes, grown on lithographically prepared substrates and recorded with multi-electrode-array technology. The time sequences of synchronized bursting events were used to characterize network dynamics. All networks exhibit scale-invariant Lévy distributions and long-range correlations. These observations suggest that different-size networks self-organize to adjust their activities over many time scales. As predictions of current models differ from our observations, this calls for revised models.

  16. Gap Junctions in Developing Thalamic and Neocortical Neuronal Networks

    PubMed Central

    Niculescu, Dragos; Lohmann, Christian

    2014-01-01

    The presence of direct, cytoplasmic communication between neurons in the brain of vertebrates was demonstrated long ago. These gap junctions have been characterized in many brain areas in terms of subunit composition, biophysical properties, neuronal connectivity patterns, and developmental regulation. Although interesting findings emerged, showing that different subunits are specifically regulated during development, or that excitatory and inhibitory neuronal networks exhibit various electrical connectivity patterns, gap junctions did not receive much further interest. Originally, it was believed that gap junctions represent simple passageways for electrical and biochemical coordination early in development. Today, we know that gap junction connectivity is tightly regulated, following independent developmental patterns for excitatory and inhibitory networks. Electrical connections are important for many specific functions of neurons and are, for example, required for the development of neuronal stimulus tuning in the visual system. Here, we integrate the available data on neuronal connectivity and gap junction properties, as well as the most recent findings concerning the functional implications of electrical connections in the developing thalamus and neocortex. PMID:23843439

  17. Gap junctions in developing thalamic and neocortical neuronal networks.

    PubMed

    Niculescu, Dragos; Lohmann, Christian

    2014-12-01

    The presence of direct, cytoplasmic communication between neurons in the brain of vertebrates was demonstrated long ago. These gap junctions have been characterized in many brain areas in terms of subunit composition, biophysical properties, neuronal connectivity patterns, and developmental regulation. Although interesting findings emerged, showing that different subunits are specifically regulated during development, or that excitatory and inhibitory neuronal networks exhibit various electrical connectivity patterns, gap junctions did not receive much further interest. Originally, it was believed that gap junctions represent simple passageways for electrical and biochemical coordination early in development. Today, we know that gap junction connectivity is tightly regulated, following independent developmental patterns for excitatory and inhibitory networks. Electrical connections are important for many specific functions of neurons and are, for example, required for the development of neuronal stimulus tuning in the visual system. Here, we integrate the available data on neuronal connectivity and gap junction properties, as well as the most recent findings concerning the functional implications of electrical connections in the developing thalamus and neocortex.

  18. Single-Cell Transcriptional Analysis Reveals Novel Neuronal Phenotypes and Interaction Networks Involved in the Central Circadian Clock

    PubMed Central

    Park, James; Zhu, Haisun; O'Sullivan, Sean; Ogunnaike, Babatunde A.; Weaver, David R.; Schwaber, James S.; Vadigepalli, Rajanikanth

    2016-01-01

    Single-cell heterogeneity confounds efforts to understand how a population of cells organizes into cellular networks that underlie tissue-level function. This complexity is prominent in the mammalian suprachiasmatic nucleus (SCN). Here, individual neurons exhibit a remarkable amount of asynchronous behavior and transcriptional heterogeneity. However, SCN neurons are able to generate precisely coordinated synaptic and molecular outputs that synchronize the body to a common circadian cycle by organizing into cellular networks. To understand this emergent cellular network property, it is important to reconcile single-neuron heterogeneity with network organization. In light of recent studies suggesting that transcriptionally heterogeneous cells organize into distinct cellular phenotypes, we characterized the transcriptional, spatial, and functional organization of 352 SCN neurons from mice experiencing phase-shifts in their circadian cycle. Using the community structure detection method and multivariate analytical techniques, we identified previously undescribed neuronal phenotypes that are likely to participate in regulatory networks with known SCN cell types. Based on the newly discovered neuronal phenotypes, we developed a data-driven neuronal network structure in which multiple cell types interact through known synaptic and paracrine signaling mechanisms. These results provide a basis from which to interpret the functional variability of SCN neurons and describe methodologies toward understanding how a population of heterogeneous single cells organizes into cellular networks that underlie tissue-level function. PMID:27826225

  19. Waves and Oscillations in Networks of Coupled Neurons

    NASA Astrophysics Data System (ADS)

    Ermentrout, B.

    Neural systems are characterized by the interactions of thousands of individual cells called neurons. Individual neurons vary in their properties with some of them spontaneously active and others active only when given a sufficient perturbation. In this note, I will describe work that has been done on the mathematical analysis of waves and synchronous oscillations in spatially distributed networks of neurons. These classes of behavior are observed both in vivo (that is, in the living brain) and in vitro (isolated networks, such as slices of brain tissue.) We focus on these simple behaviors rather than on the possible computations that networks of neurons can do (such as filtering sensory inputs and producing precise motor output) mainly because they are mathematically tractable. The chapter is organized as follows. First, I will introduce the kinds of equations that are of interest and from these abstract some simplified models. I will consider several different types of connectivity - from "all-to-all" to spatially organized. Typically (although not in every case), each individual neuron is represented by a scalar equation for its dynamics. These individuals can be coupled together directly or indirectly and in spatially discrete or continuous arrays.
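
    A classic example of the kind of scalar, all-to-all coupled model referred to above is the Kuramoto phase oscillator network, sketched below in its mean-field form; the parameter values are arbitrary and the example is illustrative rather than taken from the chapter.

```python
import math
import random

# All-to-all Kuramoto phase oscillators integrated with forward Euler.
N, K, dt, steps = 100, 1.5, 0.01, 5000
omega = [random.gauss(1.0, 0.1) for _ in range(N)]          # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases

def order_parameter(phases):
    rx = sum(math.cos(t) for t in phases) / len(phases)
    ry = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(rx, ry), math.atan2(ry, rx)

for _ in range(steps):
    r, psi = order_parameter(theta)   # all-to-all coupling reduces to a mean field
    theta = [t + dt * (w + K * r * math.sin(psi - t)) for t, w in zip(theta, omega)]

print("final synchrony r:", round(order_parameter(theta)[0], 3))
```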

  20. [Aphasia: a neuronal network disorder].

    PubMed

    Stockert, A; Saur, D

    2017-06-08

    Language processing requires the coordinated interaction of local and distant neural populations within distributed networks of the temporal, frontal and parietal brain regions. Poststroke aphasia is the consequence of both local and remote dysfunction within language-specific and domain-general networks. Language recovery, in turn, rests on reorganization processes within these networks. These comprise the resolution of an acute network failure (i.e., diaschisis), the subacute activation of right hemisphere homologous regions and the gradual reintegration of left hemisphere remote and perilesional areas. The application of unifocal noninvasive brain stimulation over these regions provides a means of modulating neural plasticity in order to enhance the reorganization processes underlying language recovery. The lack of knowledge as to the optimal stimulation site, the appropriate stimulation protocol and the proper timing of interventions might explain the only marginal effects of brain stimulation as an adjunct to speech and language therapy. In addition, individually different contributions of left and right hemisphere regions to recovery, due to heterogeneous lesion sites among patients, limit the possibility of identifying general principles for brain stimulation. The assumption that aphasia is not only the consequence of the focal effect of a brain lesion but also arises from remote dysfunctions within associated functional networks motivates the concept of individualized, potentially multifocal therapeutic network modulation.

  1. Carbon nanotubes: artificial nanomaterials to engineer single neurons and neuronal networks.

    PubMed

    Fabbro, Alessandra; Bosi, Susanna; Ballerini, Laura; Prato, Maurizio

    2012-08-15

    In the past decade, nanotechnology applications to the nervous system have often involved the study and use of novel nanomaterials to improve the diagnosis and therapy of neurological diseases. In the field of nanomedicine, carbon nanotubes are evaluated as promising materials for diverse therapeutic and diagnostic applications. In addition, carbon nanotubes are increasingly employed in basic neuroscience approaches, and they have been used in the design of neuronal interfaces or of scaffolds promoting neuronal growth in vitro. Ultimately, carbon nanotubes are thought to hold potential for the development of innovative neurological implants. In this framework, it is particularly relevant to document the impact of interfacing such materials with nerve cells. Carbon nanotubes were shown, when modified with biologically active compounds or functionalized to alter their charge, to affect neurite outgrowth and branching. Notably, purified carbon nanotubes used as scaffolds can promote the formation of nanotube-neuron hybrid networks, able per se to affect neuron integrative abilities, network connectivity, and synaptic plasticity. We focus this review on our work over several years directed at investigating the ability of carbon nanotube platforms to provide a new tool for nongenetic manipulations of neuronal performance and network signaling.

  2. Carbon Nanotubes: Artificial Nanomaterials to Engineer Single Neurons and Neuronal Networks

    PubMed Central

    2012-01-01

    In the past decade, nanotechnology applications to the nervous system have often involved the study and use of novel nanomaterials to improve the diagnosis and therapy of neurological diseases. In the field of nanomedicine, carbon nanotubes are evaluated as promising materials for diverse therapeutic and diagnostic applications. In addition, carbon nanotubes are increasingly employed in basic neuroscience approaches, and they have been used in the design of neuronal interfaces or of scaffolds promoting neuronal growth in vitro. Ultimately, carbon nanotubes are thought to hold potential for the development of innovative neurological implants. In this framework, it is particularly relevant to document the impact of interfacing such materials with nerve cells. Carbon nanotubes were shown, when modified with biologically active compounds or functionalized to alter their charge, to affect neurite outgrowth and branching. Notably, purified carbon nanotubes used as scaffolds can promote the formation of nanotube–neuron hybrid networks, able per se to affect neuron integrative abilities, network connectivity, and synaptic plasticity. We focus this review on our work over several years directed at investigating the ability of carbon nanotube platforms to provide a new tool for nongenetic manipulations of neuronal performance and network signaling. PMID:22896805

  3. Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.

    PubMed

    Siettos, Constantinos; Starke, Jens

    2016-09-01

    The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection between the microscopic scale (single neuron activity) and macroscopic behavior (emergent behavior of the collective dynamics), and vice versa, is key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), integrate-and-fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348. © 2016 Wiley Periodicals, Inc.
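
    As one concrete instance of the single-neuron models listed above, the FitzHugh-Nagumo system can be integrated in a few lines; the sketch below uses the common textbook parameters and forward Euler, and is meant only as an illustration of the model class, not of any specific analysis in the review.

```python
# FitzHugh-Nagumo model: a fast voltage-like variable v and a slow recovery
# variable w; with this constant drive I the system settles onto a limit cycle.
def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=100000):
    v, w = -1.0, 1.0
    vs = []
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs.append(v)
    return vs

trace = fitzhugh_nagumo()
print("range of v on the limit cycle:", round(min(trace), 2), "to", round(max(trace), 2))
```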

  4. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks

    NASA Astrophysics Data System (ADS)

    Kazantsev, V. B.; Asatryan, S. Yu.

    2011-09-01

    Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that the bistability may induce spike communication by inhibitory coupled neurons in the spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate basic spike phase locking modes.

  5. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.

    PubMed

    Kazantsev, V B; Asatryan, S Yu

    2011-09-01

    Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that the bistability may induce spike communication by inhibitory coupled neurons in the spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate basic spike phase locking modes.

  6. Nonlinear multiplicative dendritic integration in neuron and network models

    PubMed Central

    Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si

    2013-01-01

    Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can depend heavily on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still describe only the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. We here suggest a new biophysically motivated derivation of a single compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or “shunts” the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
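
    To make the multiplicative flavor of the rule tangible, the toy calculation below uses an arithmetic shunting rule of the form ΔV = ΔV_E + ΔV_I + k·ΔV_E·ΔV_I, which is one simple form consistent with the experimental work alluded to above; the coefficient k (which would depend on the relative dendritic locations of the inputs) and the PSP amplitudes are arbitrary assumptions, not values from the paper.

```python
# Hypothetical illustration of multiplicative shunting: the summed somatic
# deflection is the linear sum of the excitatory (E) and inhibitory (I) PSPs
# plus a correction term k*E*I whose size reflects how effectively the
# inhibitory input sits on the path from the excitatory input to the soma.
def somatic_sum(delta_e, delta_i, k):
    return delta_e + delta_i + k * delta_e * delta_i

delta_e = 6.0    # excitatory PSP amplitude in mV (assumed)
delta_i = -3.0   # inhibitory PSP amplitude in mV (assumed)
for k, placement in [(0.00, "linear summation (no shunting)"),
                     (0.05, "inhibition off the direct path"),
                     (0.15, "inhibition on the path to the soma")]:
    print(f"{placement:35s} -> {somatic_sum(delta_e, delta_i, k):5.2f} mV")
```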

  7. Order-Based Representation in Random Networks of Cortical Neurons

    PubMed Central

    Kermany, Einat; Lyakhov, Vladimir; Zrenner, Christoph; Marom, Shimon

    2008-01-01

    The wide range of time scales involved in neural excitability and synaptic transmission might lead to ongoing change in the temporal structure of responses to recurring stimulus presentations on a trial-to-trial basis. This is probably the most severe biophysical constraint on putative time-based primitives of stimulus representation in neuronal networks. Here we show that in spontaneously developing large-scale random networks of cortical neurons in vitro the order in which neurons are recruited following each stimulus is a naturally emerging representation primitive that is invariant to significant temporal changes in spike times. With a relatively small number of randomly sampled neurons, the information about stimulus position is fully retrievable from the recruitment order. The effective connectivity that makes order-based representation invariant to time warping is characterized by the existence of stations through which activity is required to pass in order to propagate further into the network. This study uncovers a simple invariant in a noisy biological network in vitro; its applicability under in vivo constraints remains to be seen. PMID:19023409

  8. Temporal integration by stochastic recurrent network dynamics with bimodal neurons.

    PubMed

    Okamoto, Hiroshi; Isomura, Yoshikazu; Takada, Masahiko; Fukai, Tomoki

    2007-06-01

    Temporal integration of externally or internally driven information is required for a variety of cognitive processes. This computation is generally linked with graded rate changes in cortical neurons, which typically appear during a delay period of cognitive task in the prefrontal and other cortical areas. Here, we present a neural network model to produce graded (climbing or descending) neuronal activity. Model neurons are interconnected randomly by AMPA-receptor-mediated fast excitatory synapses and are subject to noisy background excitatory and inhibitory synaptic inputs. In each neuron, a prolonged afterdepolarizing potential follows every spike generation. Then, driven by an external input, the individual neurons display bimodal rate changes between a baseline state and an elevated firing state, with the latter being sustained by regenerated afterdepolarizing potentials. When the variance of background input and the uniform weight of recurrent synapses are adequately tuned, we show that stochastic noise and reverberating synaptic input organize these bimodal changes into a sequence that exhibits graded population activity with a nearly constant slope. To test the validity of the proposed mechanism, we analyzed the graded activity of anterior cingulate cortex neurons in monkeys performing delayed conditional Go/No-go discrimination tasks. The delay-period activities of cingulate neurons exhibited bimodal activity patterns and trial-to-trial variability that are similar to those predicted by the proposed model.

  9. Autapse-induced multiple coherence resonance in single neurons and neuronal networks

    NASA Astrophysics Data System (ADS)

    Yilmaz, Ergin; Ozer, Mahmut; Baysal, Veli; Perc, Matjaž

    2016-08-01

    We study the effects of electrical and chemical autapse on the temporal coherence or firing regularity of single stochastic Hodgkin-Huxley neurons and scale-free neuronal networks. Also, we study the effects of chemical autapse on the occurrence of spatial synchronization in scale-free neuronal networks. Irrespective of the type of autapse, we observe autaptic time delay induced multiple coherence resonance for appropriately tuned autaptic conductance levels in single neurons. More precisely, we show that in the presence of an electrical autapse, there is an optimal intensity of channel noise inducing the multiple coherence resonance, whereas in the presence of chemical autapse the occurrence of multiple coherence resonance is less sensitive to the channel noise intensity. At the network level, we find autaptic time delay induced multiple coherence resonance and synchronization transitions, occurring at approximately the same delay lengths. We show that these two phenomena can arise only at a specific range of the coupling strength, and that they can be observed independently of the average degree of the network.

  10. Autapse-induced multiple coherence resonance in single neurons and neuronal networks

    PubMed Central

    Yilmaz, Ergin; Ozer, Mahmut; Baysal, Veli; Perc, Matjaž

    2016-01-01

    We study the effects of electrical and chemical autapse on the temporal coherence or firing regularity of single stochastic Hodgkin-Huxley neurons and scale-free neuronal networks. Also, we study the effects of chemical autapse on the occurrence of spatial synchronization in scale-free neuronal networks. Irrespective of the type of autapse, we observe autaptic time delay induced multiple coherence resonance for appropriately tuned autaptic conductance levels in single neurons. More precisely, we show that in the presence of an electrical autapse, there is an optimal intensity of channel noise inducing the multiple coherence resonance, whereas in the presence of chemical autapse the occurrence of multiple coherence resonance is less sensitive to the channel noise intensity. At the network level, we find autaptic time delay induced multiple coherence resonance and synchronization transitions, occurring at approximately the same delay lengths. We show that these two phenomena can arise only at a specific range of the coupling strength, and that they can be observed independently of the average degree of the network. PMID:27480120

  11. Autapse-induced multiple coherence resonance in single neurons and neuronal networks.

    PubMed

    Yilmaz, Ergin; Ozer, Mahmut; Baysal, Veli; Perc, Matjaž

    2016-08-02

    We study the effects of electrical and chemical autapse on the temporal coherence or firing regularity of single stochastic Hodgkin-Huxley neurons and scale-free neuronal networks. Also, we study the effects of chemical autapse on the occurrence of spatial synchronization in scale-free neuronal networks. Irrespective of the type of autapse, we observe autaptic time delay induced multiple coherence resonance for appropriately tuned autaptic conductance levels in single neurons. More precisely, we show that in the presence of an electrical autapse, there is an optimal intensity of channel noise inducing the multiple coherence resonance, whereas in the presence of chemical autapse the occurrence of multiple coherence resonance is less sensitive to the channel noise intensity. At the network level, we find autaptic time delay induced multiple coherence resonance and synchronization transitions, occurring at approximately the same delay lengths. We show that these two phenomena can arise only at a specific range of the coupling strength, and that they can be observed independently of the average degree of the network.

  12. Studies on a network of complex neurons

    NASA Astrophysics Data System (ADS)

    Chakravarthy, Srinivasa V.; Ghosh, Joydeep

    1993-08-01

    In the last decade, much effort has been directed towards understanding the role of chaos in the brain. Work with rabbits reveals that in the resting state the electrical activity on the surface of the olfactory bulb is chaotic. But, when the animal is involved in a recognition task, the activity shifts to a specific pattern corresponding to the odor that is being recognized. Unstable, quasiperiodic behavior can be found in a class of conservative, deterministic physical systems called the Hamiltonian systems. In this paper, we formulate a complex version of Hopfield's network of real parameters and show that a variation on this model is a conservative system. Conditions under which the complex network can be used as a Content Addressable memory are studied. We also examine the effect of singularities of the complex sigmoid function on the network dynamics. The network exhibits unpredictable behavior at the singularities due to the failure of a uniqueness condition for the solution of the dynamic equations. On incorporating a weight adaptation rule, the structure of the resulting complex network equations is shown to have an interesting similarity with Kosko's Adaptive Bidirectional Associative Memory.

  13. Studies on a network of complex neurons

    NASA Astrophysics Data System (ADS)

    Chakravarthy, Srinivasa V.; Ghosh, Joydeep

    1993-09-01

    In the last decade, much effort has been directed towards understanding the role of chaos in the brain. Work with rabbits reveals that in the resting state the electrical activity on the surface of the olfactory bulb is chaotic. But, when the animal is involved in a recognition task, the activity shifts to a specific pattern corresponding to the odor that is being recognized. Unstable, quasiperiodic behavior can be found in a class of conservative, deterministic physical systems called the Hamiltonian systems. In this paper, we formulate a complex version of Hopfield's network of real parameters and show that a variation on this model is a conservative system. Conditions under which the complex network can be used as a Content Addressable memory are studied. We also examine the effect of singularities of the complex sigmoid function on the network dynamics. The network exhibits unpredictable behavior at the singularities due to the failure of a uniqueness condition for the solution of the dynamic equations. On incorporating a weight adaptation rule, the structure of the resulting complex network equations is shown to have an interesting similarity with Kosko's Adaptive Bidirectional Associative Memory.

  14. Bogdanov-Takens singularity in tri-neuron network with time delay.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2013-06-01

    This brief reports a retarded functional differential equation modeling a tri-neuron network with time delay. The Bogdanov-Takens (B-T) bifurcation is investigated by using the center manifold reduction and the normal form method. We obtain the versal unfolding of the normal forms at the B-T singularity and show that the model can exhibit pitchfork, Hopf, homoclinic, and double limit cycle bifurcations. Some numerical simulations are given to support the analytic results and explore chaotic dynamics. Finally, an algorithm is given to show that chaotic tri-neuron networks can be used for encrypting a color image.

  15. Degree Correlations Optimize Neuronal Network Sensitivity to Sub-Threshold Stimuli

    PubMed Central

    Schmeltzer, Christian; Kihara, Alexandre Hiroaki; Sokolov, Igor Michailovitsch; Rüdiger, Sten

    2015-01-01

    Information processing in the brain crucially depends on the topology of the neuronal connections. We investigate how the topology influences the response of a population of leaky integrate-and-fire neurons to a stimulus. We devise a method to calculate firing rates from a self-consistent system of equations taking into account the degree distribution and degree correlations in the network. We show that assortative degree correlations strongly improve the sensitivity for weak stimuli and propose that such networks possess an advantage in signal processing. We moreover find that there exists an optimum in assortativity at an intermediate level leading to a maximum in input/output mutual information. PMID:26115374

  16. NEURON: Enabling Autonomicity in Wireless Sensor Networks

    PubMed Central

    Zafeiropoulos, Anastasios; Gouvas, Panagiotis; Liakopoulos, Athanassios; Mentzas, Gregoris; Mitrou, Nikolas

    2010-01-01

    Future Wireless Sensor Networks (WSNs) will be ubiquitous, large-scale networks interconnected with the existing IP infrastructure. Autonomic functionalities have to be designed in order to reduce the complexity of their operation and management, and support the dissemination of knowledge within a WSN. In this paper a novel protocol for energy efficient deployment, clustering and routing in WSNs is proposed that focuses on the incorporation of autonomic functionalities in the existing approaches. The design of the protocol facilitates the design of innovative applications and services that are based on overlay topologies created through cooperation among the sensor nodes. PMID:22399931

  17. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific, distinctive features. Its backpropagation learning algorithm is derivative-free. The functionality of the MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.

  18. Dynamical Neural Network Model of Hippocampus with Excitatory and Inhibitory Neurons

    NASA Astrophysics Data System (ADS)

    Omori, Toshiaki; Horiguchi, Tsuyoshi

    2004-03-01

    We propose a dynamical neural network model with excitatory neurons and inhibitory neurons for memory function in hippocampus and investigate the effect of inhibitory neurons on memory recall. The results by numerical simulations show that the introduction of inhibitory neurons improves the stability of the memory recall in the proposed model by suppressing the bursting of neurons.

  19. Bifurcations of large networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-08-01

    Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
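
    For concreteness, one well-known member of this two-dimensional integrate-and-fire class is the Izhikevich model, sketched below with its standard regular-spiking parameters; the code is a generic single-neuron illustration, not the network or mean-field reduction developed in the paper.

```python
# Izhikevich model: quadratic voltage dynamics v, adaptation variable u, and a
# discontinuous reset of both variables whenever v crosses the spike cutoff.
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.1, t_max=1000.0):
    v, u = -65.0, b * (-65.0)
    spike_times = []
    for step in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike: record and reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

spikes = izhikevich()
print("spike count in 1 s of simulated time:", len(spikes))
```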

  20. Learning beyond finite memory in recurrent networks of spiking neurons.

    PubMed

    Tino, Peter; Mills, Ashely J S

    2006-03-01

    We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information processing states coding potentially unbounded histories of processed inputs. We show that it is often possible to extract from trained RSNN the target MM by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in a RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.

  1. Neuronal network disintegration: common pathways linking neurodegenerative diseases

    PubMed Central

    Ahmed, Rebekah M; Devenney, Emma M; Irish, Muireann; Ittner, Arne; Naismith, Sharon; Ittner, Lars M; Rohrer, Jonathan D; Halliday, Glenda M; Eisen, Andrew; Hodges, John R; Kiernan, Matthew C

    2016-01-01

    Neurodegeneration refers to a heterogeneous group of brain disorders that progressively evolve. It has been increasingly appreciated that many neurodegenerative conditions overlap at multiple levels, and therefore traditional clinicopathological correlation approaches to better classify a disease have met with limited success. Neuronal network disintegration is fundamental to neurodegeneration, and concepts built around network disintegration may better explain the overlap between clinical and pathological phenotypes. In this Review, promoters of overlap in neurodegeneration, incorporating behavioural, cognitive, metabolic, motor, and extrapyramidal presentations, will be critically appraised. In addition, evidence that may support the existence of large-scale networks that might be contributing to phenotypic differentiation will be considered across a neurodegenerative spectrum. Disintegration of neuronal networks through different pathological processes, such as prion-like spread, may provide a better paradigm of disease and thereby facilitate the identification of novel therapies for neurodegeneration. PMID:27172939

  2. Self-organization in a biochemical-neuron network

    NASA Astrophysics Data System (ADS)

    Okamoto, Masahiro; Maki, Yukihiro; Sekiguchi, Tatsuya; Yoshida, Satoshi

    Mimicking the switching property of cyclic enzyme systems in metabolic pathways, we have proposed a different type of molecular switching device (post-synaptic neuron) whose mechanism can be represented by a threshold-logic function capable of storing short-term memory. We have named this system the “biochemical-neuron” and have already developed a board-level analog circuit. In the present study, by building an integrated artificial neural network system composed of biochemical-neurons, we have investigated the relationship between network responses and time-variant excitatory stimuli to the network, focusing especially on the examination of neurophysiological experiments such as “selective elimination of synapses” and “associative long-term depression”. Furthermore, we discuss information processing in which time-variant external analog signals are received and transduced into impulse signals.

  3. Activity-Dependent Neuronal Model on Complex Networks

    PubMed Central

    de Arcangelis, Lucilla; Herrmann, Hans J.

    2012-01-01

    Neuronal avalanches are a novel mode of activity in neuronal networks, experimentally found in vitro and in vivo, and exhibit a robust critical behavior: these avalanches are characterized by a power law distribution for the size and duration, features found in other problems in the context of the physics of complex systems. We present a recent model inspired by self-organized criticality, which consists of an electrical network with threshold firing, a refractory period, and activity-dependent synaptic plasticity. The model reproduces the critical behavior of the distribution of avalanche sizes and durations measured experimentally. Moreover, the power spectra of the electrical signal reproduce very robustly the power law behavior found in human electroencephalogram (EEG) spectra. We implement this model on a variety of complex networks, i.e., regular, small-world, and scale-free, and verify the robustness of the critical behavior. PMID:22470347
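
    The signature power-law statistics mentioned above can be illustrated with a generic critical branching process, in which each active unit activates one downstream unit on average; this is a textbook caricature of criticality for exposition, not the specific electrical network model with plasticity studied in the paper.

```python
import random
from collections import Counter

# Critical branching process: each active unit tries to activate two targets,
# each with probability sigma/2, so the mean number of offspring equals sigma.
def avalanche_size(sigma=1.0, max_size=10000):
    active, size = 1, 0
    while active and size < max_size:
        size += active
        p = sigma / 2.0
        active = sum((random.random() < p) + (random.random() < p)
                     for _ in range(active))
    return size

sizes = Counter(avalanche_size() for _ in range(20000))
for s in (1, 2, 4, 8, 16, 32):
    print(f"P(size = {s:2d}) ~ {sizes[s] / 20000:.4f}")   # roughly follows s^(-3/2)
```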

  4. Membrane resonance in bursting pacemaker neurons of an oscillatory network is correlated with network frequency.

    PubMed

    Tohidi, Vahid; Nadim, Farzan

    2009-05-20

    Network oscillations typically span a limited range of frequency. In pacemaker-driven networks, including many central pattern generators (CPGs), this frequency range is determined by the properties of bursting pacemaker neurons and their synaptic connections; thus, factors that affect the burst frequency of pacemaker neurons should play a role in determining the network frequency. We examine the role of membrane resonance of pacemaker neurons on the network frequency in the crab pyloric CPG. The pyloric oscillations (frequency of approximately 1 Hz) are generated by a group of pacemaker neurons: the anterior burster (AB) and the pyloric dilator (PD). We examine the impedance profiles of the AB and PD neurons in response to sinusoidal current injections with varying frequency and find that both neuron types exhibit membrane resonance, i.e., demonstrate maximal impedance at a given preferred frequency. The membrane resonance frequencies of the AB and PD neurons fall within the range of the pyloric network oscillation frequency. Experiments with pharmacological blockers and computational modeling show that both calcium currents I(Ca) and the hyperpolarization-activated inward current I(h) are important in producing the membrane resonance in these neurons. We then demonstrate that both the membrane resonance frequency of the PD neuron and its suprathreshold bursting frequency can be shifted in the same direction by either direct current injection or by using the dynamic-clamp technique to inject artificial conductances for I(h) or I(Ca). Together, these results suggest that membrane resonance of pacemaker neurons can be strongly correlated with the CPG oscillation frequency.
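
    The impedance-profile measurement described above can be mimicked on a simple linear resonant membrane (a leak conductance plus a slow, h-like negative feedback variable): drive it with sinusoidal currents of different frequencies and record the ratio of voltage to current amplitude. All parameter values below are arbitrary illustrations chosen to place the resonance near 1 Hz, not measurements from the AB or PD neurons.

```python
import math

# Linear resonant membrane: cap*dv/dt = -g_leak*v - g_slow*w + I(t) and
# tau_slow*dw/dt = v - w. The slow feedback attenuates low frequencies and the
# capacitance attenuates high frequencies, producing a band-pass impedance.
def impedance(freq_hz, g_leak=0.1, g_slow=0.3, tau_slow=500.0, cap=10.0,
              amp=0.1, dt=0.5, cycles=10):
    v, w = 0.0, 0.0
    period_ms = 1000.0 / freq_hz
    steps = int(cycles * period_ms / dt)
    v_min = v_max = 0.0
    for step in range(steps):
        t = step * dt
        i_inj = amp * math.sin(2.0 * math.pi * freq_hz * t / 1000.0)
        dv = (-g_leak * v - g_slow * w + i_inj) / cap
        dw = (v - w) / tau_slow
        v += dt * dv
        w += dt * dw
        if step > steps // 2:               # measure after transients decay
            v_min, v_max = min(v_min, v), max(v_max, v)
    return (v_max - v_min) / (2.0 * amp)    # |Z(f)| from the voltage amplitude

profile = {f: impedance(f) for f in (0.2, 0.5, 1.0, 2.0, 5.0)}
print("preferred (resonance) frequency:", max(profile, key=profile.get), "Hz")
```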

  5. Multitasking attractor networks with neuronal threshold noise.

    PubMed

    Agliari, Elena; Barra, Adriano; Galluzzi, Andrea; Isopi, Marco

    2014-01-01

    We consider the multitasking associative network in the low-storage limit and study its phase diagram with respect to the noise level T and the degree d of dilution in pattern entries. We find that the system is characterized by a rich variety of stable states, including pure states, parallel retrieval states, hierarchically organized states and symmetric mixtures (remarkably, both even and odd), whose complexity increases as the number of patterns P grows. The analysis is performed both analytically and numerically: exploiting techniques based on partial differential equations, we are able to obtain the self-consistency equations for the order parameters. These self-consistency equations are then solved and the solutions are further checked through stability theory to catalog their organization into the phase diagram, which is outlined at the end. This is a further step towards the understanding of spontaneous parallel processing in associative networks.

  6. The Bifurcating Neuron Network 2: an analog associative memory.

    PubMed

    Lee, Geehyuk; Farhat, Nabil H

    2002-01-01

    The Bifurcating Neuron (BN), a chaotic integrate-and-fire neuron, is a model of a neuron augmented by coherent modulation from its environment. The BN is mathematically equivalent to the sine-circle map, and this equivalence relationship allowed us to apply the mathematics of one-dimensional maps to the design of a BN network. The study of the bifurcating diagram of the BN revealed that the BN, under a suitable condition, can function as an amplitude-to-phase converter. Also, being an integrate-and-fire neuron, it has an inherent capability to function as a coincidence detector. These two observations led us to the design of the BN Network 2 (BNN-2), a pulse-coupled neural network that exhibits associative memory of multiple analog patterns. In addition to the usual dynamical properties as an associative memory, the BNN-2 was shown to exhibit volume-holographic memory: it switches to different pages of its memory space as the frequency of the coherent modulation changes, meaning context-sensitive memory.
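
    For reference, the sine-circle map mentioned above has the standard form θ(n+1) = θ(n) + Ω − (K/2π)·sin(2πθ(n)) mod 1; the short iteration below is a generic illustration of that map with arbitrary example values of Ω and K, not the specific operating point of the BNN-2.

```python
import math

# Iterate the sine-circle map; depending on Omega and K the orbit mode-locks,
# drifts quasiperiodically, or (for K > 1) can become chaotic.
def sine_circle_orbit(theta0=0.1, omega=0.2, K=0.9, n=200):
    theta = theta0
    orbit = []
    for _ in range(n):
        theta = (theta + omega
                 - (K / (2.0 * math.pi)) * math.sin(2.0 * math.pi * theta)) % 1.0
        orbit.append(theta)
    return orbit

print("first phases of the orbit:", [round(t, 3) for t in sine_circle_orbit()[:6]])
```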

  7. Beyond Statistical Significance: Implications of Network Structure on Neuronal Activity

    PubMed Central

    Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2012-01-01

    It is a common and good practice in experimental sciences to assess the statistical significance of measured outcomes. For this, the probability of obtaining the actual results is estimated under the assumption of an appropriately chosen null-hypothesis. If this probability is smaller than some threshold, the results are deemed statistically significant and the researchers are content in having revealed, within their own experimental domain, a “surprising” anomaly, possibly indicative of a hitherto hidden fragment of the underlying “ground-truth”. What is often neglected, though, is the actual importance of these experimental outcomes for understanding the system under investigation. We illustrate this point by giving practical and intuitive examples from the field of systems neuroscience. Specifically, we use the notion of embeddedness to quantify the impact of a neuron's activity on its downstream neurons in the network. We show that the network response strongly depends on the embeddedness of stimulated neurons and that embeddedness is a key determinant of the importance of neuronal activity on local and downstream processing. We extrapolate these results to other fields in which networks are used as a theoretical framework. PMID:22291581

  8. Network architecture underlying maximal separation of neuronal representations

    PubMed Central

    Jortner, Ron A.

    2011-01-01

    One of the most basic and general tasks faced by all nervous systems is extracting relevant information from the organism's surrounding world. While physical signals available to sensory systems are often continuous, variable, overlapping, and noisy, high-level neuronal representations used for decision-making tend to be discrete, specific, invariant, and highly separable. This study addresses the question of how neuronal specificity is generated. Inspired by experimental findings on network architecture in the olfactory system of the locust, I construct a highly simplified theoretical framework which allows for analytic solution of its key properties. For generalized feed-forward systems, I show that an intermediate range of connectivity values between source- and target-populations leads to a combinatorial explosion of wiring possibilities, resulting in input spaces which are, by their very nature, exquisitely sparsely populated. In particular, connection probability ½, as found in the locust antennal-lobe–mushroom-body circuit, serves to maximize separation of neuronal representations across the target Kenyon cells (KCs), and explains their specific and reliable responses. This analysis yields a function expressing response specificity in terms of lower network parameters; together with appropriate gain control this leads to a simple neuronal algorithm for generating arbitrarily sparse and selective codes and linking network architecture and neural coding. I suggest a straightforward way to construct ecologically meaningful representations from this code. PMID:23316159
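
    The combinatorial point can be checked directly: with N source neurons and a fixed in-degree k per target cell, the number of distinct wiring patterns C(N, k) is largest at k = N/2, i.e. at connection probability one half. The value of N below is only a rough, illustrative figure for the locust projection-neuron population, not a parameter taken from the study.

```python
from math import comb, log2

N = 830   # illustrative count of antennal-lobe projection neurons (assumed)
for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    k = round(p * N)
    bits = log2(comb(N, k))   # log2 of the number of possible wiring patterns
    print(f"connection probability {p:4.2f} -> {bits:7.1f} bits of wiring entropy")
```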

  9. Three-dimensional functional human neuronal networks in uncompressed low-density electrospun fiber scaffolds.

    PubMed

    Jakobsson, Albin; Ottosson, Maximilian; Zalis, Marina Castro; O'Carroll, David; Johansson, Ulrica Englund; Johansson, Fredrik

    2017-01-05

    We demonstrate an artificial three-dimensional (3D) electrically active human neuronal network system, by the growth of brain neural progenitors in highly porous, low-density electrospun poly-ε-caprolactone (PCL) fiber scaffolds. In neuroscience research, cell-based assays are important experimental instruments for studying neuronal function in health and disease. Traditional cell culture on 2D surfaces induces abnormal cell-cell contacts and network formation. Hence, there is a tremendous need to explore in vivo-resembling 3D neural cell culture approaches. We present an improved electrospinning method for fabrication of scaffolds that promote neuronal differentiation into highly 3D integrated networks, formation of inhibitory and excitatory synapses and extensive neurite growth. Notably, in 3D scaffolds an in vivo-resembling intermixed neuronal and glial cell network was formed, whereas in parallel 2D cultures a neuronal cell layer grew separated from an underlying glial cell layer. Hence, the 3D cell assay presented here will most likely provide more physiologically relevant results.

  10. Elucidation of The Behavioral Program and Neuronal Network Encoded by Dorsal Raphe Serotonergic Neurons.

    PubMed

    Urban, Daniel J; Zhu, Hu; Marcinkiewcz, Catherine A; Michaelides, Michael; Oshibuchi, Hidehiro; Rhea, Darren; Aryal, Dipendra K; Farrell, Martilias S; Lowery-Gionta, Emily; Olsen, Reid H J; Wetsel, William C; Kash, Thomas L; Hurd, Yasmin L; Tecott, Laurence H; Roth, Bryan L

    2016-04-01

    Elucidating how the brain's serotonergic network mediates diverse behavioral actions over both relatively short (minutes-hours) and long (days-weeks) periods of time remains a major challenge for neuroscience. Our relative ignorance is largely due to the lack of technologies with robustness, reversibility, and spatio-temporal control. Recently, we have demonstrated that our chemogenetic approach (e.g., Designer Receptors Exclusively Activated by Designer Drugs (DREADDs)) provides a reliable and robust tool for controlling genetically defined neural populations. Here we show how short- and long-term activation of dorsal raphe nucleus (DRN) serotonergic neurons induces robust behavioral responses. We found that both short- and long-term activation of DRN serotonergic neurons induces antidepressant-like behavioral responses. However, only short-term activation induces anxiogenic-like behaviors. In parallel, these behavioral phenotypes were associated with a metabolic map of whole brain network activity via a recently developed non-invasive imaging technology, DREAMM (DREADD Associated Metabolic Mapping). Our findings reveal a previously unappreciated brain network elicited by selective activation of DRN serotonin neurons and illuminate potential therapeutic and adverse effects of drugs targeting DRN neurons.

  11. Elucidation of The Behavioral Program and Neuronal Network Encoded by Dorsal Raphe Serotonergic Neurons

    PubMed Central

    Urban, Daniel J; Zhu, Hu; Marcinkiewcz, Catherine A; Michaelides, Michael; Oshibuchi, Hidehiro; Rhea, Darren; Aryal, Dipendra K; Farrell, Martilias S; Lowery-Gionta, Emily; Olsen, Reid H J; Wetsel, William C; Kash, Thomas L; Hurd, Yasmin L; Tecott, Laurence H; Roth, Bryan L

    2016-01-01

    Elucidating how the brain's serotonergic network mediates diverse behavioral actions over both relatively short (minutes–hours) and long (days–weeks) periods of time remains a major challenge for neuroscience. Our relative ignorance is largely due to the lack of technologies with robustness, reversibility, and spatio-temporal control. Recently, we have demonstrated that our chemogenetic approach (e.g., Designer Receptors Exclusively Activated by Designer Drugs (DREADDs)) provides a reliable and robust tool for controlling genetically defined neural populations. Here we show how short- and long-term activation of dorsal raphe nucleus (DRN) serotonergic neurons induces robust behavioral responses. We found that both short- and long-term activation of DRN serotonergic neurons induces antidepressant-like behavioral responses. However, only short-term activation induces anxiogenic-like behaviors. In parallel, these behavioral phenotypes were associated with a metabolic map of whole brain network activity via a recently developed non-invasive imaging technology, DREAMM (DREADD Associated Metabolic Mapping). Our findings reveal a previously unappreciated brain network elicited by selective activation of DRN serotonin neurons and illuminate potential therapeutic and adverse effects of drugs targeting DRN neurons. PMID:26383016

  12. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging

    PubMed Central

    Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.

    2017-01-01

    Background: Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s–1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. New method: Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results: We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. Comparison with existing method(s): We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions: We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
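
    As a rough illustration of the template-based transient detection the abstract compares against peak- and wavelet-based methods, the sketch below correlates a ΔF/F trace against a stereotyped rise-and-decay template. It is a generic stand-in written in Python, not FluoroSNNAP's MATLAB implementation; the sampling rate, template time constants, and correlation threshold are assumptions.

    ```python
    # Hedged sketch of generic template matching for calcium-transient detection;
    # this is not FluoroSNNAP's actual algorithm, just the general idea of
    # correlating a DF/F trace against a stereotyped rise/decay template.
    import numpy as np

    def make_template(fs=20.0, t_rise=0.2, t_decay=1.0, duration=3.0):
        """Stereotyped transient: fast exponential rise, slower exponential decay."""
        t = np.arange(0.0, duration, 1.0 / fs)
        return (1.0 - np.exp(-t / t_rise)) * np.exp(-t / t_decay)

    def detect_transients(dff, template, threshold=0.85):
        """Return sample indices where the local trace correlates with the template."""
        w = len(template)
        onsets, i = [], 0
        while i + w <= len(dff):
            seg = dff[i:i + w]
            if np.std(seg) > 0:
                r = np.corrcoef(seg, template)[0, 1]
                if r > threshold:
                    onsets.append(i)
                    i += w          # skip past this event
                    continue
            i += 1
        return onsets

    # Toy trace: two synthetic transients embedded in noise.
    fs = 20.0
    tmpl = make_template(fs)
    trace = 0.02 * np.random.randn(int(30 * fs))
    for onset in (100, 400):
        trace[onset:onset + len(tmpl)] += 0.5 * tmpl
    print(detect_transients(trace, tmpl))   # approximate onset indices
    ```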

  13. Effects of acute spinalization on neurons of postural networks

    PubMed Central

    Zelenin, Pavel V.; Lyalka, Vladimir F.; Hsu, Li-Ju; Orlovsky, Grigori N.; Deliagina, Tatiana G.

    2016-01-01

    Postural limb reflexes (PLRs) represent a substantial component of postural corrections. Spinalization results in loss of postural functions, including disappearance of PLRs. The aim of the present study was to characterize the effects of acute spinalization on two populations of spinal neurons (F and E) mediating PLRs, which we characterized previously. For this purpose, in decerebrate rabbits spinalized at T12, responses of L5 interneurons to stimulation causing PLRs before spinalization were recorded. The results were compared to control data obtained in our previous study. We found that spinalization affected the distribution of F- and E-neurons across the spinal grey matter and caused a significant decrease in their activity, as well as disturbances in the processing of posture-related sensory inputs. A two-fold decrease in the proportion of F-neurons in the intermediate grey matter was observed. The locations of the populations of F- and E-neurons exhibiting a significant decrease in activity were determined. A dramatic decrease in the efficacy of sensory input from the ipsilateral limb to F-neurons, and from the contralateral limb to E-neurons, was found. These changes in the operation of postural networks underlie the loss of postural control after spinalization, and represent a starting point for the development of spasticity. PMID:27302149

  14. Predicting Single-Neuron Activity in Locally Connected Networks

    PubMed Central

    Azhar, Feraz; Anderson, William S.

    2014-01-01

    The characterization of coordinated activity in neuronal populations has received renewed interest in the light of advancing experimental techniques that allow recordings from multiple units simultaneously. Across both in vitro and in vivo preparations, nearby neurons show coordinated responses when spontaneously active and when subject to external stimuli. Recent work (Truccolo, Hochberg, & Donoghue, 2010) has connected these coordinated responses to behavior, showing that small ensembles of neurons in arm-related areas of sensorimotor cortex can reliably predict single-neuron spikes in behaving monkeys and humans. We investigate this phenomenon using an analogous point process model, showing that in the case of a computational model of cortex responding to random background inputs, one is similarly able to predict the future state of a single neuron by considering its own spiking history, together with the spiking histories of randomly sampled ensembles of nearby neurons. This model exhibits realistic cortical architecture and displays bursting episodes in the two distinct connectivity schemes studied. We conjecture that the baseline predictability we find in these instances is characteristic of locally connected networks more broadly considered. PMID:22845824
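
    The prediction scheme described here, inferring a neuron's next-bin spiking from its own history plus the histories of a sampled ensemble, can be sketched with a discrete-time point-process (logistic) model. The code below is a generic stand-in run on surrogate data, not the authors' model; the bin size, history length, ensemble size, and the use of scikit-learn are assumptions.

    ```python
    # Hedged sketch of the general idea: predict a target neuron's spiking in the
    # next time bin from its own recent history plus the histories of an
    # ensemble, via a discrete-time point-process (here: logistic) model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_bins, n_neurons, hist = 5000, 20, 5   # time bins, ensemble size, history bins

    # Surrogate binary spike trains with weak shared drive (stand-in for real data).
    drive = rng.random(n_bins) < 0.1
    spikes = (rng.random((n_bins, n_neurons)) < 0.02 + 0.3 * drive[:, None]).astype(int)

    # Design matrix: flattened spiking history of all neurons over `hist` past bins
    # (the target neuron's own history is included as one of the columns).
    X = np.array([spikes[t - hist:t].ravel() for t in range(hist, n_bins)])
    y = spikes[hist:, 0]                     # future state of target neuron 0

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("predictive accuracy:", model.score(X, y))
    ```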

  15. Inhibitory neurons promote robust critical firing dynamics in networks of integrate-and-fire neurons

    NASA Astrophysics Data System (ADS)

    Lu, Zhixin; Squires, Shane; Ott, Edward; Girvan, Michelle

    2016-12-01

    We study the firing dynamics of a discrete-state and discrete-time version of an integrate-and-fire neuronal network model with both excitatory and inhibitory neurons. When the integer-valued state of a neuron exceeds a threshold value, the neuron fires, sends out state-changing signals to its connected neurons, and returns to the resting state. In this model, a continuous phase transition from non-ceaseless firing to ceaseless firing is observed. At criticality, power-law distributions of avalanche size and duration with the previously derived exponents, -3/2 and -2, respectively, are observed. Using a mean-field approach, we show analytically how the critical point depends on model parameters. Our main result is that the combined presence of both inhibitory neurons and integrate-and-fire dynamics greatly enhances the robustness of critical power-law behavior (i.e., there is an increased range of parameters, including both sub- and supercritical values, for which several decades of power-law behavior occurs).
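
    A stripped-down version of such a discrete-state, discrete-time avalanche model can be simulated directly, driving one unit at a time and counting how many firings each drive triggers. The sketch below is a simplified toy with assumed network size, threshold, connectivity, and inhibitory fraction; it shows how avalanche-size statistics are typically collected rather than reproducing the paper's exact model or its mean-field analysis.

    ```python
    # Hedged toy version of a discrete-state, discrete-time integrate-and-fire
    # avalanche model with excitatory and inhibitory units.  Update rules,
    # network size and coupling are simplified assumptions for illustration.
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(1)
    N, threshold, p_conn, frac_inh, n_drives = 400, 10, 0.02, 0.2, 20000

    sign = np.where(rng.random(N) < frac_inh, -1, 1)   # +1 excitatory, -1 inhibitory
    conn = rng.random((N, N)) < p_conn                  # sparse random connectivity
    np.fill_diagonal(conn, False)
    state = rng.integers(0, threshold, size=N)

    sizes = []
    for _ in range(n_drives):
        state[rng.integers(N)] += 1                     # external drive: one unit step
        size = 0
        active = np.flatnonzero(state >= threshold)
        while active.size:                              # propagate the avalanche
            size += active.size
            for i in active:
                targets = np.flatnonzero(conn[i])
                state[targets] += sign[i]
                state[i] = 0                            # return to rest after firing
            np.clip(state, 0, None, out=state)
            active = np.flatnonzero(state >= threshold)
        if size:
            sizes.append(size)

    # Raw avalanche-size histogram (smallest sizes only, for brevity).
    hist = Counter(sizes)
    for s in sorted(hist)[:10]:
        print(s, hist[s])
    ```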

  16. Inhibitory neurons promote robust critical firing dynamics in networks of integrate-and-fire neurons.

    PubMed

    Lu, Zhixin; Squires, Shane; Ott, Edward; Girvan, Michelle

    2016-12-01

    We study the firing dynamics of a discrete-state and discrete-time version of an integrate-and-fire neuronal network model with both excitatory and inhibitory neurons. When the integer-valued state of a neuron exceeds a threshold value, the neuron fires, sends out state-changing signals to its connected neurons, and returns to the resting state. In this model, a continuous phase transition from non-ceaseless firing to ceaseless firing is observed. At criticality, power-law distributions of avalanche size and duration with the previously derived exponents, -3/2 and -2, respectively, are observed. Using a mean-field approach, we show analytically how the critical point depends on model parameters. Our main result is that the combined presence of both inhibitory neurons and integrate-and-fire dynamics greatly enhances the robustness of critical power-law behavior (i.e., there is an increased range of parameters, including both sub- and supercritical values, for which several decades of power-law behavior occurs).

  17. A Study of Neuronal Properties, Synaptic Plasticity and Network Interactions Using a Computer Reconstituted Neuronal Network Derived from Fundamental Biophysical Principles

    DTIC Science & Technology

    1992-06-01

    Fragmentary record excerpt; the recoverable content cites: Tam, D. C. (1992), "Object-oriented programming techniques for ... reconstructing functional properties of biological neuronal networks," Proceedings of the Simulation Technology Conference (in press); and grant support including USPHS BRSG grant RR05425-30, "Synaptic Interactions in Neuronal Networks," 6/91 - 3/92.

  18. Synaptic mechanisms of persistent reverberatory activity in neuronal networks.

    PubMed

    Lau, Pak-Ming; Bi, Guo-Qiang

    2005-07-19

    For brain functions such as working memory and motor planning, neuronal circuits are able to sustain persistent activity after transient inputs. Theoretical studies have suggested that persistent activity can exist in recurrently connected networks as active reverberation. However, the actual cellular processes underlying such reverberation are not well understood. In this study, we investigated the basic synaptic mechanisms responsible for reverberatory activity in small networks of rat hippocampal neurons in vitro. We found that brief stimulation of one neuron in a network could evoke, in an all-or-none fashion, reverberatory activity lasting for seconds. The reverberation was likely to arise from recurrent excitation because it was eliminated by partial inhibition of alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA)-type glutamate receptors (but not by blockade of NMDA receptors). In contrast, blocking inhibitory transmission with bicuculline enhanced the reverberation. Furthermore, paired-pulse stimuli with interpulse intervals of 200-400 ms were more effective than single pulses in triggering reverberation, apparently by eliciting higher levels of asynchronous transmitter release. Suppressing asynchronous release by EGTA-AM abolished reverberation, whereas elevating asynchronous release by strontium substantially enhanced reverberation. Finally, manipulating calcium uptake into or release from intracellular stores also modulated the level of reverberation. Thus, the oft-overlooked asynchronous phase of synaptic transmission plays a central role in the emergent phenomenon of network reverberation.

  19. Synaptic mechanisms of persistent reverberatory activity in neuronal networks

    PubMed Central

    Lau, Pak-Ming; Bi, Guo-Qiang

    2005-01-01

    For brain functions such as working memory and motor planning, neuronal circuits are able to sustain persistent activity after transient inputs. Theoretical studies have suggested that persistent activity can exist in recurrently connected networks as active reverberation. However, the actual cellular processes underlying such reverberation are not well understood. In this study, we investigated the basic synaptic mechanisms responsible for reverberatory activity in small networks of rat hippocampal neurons in vitro. We found that brief stimulation of one neuron in a network could evoke, in an all-or-none fashion, reverberatory activity lasting for seconds. The reverberation was likely to arise from recurrent excitation because it was eliminated by partial inhibition of α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA)-type glutamate receptors (but not by blockade of NMDA receptors). In contrast, blocking inhibitory transmission with bicuculline enhanced the reverberation. Furthermore, paired-pulse stimuli with interpulse intervals of 200–400 ms were more effective than single pulses in triggering reverberation, apparently by eliciting higher levels of asynchronous transmitter release. Suppressing asynchronous release by EGTA-AM abolished reverberation, whereas elevating asynchronous release by strontium substantially enhanced reverberation. Finally, manipulating calcium uptake into or release from intracellular stores also modulated the level of reverberation. Thus, the oft-overlooked asynchronous phase of synaptic transmission plays a central role in the emergent phenomenon of network reverberation. PMID:16006530

  20. GABA-A receptor antagonists increase firing, bursting and synchrony of spontaneous activity in neuronal networks grown on microelectrode arrays: a step towards chemical "fingerprinting"

    EPA Science Inventory

    Assessment of effects on spontaneous network activity in neurons grown on MEAs is a proposed method to screen chemicals for potential neurotoxicity. In addition, differential effects on network activity (chemical "fingerprints") could be used to classify chemical modes of action....

  2. Oscillations in the bistable regime of neuronal networks.

    PubMed

    Roxin, Alex; Compte, Albert

    2016-07-01

    Bistability between attracting fixed points in neuronal networks has been hypothesized to underlie persistent activity observed in several cortical areas during working memory tasks. In network models this kind of bistability arises due to strong recurrent excitation, sufficient to generate a state of high activity created in a saddle-node (SN) bifurcation. On the other hand, canonical network models of excitatory and inhibitory neurons (E-I networks) robustly produce oscillatory states via a Hopf (H) bifurcation due to the E-I loop. This mechanism for generating oscillations has been invoked to explain the emergence of brain rhythms in the β to γ bands. Although both bistability and oscillatory activity have been intensively studied in network models, there has not been much focus on the coincidence of the two. Here we show that when oscillations emerge in E-I networks in the bistable regime, their phenomenology can be explained to a large extent by considering coincident SN and H bifurcations, known as a codimension two Takens-Bogdanov bifurcation. In particular, we find that such oscillations are not composed of a stable limit cycle, but rather are due to noise-driven oscillatory fluctuations. Furthermore, oscillations in the bistable regime can, in principle, have arbitrarily low frequency.
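
    The distinction the authors draw, oscillations as noise-driven fluctuations around a stable state rather than a deterministic limit cycle, can be illustrated with a generic Wilson-Cowan-style E-I rate model integrated with additive noise. The parameter values and the sigmoidal rate function below are illustrative assumptions, not the paper's network; the point is only that, for suitable parameters, the noisy fluctuations acquire a dominant frequency even when the noiseless system settles to a fixed point.

    ```python
    # Hedged sketch: a generic Wilson-Cowan-style E-I rate model driven by noise,
    # illustrating noise-sustained oscillatory fluctuations.  All parameter
    # values are illustrative assumptions, not the paper's network.
    import numpy as np

    def f(x):
        return 1.0 / (1.0 + np.exp(-x))       # sigmoidal rate function

    rng = np.random.default_rng(2)
    dt, T = 0.1, 2000.0
    wEE, wEI, wIE, wII = 12.0, 10.0, 10.0, 2.0
    hE, hI, tauI, sigma = -2.0, -3.5, 2.0, 0.05

    n = int(T / dt)
    E = np.zeros(n); I = np.zeros(n)
    E[0], I[0] = 0.3, 0.2
    for t in range(n - 1):
        noise = sigma * np.sqrt(dt) * rng.standard_normal()   # noise on E only
        E[t + 1] = E[t] + dt * (-E[t] + f(wEE * E[t] - wEI * I[t] + hE)) + noise
        I[t + 1] = I[t] + dt * (-I[t] + f(wIE * E[t] - wII * I[t] + hI)) / tauI

    # Crude spectral peak of the E-population fluctuations (ignores the DC bin).
    spec = np.abs(np.fft.rfft(E - E.mean()))**2
    freqs = np.fft.rfftfreq(n, d=dt)
    print("dominant fluctuation frequency:", freqs[1:][np.argmax(spec[1:])])
    ```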

  3. Oscillations in the bistable regime of neuronal networks

    NASA Astrophysics Data System (ADS)

    Roxin, Alex; Compte, Albert

    2016-07-01

    Bistability between attracting fixed points in neuronal networks has been hypothesized to underlie persistent activity observed in several cortical areas during working memory tasks. In network models this kind of bistability arises due to strong recurrent excitation, sufficient to generate a state of high activity created in a saddle-node (SN) bifurcation. On the other hand, canonical network models of excitatory and inhibitory neurons (E-I networks) robustly produce oscillatory states via a Hopf (H) bifurcation due to the E-I loop. This mechanism for generating oscillations has been invoked to explain the emergence of brain rhythms in the β to γ bands. Although both bistability and oscillatory activity have been intensively studied in network models, there has not been much focus on the coincidence of the two. Here we show that when oscillations emerge in E-I networks in the bistable regime, their phenomenology can be explained to a large extent by considering coincident SN and H bifurcations, known as a codimension two Takens-Bogdanov bifurcation. In particular, we find that such oscillations are not composed of a stable limit cycle, but rather are due to noise-driven oscillatory fluctuations. Furthermore, oscillations in the bistable regime can, in principle, have arbitrarily low frequency.

  4. Targeting single neuronal networks for gene expression and cell labeling in vivo.

    PubMed

    Marshel, James H; Mori, Takuma; Nielsen, Kristina J; Callaway, Edward M

    2010-08-26

    To understand fine-scale structure and function of single mammalian neuronal networks, we developed and validated a strategy to genetically target and trace monosynaptic inputs to a single neuron in vitro and in vivo. The strategy independently targets a neuron and its presynaptic network for specific gene expression and fine-scale labeling, using single-cell electroporation of DNA to target infection and monosynaptic retrograde spread of a genetically modifiable rabies virus. The technique is highly reliable, with transsynaptic labeling occurring in every electroporated neuron infected by the virus. Targeting single neocortical neuronal networks in vivo, we found clusters of both spiny and aspiny neurons surrounding the electroporated neuron in each case, in addition to intricately labeled distal cortical and subcortical inputs. This technique, broadly applicable for probing and manipulating single neuronal networks with single-cell resolution in vivo, may help shed new light on fundamental mechanisms underlying circuit development and information processing by neuronal networks throughout the brain.

  5. The Drosophila Clock Neuron Network Features Diverse Coupling Modes and Requires Network-wide Coherence for Robust Circadian Rhythms.

    PubMed

    Yao, Zepeng; Bennett, Amelia J; Clem, Jenna L; Shafer, Orie T

    2016-12-13

    In animals, networks of clock neurons containing molecular clocks orchestrate daily rhythms in physiology and behavior. However, how various types of clock neurons communicate and coordinate with one another to produce coherent circadian rhythms is not well understood. Here, we investigate clock neuron coupling in the brain of Drosophila and demonstrate that the fly's various groups of clock neurons display unique and complex coupling relationships to core pacemaker neurons. Furthermore, we find that coordinated free-running rhythms require molecular clock synchrony not only within the well-characterized lateral clock neuron classes but also between lateral clock neurons and dorsal clock neurons. These results uncover unexpected patterns of coupling in the clock neuron network and reveal that robust free-running behavioral rhythms require a coherence of molecular oscillations across most of the fly's clock neuron network.

  6. A novel recurrent neural network with one neuron and finite-time convergence for k-winners-take-all operation.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Cao, Jinde

    2010-07-01

    In this paper, based on a one-neuron recurrent neural network, a novel k-winners-take-all (k-WTA) network is proposed. Finite-time convergence of the proposed neural network is proved using the Lyapunov method. The k-WTA operation is first converted equivalently into a linear programming problem. Then, a one-neuron recurrent neural network is proposed to get the kth or (k+1)th largest inputs of the k-WTA problem. Furthermore, a k-WTA network is designed based on the proposed neural network to perform the k-WTA operation. Compared with the existing k-WTA networks, the proposed network has a simple structure and finite-time convergence. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed k-WTA network.
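
    For reference, the k-WTA input-output operation itself (a binary vector marking the k largest of n inputs) is easy to state directly. The sketch below computes it by sorting; it is only a functional reference for what the proposed one-neuron recurrent network converges to, not an implementation of that network or of its linear-programming reformulation.

    ```python
    # Hedged illustration of the k-WTA input-output mapping (1 for the k largest
    # inputs, 0 otherwise), computed by direct sorting rather than by the paper's
    # one-neuron recurrent network.
    import numpy as np

    def k_wta(u, k):
        """Return a binary vector selecting the k largest entries of u
        (ties at the threshold would select more than k entries)."""
        u = np.asarray(u, dtype=float)
        kth_largest = np.sort(u)[-k]      # threshold between kth and (k+1)th largest
        return (u >= kth_largest).astype(int)

    u = [0.3, 1.7, -0.2, 0.9, 1.1]
    print(k_wta(u, k=2))                  # -> [0 1 0 0 1]
    ```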

  7. Thermodynamics and signatures of criticality in a network of neurons

    PubMed Central

    Tkačik, Gašper; Mora, Thierry; Marre, Olivier; Amodei, Dario; Palmer, Stephanie E.; Berry, Michael J.; Bialek, William

    2015-01-01

    The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but, with more spikes, the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N = 160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as N increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. We suggest further tests of criticality, and give a brief discussion of its functional significance. PMID:26330611
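
    The probability/numerosity trade-off that the abstract maps onto energy and entropy can be illustrated numerically: assign each observed binary pattern an energy E = -ln P(pattern) and take the entropy of an energy bin to be the log of the number of distinct patterns falling in it. The surrogate raster, the small N, and the bin width below are assumptions chosen so the counting stays tractable; the paper's analysis uses recorded retinal populations of up to N = 160 together with model-based extrapolation.

    ```python
    # Hedged numerical illustration of the probability/numerosity trade-off:
    # energy of a pattern is E = -ln P(pattern); entropy of an energy bin is the
    # log of the number of distinct patterns in it.  Surrogate data, small N.
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(3)
    N, T = 10, 200000                     # small N so patterns can be counted directly
    raster = (rng.random((T, N)) < 0.08).astype(int)   # sparse surrogate spike patterns

    counts = Counter(map(tuple, raster))
    probs = {pat: c / T for pat, c in counts.items()}

    bin_width = 0.5
    bins = Counter()
    for pat, p in probs.items():
        E = -np.log(p)                    # energy of this pattern
        bins[round(E / bin_width)] += 1   # number of distinct patterns per energy bin

    for b in sorted(bins):
        print(f"E ~ {b * bin_width:5.2f}   S = ln(#patterns) = {np.log(bins[b]):5.2f}")
    ```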

  8. Continuous network of endoplasmic reticulum in cerebellar Purkinje neurons.

    PubMed Central

    Terasaki, M; Slater, N T; Fein, A; Schmidek, A; Reese, T S

    1994-01-01

    Purkinje neurons in rat cerebellar slices injected with an oil drop saturated with 1,1'-dihexadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate [DiIC16(3) or DiI] to label the endoplasmic reticulum were observed by confocal microscopy. DiI spread throughout the cell body and dendrites and into the axon. DiI spreading is due to diffusion in a continuous bilayer and is not due to membrane trafficking because it also spreads in fixed neurons. DiI stained such features of the endoplasmic reticulum as densities at branch points, reticular networks in the cell body and dendrites, the nuclear envelope, spines, and aggregates formed during anoxia in low extracellular Ca2+. In cultured rat hippocampal neurons, where optical conditions provide more detail, DiI labeled a clearly delineated network of endoplasmic reticulum in the cell body. We conclude that there is a continuous compartment of endoplasmic reticulum extending from the cell body throughout the dendrites. This compartment may coordinate and integrate neuronal functions. PMID:7519781

  9. Neuronal networks provide rapid neuroprotection against spreading toxicity

    PubMed Central

    Samson, Andrew J.; Robertson, Graham; Zagnoni, Michele; Connolly, Christopher N.

    2016-01-01

    Acute secondary neuronal cell death, as seen in neurodegenerative disease, cerebral ischemia (stroke) and traumatic brain injury (TBI), drives spreading neurotoxicity into surrounding, undamaged, brain areas. This spreading toxicity occurs via two mechanisms, synaptic toxicity through hyperactivity, and excitotoxicity following the accumulation of extracellular glutamate. To date, there are no fast-acting therapeutic tools capable of terminating secondary spreading toxicity within a time frame relevant to the emergency treatment of stroke or TBI patients. Here, using hippocampal neurons (DIV 15–20) cultured in microfluidic devices in order to deliver a localized excitotoxic insult, we replicate secondary spreading toxicity and demonstrate that this process is driven by GluN2B receptors. In addition to the modeling of spreading toxicity, this approach has uncovered a previously unknown, fast acting, GluN2A-dependent neuroprotective signaling mechanism. This mechanism utilizes the innate capacity of surrounding neuronal networks to provide protection against both forms of spreading neuronal toxicity, synaptic hyperactivity and direct glutamate excitotoxicity. Importantly, network neuroprotection against spreading toxicity can be effectively stimulated after an excitotoxic insult has been delivered, and may identify a new therapeutic window to limit brain damage. PMID:27650924

  10. Neuronal networks provide rapid neuroprotection against spreading toxicity.

    PubMed

    Samson, Andrew J; Robertson, Graham; Zagnoni, Michele; Connolly, Christopher N

    2016-09-21

    Acute secondary neuronal cell death, as seen in neurodegenerative disease, cerebral ischemia (stroke) and traumatic brain injury (TBI), drives spreading neurotoxicity into surrounding, undamaged, brain areas. This spreading toxicity occurs via two mechanisms, synaptic toxicity through hyperactivity, and excitotoxicity following the accumulation of extracellular glutamate. To date, there are no fast-acting therapeutic tools capable of terminating secondary spreading toxicity within a time frame relevant to the emergency treatment of stroke or TBI patients. Here, using hippocampal neurons (DIV 15-20) cultured in microfluidic devices in order to deliver a localized excitotoxic insult, we replicate secondary spreading toxicity and demonstrate that this process is driven by GluN2B receptors. In addition to the modeling of spreading toxicity, this approach has uncovered a previously unknown, fast acting, GluN2A-dependent neuroprotective signaling mechanism. This mechanism utilizes the innate capacity of surrounding neuronal networks to provide protection against both forms of spreading neuronal toxicity, synaptic hyperactivity and direct glutamate excitotoxicity. Importantly, network neuroprotection against spreading toxicity can be effectively stimulated after an excitotoxic insult has been delivered, and may identify a new therapeutic window to limit brain damage.

  11. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    NASA Astrophysics Data System (ADS)

    Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil

    2016-11-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on this inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison between these two approaches, and for deciding where the memristor fits best, are their impact on system performance in terms of accuracy, recognition rates, and learning.

  12. Derivation of a neural field model from a network of theta neurons.

    PubMed

    Laing, Carlo R

    2014-07-01

    Neural field models are used to study macroscopic spatiotemporal patterns in the cortex. Their derivation from networks of model neurons normally involves a number of assumptions, which may not be correct. Here we present an exact derivation of a neural field model from an infinite network of theta neurons, the canonical form of a type I neuron. We demonstrate the existence of a "bump" solution in both a discrete network of neurons and in the corresponding neural field model.
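
    For readers unfamiliar with the model, the canonical theta neuron obeys dθ/dt = (1 - cos θ) + (1 + cos θ)·I and emits a spike each time its phase crosses π. The minimal single-neuron simulation below only illustrates that dynamics; the drive, step size, and run length are arbitrary choices, and the paper's actual contribution, the exact neural-field reduction of an infinite network, is not reproduced here.

    ```python
    # Hedged sketch of the canonical theta neuron,
    # dtheta/dt = (1 - cos(theta)) + (1 + cos(theta)) * I,
    # which fires when the phase passes pi.  Drive and time step are assumptions.
    import numpy as np

    def run_theta_neuron(I=0.1, dt=0.001, T=50.0, theta0=-np.pi):
        """Euler-integrate a single theta neuron and return its spike times."""
        n = int(T / dt)
        theta, spikes = theta0, []
        for step in range(n):
            theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I)
            if theta >= np.pi:            # phase crossed pi: register a spike and wrap
                spikes.append(step * dt)
                theta -= 2.0 * np.pi
        return spikes

    spikes = run_theta_neuron(I=0.1)
    print("mean firing rate:", len(spikes) / 50.0)   # ~ sqrt(I)/pi for constant I > 0
    ```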

  13. Desynchronization in networks of globally coupled neurons with dendritic dynamics.

    PubMed

    Majtanik, Milan; Dolan, Kevin; Tass, Peter A

    2006-10-01

    Effective desynchronization can be exploited as a tool for probing the functional significance of synchronized neural activity underlying perceptual and cognitive processes or as a mild treatment for neurological disorders like Parkinson's disease. In this article we show that pulse-based desynchronization techniques, originally developed for networks of globally coupled oscillators (Kuramoto model), can be adapted to networks of coupled neurons with dendritic dynamics. Compared to the Kuramoto model, the dendritic dynamics significantly alters the response of the neuron to the stimulation. Under medium stimulation amplitude a bistability of the response of a single neuron is observed. When stimulated at some initial phases, the neuron displays only modulations of its firing, whereas at other initial phases it stops oscillating entirely. Significant alterations in the duration of stimulation-induced transients are also observed. These transients endure after the end of the stimulation and cause maximal desynchronization to occur not during the stimulation, but with some delay after the stimulation has been turned off. To account for this delayed desynchronization effect, we have designed a new calibration procedure for finding the stimulation parameters that result in optimal desynchronization. We have also developed a new desynchronization technique by low frequency entrainment. The stimulation techniques originally developed for the Kuramoto model, when using the new calibration procedure, can also be applied to networks with dendritic dynamics. However, the mechanism by which desynchronization is achieved is substantially different than for the network of Kuramoto oscillators. In particular, the addition of dendritic dynamics significantly changes the timing of the stimulation required to obtain desynchronization. We propose desynchronization stimulation for experimental analysis of synchronized neural processes and for the therapy of movement disorders.

  14. New neuronal networks involved in ethanol reinforcement.

    PubMed

    Kiianmaa, Kalervo; Hyytiä, Petri; Samson, Herman H; Engel, Jörgen A; Svensson, Lennart; Söderpalm, Bo; Larsson, Anna; Colombo, Giancarlo; Vacca, Giovanni; Finn, Deborah A; Bachtell, Ryan K; Ryabinin, Andrey E

    2003-02-01

    This article represents the proceedings of a symposium at the 2002 ISBRA/RSA meeting in San Francisco. The organizers were Kalervo Kiianmaa and Andrey E. Ryabinin. The chairs were Kalervo Kiianmaa and Jörgen A. Engel. The presentations were (1) The role of opioidergic and dopaminergic networks in ethanol-seeking behavior, by Kalervo Kiianmaa and Petri Hyytiä; (2) Interaction between the dopamine systems in the prefrontal cortex and nucleus accumbens during ethanol self-administration, by Herman H. Samson; (3) Neurochemical and behavioral studies on ethanol and nicotine interactions, by Jörgen A. Engel, Lennart Svensson, Bo Söderpalm, and Anna Larsson; (4) Involvement of the GABA(B) receptor in alcohol reinforcement in sP rats, by Giancarlo Colombo and Giovanni Vacca; (5) Neuroactive steroids and ethanol reinforcement, by Deborah A. Finn; and (6) Potential contribution of the urocortin system to regulation of alcohol self-administration, by Andrey E. Ryabinin and Ryan K. Bachtell.

  15. Protein cooperation: from neurons to networks.

    PubMed

    Volonté, Cinzia; D'Ambrosi, Nadia; Amadio, Susanna

    2008-10-01

    A constant pattern through the development of cellular life is that not only cells but also subcellular components such as proteins, either being enzymes, receptors, signaling or structural proteins, strictly cooperate. Discerning how protein cooperation originated and propagates over evolutionary time, how proteins work together to a shared outcome far beyond mere interaction, thus represents a theoretical and experimental challenge for evolutionary, molecular, and computational biology, and a timely fruition also for biotechnology. In this review, we describe some basic principles sustaining not only cellular but especially protein cooperative behavior, with particular emphasis on neurobiological systems. We illustrate experimental results and numerical models substantiating that bench research, as well as computer analysis, indeed concurs in recognizing the natural propensity of proteins to cooperate. At the cellular level, we exemplify network connectivity in the thalamus, hippocampus and basal ganglia. At the protein level, we depict numerical models about the receptosome, the protein machinery connecting neurotransmitters or growth factors to specific, unique downstream effector proteins. We primarily focus on the purinergic P2/P1 receptor systems for extracellular purine and pyrimidine nucleotides/nucleosides. By spanning concepts such as single-molecule biology to membrane computing, we seek to stimulate a scientific debate on the implications of protein cooperation in neurobiological systems.

  16. Midline thalamic neurons are differentially engaged during hippocampus network oscillations.

    PubMed

    Lara-Vásquez, Ariel; Espinosa, Nelson; Durán, Ernesto; Stockle, Marcelo; Fuentealba, Pablo

    2016-07-14

    The midline thalamus is reciprocally connected with the medial temporal lobe, where neural circuitry essential for spatial navigation and memory formation resides. Yet, little information is available on the dynamic relationship between activity patterns in the midline thalamus and medial temporal lobe. Here, we report on the functional heterogeneity of anatomically-identified thalamic neurons and the differential modulation of their activity with respect to dorsal hippocampal rhythms in the anesthetized mouse. Midline thalamic neurons expressing the calcium-binding protein calretinin, irrespective of their selective co-expression of calbindin, discharged at overall low levels, did not increase their activity during hippocampal theta oscillations, and their firing rates were inhibited during hippocampal sharp wave-ripples. Conversely, thalamic neurons lacking calretinin discharged at higher rates, increased their activity during hippocampal theta waves, but remained unaffected during sharp wave-ripples. Our results indicate that the midline thalamic system comprises at least two different classes of thalamic projection neuron, which can be partly defined by their differential engagement by hippocampal pathways during specific network oscillations that accompany distinct behavioral contexts. Thus, different midline thalamic neuronal populations might be selectively recruited to support distinct stages of memory processing, consistent with the thalamus being pivotal in the dialogue of cortical circuits.

  17. Midline thalamic neurons are differentially engaged during hippocampus network oscillations

    PubMed Central

    Lara-Vásquez, Ariel; Espinosa, Nelson; Durán, Ernesto; Stockle, Marcelo; Fuentealba, Pablo

    2016-01-01

    The midline thalamus is reciprocally connected with the medial temporal lobe, where neural circuitry essential for spatial navigation and memory formation resides. Yet, little information is available on the dynamic relationship between activity patterns in the midline thalamus and medial temporal lobe. Here, we report on the functional heterogeneity of anatomically-identified thalamic neurons and the differential modulation of their activity with respect to dorsal hippocampal rhythms in the anesthetized mouse. Midline thalamic neurons expressing the calcium-binding protein calretinin, irrespective of their selective co-expression of calbindin, discharged at overall low levels, did not increase their activity during hippocampal theta oscillations, and their firing rates were inhibited during hippocampal sharp wave-ripples. Conversely, thalamic neurons lacking calretinin discharged at higher rates, increased their activity during hippocampal theta waves, but remained unaffected during sharp wave-ripples. Our results indicate that the midline thalamic system comprises at least two different classes of thalamic projection neuron, which can be partly defined by their differential engagement by hippocampal pathways during specific network oscillations that accompany distinct behavioral contexts. Thus, different midline thalamic neuronal populations might be selectively recruited to support distinct stages of memory processing, consistent with the thalamus being pivotal in the dialogue of cortical circuits. PMID:27411890

  18. Synaptic connectivity in hippocampal neuronal networks cultured on micropatterned surfaces.

    PubMed

    Liu, Q Y; Coulombe, M; Dumm, J; Shaffer, K M; Schaffner, A E; Barker, J L; Pancrazio, J J; Stenger, D A; Ma, W

    2000-04-14

    Embryonic rat hippocampal neurons were grown on a patterned silane surface in order to organize synapse formation in a controlled manner. The surface patterns were composed of trimethoxysilylpropyl-diethylenetriamine (DETA) lines separated by tridecafluoro-1,1,2,2-tetrahydrooctyl-1-dimethylchlorosilane (13F) spaces. Pre- and post-synaptic specializations were identified by immunostaining for synapsin I and microtubule-associated protein-2 (MAP-2). Functional synaptic connections were examined by recording simultaneously from pairs of neurons using the whole-cell configuration of the patch-clamp technique. Spontaneous and evoked synaptic currents were recorded in neurons cultured for 2-14 days. The formation of functional connections was accompanied by the appearance of spontaneous synaptic currents (SSCs), which could be detected after approximately 3 days in culture in the absence of evoked synaptic currents (ESCs). ESCs were detected only after approximately 7 days in culture, mostly in the form of unidirectional synaptic connections. Other forms of synaptic connectivity, such as bidirectional and autaptic connections, were also identified. Both transient GABAergic and glutamatergic signals mediated the transmissions between communicating cells. These results demonstrate the combination of various types of synaptic connections forming simple and complex networks in neurons cultured on line (DETA)-space (13F) patterns. Finally, precisely synchronized SSCs were recorded in neuron pairs cultured on the patterns, indicating the existence of a fast-acting feedback mechanism mediated by pre-synaptic GABA(A) receptors.

  19. Genetic networks controlling the development of midbrain dopaminergic neurons

    PubMed Central

    Prakash, Nilima; Wurst, Wolfgang

    2006-01-01

    Recent data have substantially advanced our understanding of midbrain dopaminergic neuron development. Firstly, a Wnt1-regulated genetic network, including Otx2 and Nkx2-2, and a Shh-controlled genetic cascade, including Lmx1a, Msx1 and Nkx6-1, have been unravelled, acting in parallel or sequentially to establish a territory competent for midbrain dopaminergic precursor production at relatively early stages of neural development. Secondly, the same factors (Wnt1 and Lmx1a/Msx1) appear to regulate midbrain dopaminergic and/or neuronal fate specification in the postmitotic progeny of these precursors by controlling the expression of midbrain dopaminergic-specific and/or general proneural factors at later stages of neural development. For the first time, early inductive events have thus been linked to later differentiation processes in midbrain dopaminergic neuron development. Given the pivotal importance of this neuronal population for normal function of the human brain and its involvement in severe neurological and psychiatric disorders such as Parkinson's Disease, these advances open new prospects for potential stem cell-based therapies. We will summarize these new findings in the overall context of midbrain dopaminergic neuron development in this review. PMID:16825303

  20. Slow fluctuations in recurrent networks of spiking neurons

    NASA Astrophysics Data System (ADS)

    Wieland, Stefan; Bernardi, Davide; Schwalger, Tilo; Lindner, Benjamin

    2015-10-01

    Networks of fast nonlinear elements may display slow fluctuations if interactions are strong. We find a transition in the long-term variability of a sparse recurrent network of perfect integrate-and-fire neurons at which the Fano factor switches from zero to infinity and the correlation time is minimized. This corresponds to a bifurcation in a linear map arising from the self-consistency of temporal input and output statistics. More realistic neural dynamics with a leak current and refractory period lead to smoothed transitions and modified critical couplings that can be theoretically predicted.
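
    The long-term variability measure at the center of this abstract is the Fano factor of spike counts in long windows. The helper below computes it for a surrogate spike train, purely to fix the definition (count variance divided by count mean); it does not simulate the recurrent network or its transition, and the window length and surrogate rate are assumptions.

    ```python
    # Hedged helper illustrating the Fano factor of spike counts over windows,
    # computed here for a surrogate Poisson-like spike train rather than for the
    # paper's recurrent network of perfect integrate-and-fire neurons.
    import numpy as np

    def fano_factor(spike_times, window, T):
        """Variance/mean of spike counts in consecutive windows of length `window`."""
        edges = np.arange(0.0, T + window, window)
        counts, _ = np.histogram(spike_times, bins=edges)
        return counts.var() / counts.mean()

    rng = np.random.default_rng(5)
    T, rate = 1000.0, 5.0
    spikes = np.sort(rng.uniform(0.0, T, size=int(rate * T)))   # surrogate spike train
    print("Fano factor (~1 for Poisson-like counts):",
          fano_factor(spikes, window=10.0, T=T))
    ```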

  1. Modularity Induced Gating and Delays in Neuronal Networks

    PubMed Central

    Shein-Idelson, Mark; Cohen, Gilad; Hanein, Yael

    2016-01-01

    Neural networks, despite their highly interconnected nature, exhibit distinctly localized and gated activation. Modularity, a distinctive feature of neural networks, has been recently proposed as an important parameter determining the manner by which networks support activity propagation. Here we use an engineered biological model, consisting of engineered rat cortical neurons, to study the role of modular topology in gating the activity between cell populations. We show that pairs of connected modules support conditional propagation (transmitting stronger bursts with higher probability), long delays and propagation asymmetry. Moreover, large modular networks manifest diverse patterns of both local and global activation. Blocking inhibition decreased activity diversity and replaced it with highly consistent transmission patterns. By independently controlling modularity and disinhibition, experimentally and in a model, we pose that modular topology is an important parameter affecting activation localization and is instrumental for population-level gating by disinhibition. PMID:27104350

  2. Emergence and robustness of target waves in a neuronal network

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Jin, Wuyin; Ma, Jun

    2015-08-01

    Target waves in excitable media such as neuronal networks can regulate the spatial distribution and orderliness of activity by acting as a continuous pacemaker. Three different schemes are used to develop stable target waves in the network, and the potential mechanism for the emergence of target waves in the excitable media is investigated. For example, local pacing driven by external periodical forcing can generate a stable target wave in the excitable media; furthermore, heterogeneity and local feedback under self-feedback coupling are also effective in generating continuous target waves. To discern the difference between these target waves, a statistical synchronization factor is defined using mean-field theory, and artificial defects are introduced into the network to block the target wave, so that the robustness of these target waves can be assessed. However, the target waves developed from the above-mentioned schemes show different robustness to blocking by artificial defects. A regular network of Hindmarsh-Rose neurons is designed in a two-dimensional square array, target waves are induced in three different ways, and then artificial defects, which are associated with anatomical defects, are set in the network to detect the effect of defect blocking on the travelling waves. It is confirmed that the robustness of target waves to defect blocking depends on the intrinsic properties of the target waves, i.e., on the way they were generated.

  3. Synchrony in stochastically driven neuronal networks with complex topologies.

    PubMed

    Newhall, Katherine A; Shkarayev, Maxim S; Kramer, Peter R; Kovačič, Gregor; Cai, David

    2015-05-01

    We study the synchronization of a stochastically driven, current-based, integrate-and-fire neuronal model on a preferential-attachment network with scale-free characteristics and high clustering. The synchrony is induced by cascading total firing events where every neuron in the network fires at the same instant of time. We show that in the regime where the system remains in this highly synchronous state, the firing rate of the network is completely independent of the synaptic coupling, and depends solely on the external drive. On the other hand, the ability for the network to maintain synchrony depends on a balance between the fluctuations of the external input and the synaptic coupling strength. In order to accurately predict the probability of repeated cascading total firing events, we go beyond mean-field and treelike approximations and conduct a detailed second-order calculation taking into account local clustering. Our explicit analytical results are shown to give excellent agreement with direct numerical simulations for the particular preferential-attachment network model investigated.

  4. On the continuous differentiability of inter-spike intervals of synaptically connected cortical spiking neurons in a neuronal network.

    PubMed

    Kumar, Gautam; Kothare, Mayuresh V

    2013-12-01

    We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.
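
    The two conditions in this abstract can be restated compactly as below; the symbols V (membrane potential), V_th (firing threshold), t_k (time of the k-th action potential), and θ (the decision variables of the stimulating input) are notational assumptions introduced here, not necessarily the paper's notation.

    ```latex
    % Hedged restatement of the two differentiability conditions; notation assumed.
    % (1) V(t;\theta) is continuously differentiable in t and \theta on each
    %     inter-spike interval (t_{k-1}(\theta),\, t_k(\theta));
    % (2) at each action potential,
    \frac{\partial V}{\partial t}\bigg|_{t = t_k(\theta)}
    \;\neq\;
    \frac{\partial V_{\mathrm{th}}}{\partial t}\bigg|_{t = t_k(\theta)} .
    ```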

  5. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging.

    PubMed

    Patel, Tapan P; Man, Karen; Firestein, Bonnie L; Meaney, David F

    2015-03-30

    Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s-1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. Copyright © 2015. Published by Elsevier B.V.

  6. Macroscopic complexity from an autonomous network of networks of theta neurons

    PubMed Central

    Luke, Tanushree B.; Barreto, Ernest; So, Paul

    2014-01-01

    We examine the emergence of collective dynamical structures and complexity in a network of interacting populations of neuronal oscillators. Each population consists of a heterogeneous collection of globally-coupled theta neurons, which are a canonical representation of Type-1 neurons. For simplicity, the populations are arranged in a fully autonomous driver-response configuration, and we obtain a full description of the asymptotic macroscopic dynamics of this network. We find that the collective macroscopic behavior of the response population can exhibit equilibrium and limit cycle states, multistability, quasiperiodicity, and chaos, and we obtain detailed bifurcation diagrams that clarify the transitions between these macrostates. Furthermore, we show that despite the complexity that emerges, it is possible to understand the complicated dynamical structure of this system by building on the understanding of the collective behavior of a single population of theta neurons. This work is a first step in the construction of a mathematically-tractable network-of-networks representation of neuronal network dynamics. PMID:25477811

  7. Use of cortical neuronal networks for in vitro material biocompatibility testing.

    PubMed

    Charkhkar, Hamid; Frewin, Christopher; Nezafati, Maysam; Knaack, Gretchen L; Peixoto, Nathalia; Saddow, Stephen E; Pancrazio, Joseph J

    2014-03-15

    Neural interfaces aim to restore neurological function lost during disease or injury. Novel implantable neural interfaces increasingly capitalize on novel materials to achieve microscale coupling with the nervous system. Like any biomedical device, neural interfaces should consist of materials that exhibit biocompatibility in accordance with the international standard ISO10993-5, which describes in vitro testing involving fibroblasts where cytotoxicity serves as the main endpoint. In the present study, we examine the utility of living neuronal networks as functional assays for in vitro material biocompatibility, particularly for materials that comprise implantable neural interfaces. Embryonic mouse cortical tissue was cultured to form functional networks where spontaneous action potentials, or spikes, can be monitored non-invasively using a substrate-integrated microelectrode array. Taking advantage of such a platform, we exposed established positive and negative control materials to the neuronal networks in a consistent method with ISO 10993-5 guidance. Exposure to the negative controls, gold and polyethylene, did not significantly change the neuronal activity whereas the positive controls, copper and polyvinyl chloride (PVC), resulted in reduction of network spike rate. We also compared the functional assay with an established cytotoxicity measure using L929 fibroblast cells. Our findings indicate that neuronal networks exhibit enhanced sensitivity to positive control materials. In addition, we assessed functional neurotoxicity of tungsten, a common microelectrode material, and two conducting polymer formulations that have been used to modify microelectrode properties for in vivo recording and stimulation. These data suggest that cultured neuronal networks are a useful platform for evaluating the functional toxicity of materials intended for implantation in the nervous system.

  8. Convergent neuromodulation onto a network neuron can have divergent effects at the network level.

    PubMed

    Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan

    2016-04-01

    Different neuromodulators often target the same ion channel. When such modulators act on different neuron types, this convergent action can enable a rhythmic network to produce distinct outputs. Less clear are the functional consequences when two neuromodulators influence the same ion channel in the same neuron. We examine the consequences of this seeming redundancy using a mathematical model of the crab gastric mill (chewing) network. This network is activated in vitro by the projection neuron MCN1, which elicits a half-center bursting oscillation between the reciprocally-inhibitory neurons LG and Int1. We focus on two neuropeptides which modulate this network, including a MCN1 neurotransmitter and the hormone crustacean cardioactive peptide (CCAP). Both activate the same voltage-gated current (I_MI) in the LG neuron. However, I_MI-MCN1, resulting from MCN1-released neuropeptide, has phasic dynamics in its maximal conductance due to LG presynaptic inhibition of MCN1, while I_MI-CCAP retains the same maximal conductance in both phases of the gastric mill rhythm. Separation of time scales allows us to produce a 2D model from which phase plane analysis shows that, as in the biological system, I_MI-MCN1 and I_MI-CCAP primarily influence the durations of opposing phases of this rhythm. Furthermore, I_MI-MCN1 influences the rhythmic output in a manner similar to the Int1-to-LG synapse, whereas I_MI-CCAP has an influence similar to the LG-to-Int1 synapse. These results show that distinct neuromodulators which target the same voltage-gated ion channel in the same network neuron can nevertheless produce distinct effects at the network level, providing divergent neuromodulator actions on network activity.

  9. Spectral Entropy Based Neuronal Network Synchronization Analysis Based on Microelectrode Array Measurements

    PubMed Central

    Kapucu, Fikret E.; Välkki, Inkeri; Mikkonen, Jarno E.; Leone, Chiara; Lenk, Kerstin; Tanskanen, Jarno M. A.; Hyttinen, Jari A. K.

    2016-01-01

    Synchrony and asynchrony are essential aspects of the functioning of interconnected neuronal cells and networks. New information on neuronal synchronization can be expected to aid in understanding these systems. Synchronization provides insight into the functional connectivity and the spatial distribution of the information processing in the networks. Synchronization is generally studied with time domain analysis of neuronal events, or using direct frequency spectrum analysis, e.g., in specific frequency bands. However, these methods have their pitfalls. Thus, we have previously proposed a method to analyze temporal changes in the complexity of the frequency of signals originating from different network regions. The method is based on the correlation of time-varying spectral entropies (SEs). SE assesses the regularity, or complexity, of a time series by quantifying the uniformity of the frequency spectrum distribution. It has been previously employed, e.g., in electroencephalogram analysis. Here, we revisit our correlated spectral entropy method (CorSE), providing evidence of its justification, usability, and benefits. CorSE is assessed with simulations and in vitro microelectrode array (MEA) data. CorSE is first demonstrated with a specifically tailored toy simulation to illustrate how it can identify synchronized populations. To provide a form of validation, the method was tested with simulated data from integrate-and-fire model based computational neuronal networks. To demonstrate the analysis of real data, CorSE was applied on in vitro MEA data measured from rat cortical cell cultures, and the results were compared with three known event-based synchronization measures. Finally, we show the usability by tracking the development of networks in dissociated mouse cortical cell cultures. The results show that temporal correlations in frequency spectrum distributions reflect the network relations of neuronal populations. In the simulated data, CorSE unraveled the

  10. Spectral Entropy Based Neuronal Network Synchronization Analysis Based on Microelectrode Array Measurements.

    PubMed

    Kapucu, Fikret E; Välkki, Inkeri; Mikkonen, Jarno E; Leone, Chiara; Lenk, Kerstin; Tanskanen, Jarno M A; Hyttinen, Jari A K

    2016-01-01

    Synchrony and asynchrony are essential aspects of the functioning of interconnected neuronal cells and networks. New information on neuronal synchronization can be expected to aid in understanding these systems. Synchronization provides insight into the functional connectivity and the spatial distribution of the information processing in the networks. Synchronization is generally studied with time domain analysis of neuronal events, or using direct frequency spectrum analysis, e.g., in specific frequency bands. However, these methods have their pitfalls. Thus, we have previously proposed a method to analyze temporal changes in the complexity of the frequency of signals originating from different network regions. The method is based on the correlation of time-varying spectral entropies (SEs). SE assesses the regularity, or complexity, of a time series by quantifying the uniformity of the frequency spectrum distribution. It has been previously employed, e.g., in electroencephalogram analysis. Here, we revisit our correlated spectral entropy method (CorSE), providing evidence of its justification, usability, and benefits. CorSE is assessed with simulations and in vitro microelectrode array (MEA) data. CorSE is first demonstrated with a specifically tailored toy simulation to illustrate how it can identify synchronized populations. To provide a form of validation, the method was tested with simulated data from integrate-and-fire model based computational neuronal networks. To demonstrate the analysis of real data, CorSE was applied on in vitro MEA data measured from rat cortical cell cultures, and the results were compared with three known event-based synchronization measures. Finally, we show the usability by tracking the development of networks in dissociated mouse cortical cell cultures. The results show that temporal correlations in frequency spectrum distributions reflect the network relations of neuronal populations. In the simulated data, CorSE unraveled the
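    A minimal Python sketch of the correlated spectral entropy (CorSE) idea summarized in the two records above, under assumed parameters (sampling rate, window length) and synthetic two-channel data; it is not the authors' implementation.

      # CorSE sketch: correlate the time-varying spectral entropies of two channels.
      import numpy as np
      from scipy.signal import spectrogram

      def spectral_entropy_series(x, fs, nperseg=256):
          """Time-varying spectral entropy of one channel, normalized to [0, 1]."""
          f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
          p = Sxx / (Sxx.sum(axis=0, keepdims=True) + 1e-12)   # spectrum -> probabilities
          return t, -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(Sxx.shape[0])

      def corse(x, y, fs):
          """Correlation of the two channels' spectral-entropy time courses."""
          _, se_x = spectral_entropy_series(x, fs)
          _, se_y = spectral_entropy_series(y, fs)
          return np.corrcoef(se_x, se_y)[0, 1]

      # Toy usage: two channels sharing a slowly amplitude-modulated oscillation
      # should give a higher CorSE value than a channel paired with pure noise.
      fs = 1000.0
      tt = np.arange(0, 30, 1 / fs)
      shared = (1 + 0.8 * np.sin(2 * np.pi * 0.2 * tt)) * np.sin(2 * np.pi * 10 * tt)
      ch1 = shared + 0.5 * np.random.randn(tt.size)
      ch2 = shared + 0.5 * np.random.randn(tt.size)
      print(corse(ch1, ch2, fs), corse(ch1, np.random.randn(tt.size), fs))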

  11. Network feedback regulates motor output across a range of modulatory neuron activity.

    PubMed

    Spencer, Robert M; Blitz, Dawn M

    2016-06-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.

  12. The composite neuron: a realistic one-compartment Purkinje cell model suitable for large-scale neuronal network simulations.

    PubMed

    Coop, A D; Reeke, G N

    2001-01-01

    We present a simple method for the realistic description of neurons that is well suited to the development of large-scale neuronal network models where the interactions within and between neural circuits are the object of study rather than the details of dendritic signal propagation in individual cells. Referred to as the composite approach, it combines in a one-compartment model elements of both the leaky integrator cell and the conductance-based formalism of Hodgkin and Huxley (1952). Composite models treat the cell membrane as an equivalent circuit that contains ligand-gated synaptic, voltage-gated, and voltage- and concentration-dependent conductances. The time dependences of these various conductances are assumed to correlate with their spatial locations in the real cell. Thus, when viewed from the soma, ligand-gated synaptic and other dendritically located conductances can be modeled as either single alpha or double exponential functions of time, whereas, with the exception of discharge-related conductances, somatic and proximal dendritic conductances can be well approximated by simple current-voltage relationships. As an example of the composite approach to neuronal modeling we describe a composite model of a cerebellar Purkinje neuron.

  13. Pharmacodynamics of potassium channel openers in cultured neuronal networks.

    PubMed

    Wu, Calvin; V Gopal, Kamakshi; Lukas, Thomas J; Gross, Guenter W; Moore, Ernest J

    2014-06-05

    A novel class of drugs - potassium (K(+)) channel openers or activators - has recently been shown to cause anticonvulsive and neuroprotective effects by activating hyperpolarizing K(+) currents, and therefore, may show efficacy for treating tinnitus. This study presents measurements of the modulatory effects of four K(+) channel openers on the spontaneous activity and action potential waveforms of neuronal networks. The networks were derived from mouse embryonic auditory cortices and grown on microelectrode arrays. Pentylenetetrazol was used to create hyperactivity states in the neuronal networks as a first approximation for mimicking tinnitus or tinnitus-like activity. We then compared the pharmacodynamics of the four channel activators: retigabine and flupirtine (voltage-gated K(+) channel KV7 activators), and NS1619 and isopimaric acid ("big potassium" BK channel activators). The EC50 values of retigabine, flupirtine, NS1619, and isopimaric acid were 8.0, 4.0, 5.8, and 7.8 µM, respectively. The reduction of hyperactivity compared to the reference activity was significant. The present results highlight the notion of re-purposing the K(+) channel activators for reducing hyperactivity of spontaneously active auditory networks, serving as a platform for these drugs to show efficacy toward target identification, prevention, and treatment of tinnitus.
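    EC50 values such as those quoted above are typically obtained by fitting a Hill-type concentration-response curve. The sketch below shows one way to perform such a fit; the concentration and response values are entirely hypothetical placeholders, not the study's data.

      # Hill-equation fit for an inhibitory concentration-response relationship.
      import numpy as np
      from scipy.optimize import curve_fit

      def hill(c, bottom, top, ec50, n):
          """Response falls from 'top' toward 'bottom' as concentration c rises."""
          return bottom + (top - bottom) / (1.0 + (c / ec50) ** n)

      conc = np.array([0.5, 1, 2, 5, 10, 20, 50])                  # uM (hypothetical)
      resp = np.array([0.95, 0.90, 0.75, 0.55, 0.35, 0.20, 0.10])  # normalized spike rate (hypothetical)

      popt, _ = curve_fit(hill, conc, resp, p0=[0.1, 1.0, 5.0, 1.0])
      print("EC50 = %.1f uM" % popt[2])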

  14. Realistic modeling of neurons and networks: towards brain simulation.

    PubMed

    D'Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    2013-01-01

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field.

  15. Realistic modeling of neurons and networks: towards brain simulation

    PubMed Central

    D’Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    Summary Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field. PMID:24139652

  16. Information Transmission and Anderson Localization in two-dimensional networks of firing-rate neurons

    NASA Astrophysics Data System (ADS)

    Natale, Joseph; Hentschel, George

    Firing-rate networks offer a coarse model of signal propagation in the brain. Here we analyze sparse, 2D planar firing-rate networks with no synapses beyond a certain cutoff distance. Additionally, we impose Dale's Principle to ensure that each neuron makes only excitatory or inhibitory outgoing connections. Using spectral methods, we find that the number of neurons participating in excitations of the network becomes insignificant whenever the connectivity cutoff is tuned to a value near or below the average interneuron separation. Further, neural activations exceeding a certain threshold stay confined to a small region of space. This behavior is an instance of Anderson localization, a disorder-induced phase transition by which an information channel is rendered unable to transmit signals. We discuss several potential implications of localization for both local and long-range computation in the brain. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.
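    The localization described above can be probed numerically with the participation ratio of the eigenvectors of a distance-limited connectivity matrix. The sketch below uses arbitrary network parameters and is only meant to illustrate the measure, not to reproduce the study.

      # Participation ratio of eigenvectors of a sparse, distance-limited network.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      pos = rng.uniform(0, 1, size=(n, 2))            # neurons scattered in a unit square
      cutoff = 0.05                                    # connectivity cutoff distance
      dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
      W = ((dist < cutoff) & (dist > 0)).astype(float)
      sign = np.where(rng.uniform(size=n) < 0.8, 1.0, -1.0)
      W *= sign[None, :]                               # Dale's principle: each presynaptic
                                                       # column is excitatory or inhibitory
      _, vecs = np.linalg.eig(W)
      v = np.abs(vecs) ** 2
      v /= v.sum(axis=0, keepdims=True)
      participation = 1.0 / (v ** 2).sum(axis=0)       # ~n if extended, ~1 if localized
      print("median participation ratio:", float(np.median(participation.real)))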

  17. On the applicability of STDP-based learning mechanisms to spiking neuron network models

    NASA Astrophysics Data System (ADS)

    Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.

    2016-11-01

    Ways of creating a practically effective method for spiking neuron network learning, one that would be appropriate for implementation in neuromorphic hardware while remaining based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability under different STDP rules is evaluated. The usability of alternative combined learning schemes involving artificial and spiking neuron models is demonstrated on the Iris benchmark task and on the practical task of gender recognition.
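    For readers unfamiliar with the pair-based STDP rule referred to above, the following sketch shows the standard exponential weight update; the amplitudes and time constants are generic textbook values, not the ones evaluated in the paper.

      # Pair-based STDP: weight change depends on the pre/post spike-time difference.
      import numpy as np

      A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
      tau_plus, tau_minus = 20.0, 20.0   # time constants in ms

      def stdp_dw(t_pre, t_post):
          """Weight update for a single pre/post spike pair (times in ms)."""
          dt = t_post - t_pre
          if dt > 0:    # pre before post -> potentiation
              return A_plus * np.exp(-dt / tau_plus)
          else:         # post before pre -> depression
              return -A_minus * np.exp(dt / tau_minus)

      # Example: a causal pairing strengthens the synapse, an acausal one weakens it.
      print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))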

  18. PyNN: A Common Interface for Neuronal Network Simulators

    PubMed Central

    Davison, Andrew P.; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre

    2008-01-01

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. PMID:19194529
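    A minimal script in the spirit of the PyNN API described above, assuming a PyNN 0.8+ installation with the NEST backend; in principle the same code runs on other supported backends by changing only the import.

      # Build, run, and record a small network through the simulator-agnostic PyNN API.
      import pyNN.nest as sim          # e.g. pyNN.neuron or pyNN.brian2 would also work

      sim.setup(timestep=0.1)          # ms
      noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0), label="input")
      cells = sim.Population(100, sim.IF_cond_exp(tau_m=20.0), label="excitatory")
      sim.Projection(noise, cells, sim.OneToOneConnector(),
                     synapse_type=sim.StaticSynapse(weight=0.004, delay=1.0),
                     receptor_type="excitatory")
      cells.record("spikes")
      sim.run(500.0)                   # ms
      spikes = cells.get_data()        # a Neo Block holding the recorded spike trains
      sim.end()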

  19. Sparse Gamma Rhythms Arising through Clustering in Adapting Neuronal Networks

    PubMed Central

    Kilpatrick, Zachary P.; Ermentrout, Bard

    2011-01-01

    Gamma rhythms (30–100 Hz) are an extensively studied synchronous brain state responsible for a number of sensory, memory, and motor processes. Experimental evidence suggests that fast-spiking interneurons are responsible for carrying the high frequency components of the rhythm, while regular-spiking pyramidal neurons fire sparsely. We propose that a combination of spike frequency adaptation and global inhibition may be responsible for this behavior. Excitatory neurons form several clusters that fire every few cycles of the fast oscillation. This is first shown in a detailed biophysical network model and then analyzed thoroughly in an idealized model. We exploit the fact that the timescale of adaptation is much slower than that of the other variables. Singular perturbation theory is used to derive an approximate periodic solution for a single spiking unit. This is then used to predict the relationship between the number of clusters arising spontaneously in the network as it relates to the adaptation time constant. We compare this to a complementary analysis that employs a weak coupling assumption to predict the first Fourier mode to destabilize from the incoherent state of an associated phase model as the external noise is reduced. Both approaches predict the same scaling of cluster number with respect to the adaptation time constant, which is corroborated in numerical simulations of the full system. Thus, we develop several testable predictions regarding the formation and characteristics of gamma rhythms with sparsely firing excitatory neurons. PMID:22125486

  20. A Neuronal Network Model for Pitch Selectivity and Representation

    PubMed Central

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding its cellular mechanism is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch, despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, such as iterated ripple noise, and accounts for their multiple pitch perceptions. PMID:27378900

  1. Collapse of ordered spatial pattern in neuronal network

    NASA Astrophysics Data System (ADS)

    Song, Xinlin; Wang, Chunni; Ma, Jun; Ren, Guodong

    2016-06-01

    Spatiotemporal systems can develop regular spatial patterns through self-organization or under external periodic pacing, while external attack or intrinsic collapse can destroy the regularity of the spatial system. For example, the electrical activities of neurons in the nervous system show regular spatial distributions under appropriate coupling and connection. It is believed that distinct regularity can be induced in the media by appropriate forcing or feedback, while a diffusive collapse induced by continuous destruction can cause breakdown of the media. In this paper, the collapse of ordered spatial distribution is investigated in a regular network of neurons (Morris-Lecar, Hindmarsh-Rose) on a two-dimensional array. A stable target wave develops and a regular spatial distribution emerges when appropriate external forcing with diversity is imposed or heterogeneity (parameter diversity in space) is generated. The diffusive invasion can be produced by continuous parameter collapse or switching in a local area; e.g., diffusive poisoning of potassium ion channels in Morris-Lecar neurons causes a breakdown in the conductance of those channels. It is found that target wave-dominated regularity can be suppressed when the collapsed area diffuses at random. Statistical correlation functions for sampled nodes (neurons) are defined to detect the collapse of the ordered state by time series analysis.

  2. Persistent dynamic attractors in activity patterns of cultured neuronal networks

    NASA Astrophysics Data System (ADS)

    Wagenaar, Daniel A.; Nadasdy, Zoltan; Potter, Steve M.

    2006-05-01

    Three remarkable features of the nervous system—complex spatiotemporal patterns, oscillations, and persistent activity—are fundamental to such diverse functions as stereotypical motor behavior, working memory, and awareness. Here we report that cultured cortical networks spontaneously generate a hierarchical structure of periodic activity with a strongly stereotyped population-wide spatiotemporal structure demonstrating all three fundamental properties in a recurring pattern. During these “superbursts,” the firing sequence of the culture periodically converges to a dynamic attractor orbit. Precursors of oscillations and persistent activity have previously been reported as intrinsic properties of the neurons. However, complex spatiotemporal patterns that are coordinated in a large population of neurons and persist over several hours—and thus are capable of representing and preserving information—cannot be explained by known oscillatory properties of isolated neurons. Instead, the complexity of the observed spatiotemporal patterns implies large-scale self-organization of neurons interacting in a precise temporal order even in vitro, in cultures usually considered to have random connectivity.

  3. APPLICATIONS OF LASERS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Principles for construction of a Hopfield neuron network with the aid of volume echo holograms

    NASA Astrophysics Data System (ADS)

    Manykin, É. A.; Belov, M. N.

    1991-02-01

    A theoretical investigation is reported of the properties of the photon echo as a method for dynamic holography of three-dimensional media. It is shown that an optical Hopfield neuron network can be constructed on this basis. Potential applications of the photon echo in other tasks involving neuron networks in optics are considered.
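    As a software analogue of the associative dynamics that the holographic scheme above is intended to realize, here is a small generic Hopfield-network sketch (Hebbian storage plus sign-threshold updates); it illustrates the network model only, not the photon-echo implementation.

      # Generic Hopfield associative memory: store two patterns, recall from a noisy cue.
      import numpy as np

      patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                           [1, 1, 1, 1, -1, -1, -1, -1]])
      n = patterns.shape[1]
      W = (patterns.T @ patterns) / n                  # Hebbian weight matrix
      np.fill_diagonal(W, 0)

      state = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # pattern 0 with one bit flipped
      for _ in range(10):                              # synchronous sign updates
          state = np.sign(W @ state)
          state[state == 0] = 1
      print(state)                                     # converges to the stored pattern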

  4. Quantification of zinc toxicity using neuronal networks on microelectrode arrays.

    PubMed

    Parviz, M; Gross, G W

    2007-05-01

    Murine neuronal networks, derived from embryonic frontal cortex (FC) tissue grown on microelectrode arrays, were used to investigate zinc toxicity at concentrations ranging from 20 to 2000 microM total zinc acetate added to the culture medium. Continual multi-channel recording of spontaneous action potential generation allowed a quantitative analysis of the temporal evolution of network spike activity generation at specific zinc acetate concentrations. Cultures responded with immediate concentration-dependent excitation lasting from 5 to 50 min and consisting of increased spiking and enhanced, coordinated bursting, followed by irreversible activity decay. The time to 50% and 90% activity loss was concentration dependent, highly reproducible, and formed linear functions in log-log plots. Above 100 microM total zinc acetate, the activity loss was associated with massive cell swelling, blebbing, and even vigorous neuronal cell lysing. Glia showed stress, but did not participate in the extensive cell swelling. Network activity loss generally preceded morphological changes. Cultures pretreated with the GABA(A) receptor antagonists bicuculline (40 microM) and picrotoxin (1 mM) lacked the initial excitation phase. This suggests that zinc-induced excitation may be mediated by interference with GABA inhibition. Partial network protection was achieved by stopping spontaneous activity with either tetrodotoxin (200 nM) or lidocaine (250 microM). However, recovery was not complete and slow deterioration of network activity continued over 6-h periods. Removal of zinc by early medium changes showed that irreversible, catastrophic network failure develops in a concentration-dependent time window between 50% and 90% activity loss.

  5. Echo state networks with filter neurons and a delay&sum readout.

    PubMed

    Holzmann, Georg; Hauser, Helmut

    2010-03-01

    Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation. Here we suggest two enhancements of this network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series. Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks. It is shown in commonly used benchmark tasks and real-world examples, that this new structure is able to significantly outperform standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling. Copyright 2009 Elsevier Ltd. All rights reserved.
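    The sketch below implements the baseline that the filter-neuron and delay&sum extensions above build on: a standard echo state network with leaky-integrator (first-order IIR) reservoir units and a linear least-squares readout. Hyperparameters are arbitrary illustrative choices.

      # Minimal leaky echo state network trained for one-step-ahead prediction.
      import numpy as np

      rng = np.random.default_rng(1)
      n_res, leak, rho = 200, 0.3, 0.9
      W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
      W = rng.standard_normal((n_res, n_res)) * (rng.uniform(size=(n_res, n_res)) < 0.05)
      W *= rho / max(abs(np.linalg.eigvals(W)))        # rescale to spectral radius rho

      u = np.sin(np.linspace(0, 60, 3000))[:, None]    # toy input signal
      x, states = np.zeros((n_res, 1)), []
      for t in range(len(u) - 1):
          # leaky (first-order IIR) reservoir update
          x = (1 - leak) * x + leak * np.tanh(W_in * u[t] + W @ x)
          states.append(x.ravel())
      X = np.array(states[200:])                       # discard washout
      Y = u[201:]                                      # one-step-ahead targets
      W_out = np.linalg.lstsq(X, Y, rcond=None)[0]     # linear readout (ridge is common too)
      print("training MSE:", float(np.mean((X @ W_out - Y) ** 2)))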

  6. Neuronal response impedance mechanism implementing cooperative networks with low firing rates and μs precision

    PubMed Central

    Vardi, Roni; Goldental, Amir; Marmari, Hagar; Brama, Haya; Stern, Edward A.; Sardi, Shira; Sabo, Pinhas; Kanter, Ido

    2015-01-01

    Realizations of low firing rates in neural networks usually require globally balanced distributions among excitatory and inhibitory links, while feasibility of temporal coding is limited by neuronal millisecond precision. We show that cooperation, governing global network features, emerges through nodal properties, as opposed to link distributions. Using in vitro and in vivo experiments we demonstrate microsecond precision of neuronal response timings under low stimulation frequencies, whereas moderate frequencies result in a chaotic neuronal phase characterized by degraded precision. Above a critical stimulation frequency, which varies among neurons, response failures were found to emerge stochastically such that the neuron functions as a low pass filter, saturating the average inter-spike-interval. This intrinsic neuronal response impedance mechanism leads to cooperation on a network level, such that firing rates are suppressed toward the lowest neuronal critical frequency simultaneously with neuronal microsecond precision. Our findings open up opportunities of controlling global features of network dynamics through few nodes with extreme properties. PMID:26124707

  7. On the properties of input-to-output transformations in neuronal networks.

    PubMed

    Olypher, Andrey; Vaillant, Jean

    2016-06-01

    Information processing in neuronal networks in certain important cases can be considered as maps of binary vectors, where ones (spikes) and zeros (no spikes) of input neurons are transformed into spikes and no spikes of output neurons. A simple but fundamental characteristic of such a map is how it transforms distances between input vectors into distances between output vectors. We advanced earlier known results by finding an exact solution to this problem for McCulloch-Pitts neurons. The obtained explicit formulas allow for detailed analysis of how the network connectivity and neuronal excitability affect the transformation of distances in neurons. As an application, we explored a simple model of information processing in the hippocampus, a brain area critically implicated in learning and memory. We found network connectivity and neuronal excitability parameter values that optimize discrimination between similar and distinct inputs. A decrease of neuronal excitability, which in biological neurons may be associated with decreased inhibition, impaired the optimality of discrimination.
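    An empirical illustration of the question analyzed above, i.e., how a layer of McCulloch-Pitts threshold neurons maps Hamming distances between binary inputs into distances between binary outputs; the connection probability and threshold used here are arbitrary stand-ins for the connectivity and excitability parameters discussed in the record.

      # Distance transformation through one layer of McCulloch-Pitts neurons.
      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_out, p_conn, theta = 100, 100, 0.1, 3    # threshold as an excitability proxy
      C = (rng.uniform(size=(n_out, n_in)) < p_conn).astype(int)

      def layer(x):                                    # McCulloch-Pitts map
          return (C @ x >= theta).astype(int)

      x = (rng.uniform(size=n_in) < 0.2).astype(int)
      for d_in in [1, 5, 10, 20]:
          flip = rng.choice(n_in, size=d_in, replace=False)
          y = x.copy()
          y[flip] ^= 1                                 # input at Hamming distance d_in
          d_out = np.sum(layer(x) != layer(y))
          print(f"input distance {d_in:2d} -> output distance {d_out}")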

  8. Self-Organized Criticality in Developing Neuronal Networks

    PubMed Central

    Tetzlaff, Christian; Okujeni, Samora; Egert, Ulrich; Wörgötter, Florentin; Butz, Markus

    2010-01-01

    Recently evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales and this is beneficial with respect to information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks will reach and stabilize criticality. Here we monitor the development between 13 and 95 days in vitro (DIV) of cortical cell cultures (n = 20) and find four different phases, related to their morphological maturation: An initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical one (≈36 DIV) until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, the synaptic development in the model is determined by the drive of the neurons to adjust their connectivity for reaching, on average, firing rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro. PMID:21152008
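    Criticality statements of this kind rest on the standard neuronal-avalanche analysis: bin the spike raster, define avalanches as runs of non-empty bins, and examine whether the avalanche size distribution follows a power law. A short sketch with a synthetic (non-critical) raster follows; the binning and rates are arbitrary assumptions.

      # Avalanche-size analysis of a binned spike raster (synthetic Poisson stand-in).
      import numpy as np

      rng = np.random.default_rng(4)
      spikes_per_bin = rng.poisson(0.5, size=100000)   # stand-in for binned MEA spike counts

      sizes, current = [], 0
      for s in spikes_per_bin:
          if s > 0:
              current += s
          elif current > 0:                            # an empty bin closes an avalanche
              sizes.append(current)
              current = 0

      vals, counts = np.unique(np.array(sizes), return_counts=True)
      # In a critical network, log(count) vs log(size) is approximately linear with a
      # slope near -1.5; this Poisson surrogate is not expected to be critical.
      print(np.polyfit(np.log(vals), np.log(counts / counts.sum()), 1))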

  9. Euchromatin histone methyltransferase 1 regulates cortical neuronal network development

    PubMed Central

    Bart Martens, Marijn; Frega, Monica; Classen, Jessica; Epping, Lisa; Bijvank, Elske; Benevento, Marco; van Bokhoven, Hans; Tiesinga, Paul; Schubert, Dirk; Nadif Kasri, Nael

    2016-01-01

    Heterozygous mutations or deletions in the human Euchromatin histone methyltransferase 1 (EHMT1) gene cause Kleefstra syndrome, a neurodevelopmental disorder that is characterized by autistic-like features and severe intellectual disability (ID). Neurodevelopmental disorders including ID and autism may be related to deficits in activity-dependent wiring of brain circuits during development. Although Kleefstra syndrome has been associated with dendritic and synaptic defects in mice and Drosophila, little is known about the role of EHMT1 in the development of cortical neuronal networks. Here we used micro-electrode arrays and whole-cell patch-clamp recordings to investigate the impact of EHMT1 deficiency at the network and single cell level. We show that EHMT1 deficiency impaired neural network activity during the transition from uncorrelated background action potential firing to synchronized network bursting. Spontaneous bursting and excitatory synaptic currents were transiently reduced, whereas miniature excitatory postsynaptic currents were not affected. Finally, we show that loss of function of EHMT1 ultimately resulted in less regular network bursting patterns later in development. These data suggest that the developmental impairments observed in EHMT1-deficient networks may result in a temporal misalignment between activity-dependent developmental processes thereby contributing to the pathophysiology of Kleefstra syndrome. PMID:27767173

  10. Self-organized criticality in developing neuronal networks.

    PubMed

    Tetzlaff, Christian; Okujeni, Samora; Egert, Ulrich; Wörgötter, Florentin; Butz, Markus

    2010-12-02

    Recently evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales and this is beneficial with respect to information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks will reach and stabilize criticality. Here we monitor the development between 13 and 95 days in vitro (DIV) of cortical cell cultures (n = 20) and find four different phases, related to their morphological maturation: An initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical one (≈36 DIV) until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, the synaptic development in the model is determined by the drive of the neurons to adjust their connectivity for reaching, on average, firing rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.

  11. Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity.

    PubMed

    Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2016-05-23

    Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective at qualitatively changing the network activity in large regions of the network state space. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis-like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points.

  12. Energy substrates that fuel fast neuronal network oscillations.

    PubMed

    Galow, Lukas V; Schneider, Justus; Lewen, Andrea; Ta, Thuy-Truc; Papageorgiou, Ismini E; Kann, Oliver

    2014-01-01

    Fast neuronal network oscillations in the gamma-frequency band (30–100 Hz) provide a fundamental mechanism of complex neuronal information processing in the hippocampus and neocortex of mammals. Gamma oscillations have been implicated in higher brain functions such as sensory perception, motor activity, and memory formation. The oscillations emerge from precise synaptic interactions between excitatory principal neurons such as pyramidal cells and inhibitory GABAergic interneurons, and they are associated with high energy expenditure. However, both the energy substrates and the metabolic pathways that are capable of powering cortical gamma oscillations have been less well defined. Here, we investigated the energy sources fueling persistent gamma oscillations in the CA3 subfield of organotypic hippocampal slice cultures of the rat. This preparation permits superior oxygen supply as well as fast application of glucose, glycolytic metabolites or drugs such as a glycogen phosphorylase inhibitor during extracellular recordings of the local field potential. Our findings are: (i) gamma oscillations persist in the presence of glucose (10 mmol/L) for greater than 60 min in slice cultures, while (ii) lowering glucose levels (2.5 mmol/L) significantly reduces the amplitude of the oscillation. (iii) Gamma oscillations are absent at a low concentration of lactate (2 mmol/L). (iv) Gamma oscillations persist at a high concentration (20 mmol/L) of either lactate or pyruvate, albeit showing significant reductions in the amplitude. (v) The breakdown of glycogen significantly delays the decay of gamma oscillations during glucose deprivation. However, when glucose is present, the turnover of glycogen is not essential to sustain gamma oscillations. Our study shows that fast neuronal network oscillations can be fueled by different energy-rich substrates, with glucose being most effective.

  13. Energy substrates that fuel fast neuronal network oscillations

    PubMed Central

    Galow, Lukas V.; Schneider, Justus; Lewen, Andrea; Ta, Thuy-Truc; Papageorgiou, Ismini E.; Kann, Oliver

    2014-01-01

    Fast neuronal network oscillations in the gamma-frequency band (30–100 Hz) provide a fundamental mechanism of complex neuronal information processing in the hippocampus and neocortex of mammals. Gamma oscillations have been implicated in higher brain functions such as sensory perception, motor activity, and memory formation. The oscillations emerge from precise synaptic interactions between excitatory principal neurons such as pyramidal cells and inhibitory GABAergic interneurons, and they are associated with high energy expenditure. However, both the energy substrates and the metabolic pathways that are capable of powering cortical gamma oscillations have been less well defined. Here, we investigated the energy sources fueling persistent gamma oscillations in the CA3 subfield of organotypic hippocampal slice cultures of the rat. This preparation permits superior oxygen supply as well as fast application of glucose, glycolytic metabolites or drugs such as a glycogen phosphorylase inhibitor during extracellular recordings of the local field potential. Our findings are: (i) gamma oscillations persist in the presence of glucose (10 mmol/L) for greater than 60 min in slice cultures, while (ii) lowering glucose levels (2.5 mmol/L) significantly reduces the amplitude of the oscillation. (iii) Gamma oscillations are absent at a low concentration of lactate (2 mmol/L). (iv) Gamma oscillations persist at a high concentration (20 mmol/L) of either lactate or pyruvate, albeit showing significant reductions in the amplitude. (v) The breakdown of glycogen significantly delays the decay of gamma oscillations during glucose deprivation. However, when glucose is present, the turnover of glycogen is not essential to sustain gamma oscillations. Our study shows that fast neuronal network oscillations can be fueled by different energy-rich substrates, with glucose being most effective. PMID:25538552

  14. To Break or to Brake Neuronal Network Accelerated by Ammonium Ions?

    PubMed Central

    Dynnik, Vladimir V.; Kononov, Alexey V.; Sergeev, Alexander I.; Teplov, Iliya Y.; Tankanag, Arina V.; Zinchenko, Valery P.

    2015-01-01

    Purpose: The aim of the present study was to investigate the effects of ammonium ions on in vitro neuronal network activity and to search for alternative methods of preventing acute ammonia neurotoxicity. Methods: Rat hippocampal neuron and astrocyte co-cultures in vitro, fluorescence microscopy, and perforated patch clamp were used to monitor the changes in intracellular Ca2+ and membrane potential produced by ammonium ions and various modulators in the cells implicated in neural networks. Results: Low concentrations of NH4Cl (0.1–4 mM) produce short temporal effects on network activity. Application of 5–8 mM NH4Cl invariably transforms diverse network firing regimens into identical burst patterns, characterized by substantial neuronal membrane depolarization at the plateau phase of the potential and high-amplitude Ca2+ oscillations; raises the oscillation frequency and the period-averaged Ca2+ level in all cells implicated in the network; results in the appearance of a group of "run out" cells with high intracellular Ca2+ and steadily diminishing oscillation amplitudes; and increases astrocyte Ca2+ signalling, characterized by the appearance of groups of cells with increased intracellular Ca2+ levels and/or chaotic Ca2+ oscillations. Accelerated network activity may be suppressed by blockade of NMDA or AMPA/kainate receptors or by overactivation of AMPA/kainate receptors. Ammonia still activates neuronal firing in the presence of the GABA(A) receptor antagonist bicuculline, indicating that a "disinhibition phenomenon" is not implicated in the mechanisms of network acceleration. Network activity may also be slowed down by glycine, agonists of metabotropic inhibitory receptors, betaine, L-carnitine, L-arginine, etc. Conclusions: The obtained results demonstrate that ammonium ions accelerate neuronal network firing, implicating ionotropic glutamate receptors, while preserving the activities of a group of inhibitory ionotropic and metabotropic receptors. This may mean that ammonia

  15. 96-well electroporation method for transfection of mammalian central neurons.

    PubMed

    Buchser, William J; Pardinas, Jose R; Shi, Yan; Bixby, John L; Lemmon, Vance P

    2006-11-01

    Manipulating gene expression in primary neurons has been a goal for many scientists for over 20 years. Vertebrate central nervous system neurons are classically difficult to transfect. Most lipid reagents are inefficient and toxic to the cells, and time-consuming methods such as viral infections are often required to obtain better efficiencies. We have developed an efficient method for the transfection of cerebellar granule neurons and hippocampal neurons with standard plasmid vectors. Using 96-well electroporation plates, square-wave pulses can introduce 96 different plasmids into neurons in a single step. The procedure results in greater than 20% transfection efficiencies and requires only simple solutions of nominal cost. In addition to enabling the rapid optimization of experimental protocols with multiple parameters, this procedure enables the use of high content screening methods to characterize neuronal phenotypes.

  16. 96-Well electroporation method for transfection of mammalian central neurons

    PubMed Central

    Buchser, William J.; Pardinas, Jose R.; Shi, Yan; Bixby, John L.; Lemmon, Vance P.

    2008-01-01

    Manipulating gene expression in primary neurons has been a goal for many scientists for over 20 years. Vertebrate central nervous system neurons are classically difficult to transfect. Most lipid reagents are inefficient and toxic to the cells, and time-consuming methods such as viral infections are often required to obtain better efficiencies. We have developed an efficient method for the transfection of cerebellar granule neurons and hippocampal neurons with standard plasmid vectors. Using 96-well electroporation plates, square-wave pulses can introduce 96 different plasmids into neurons in a single step. The procedure results in greater than 20% transfection efficiencies and requires only simple solutions of nominal cost. In addition to enabling the rapid optimization of experimental protocols with multiple parameters, this procedure enables the use of high content screening methods to characterize neuronal phenotypes. PMID:17140120

  17. Interaction and intelligence in living neuronal networks interfaced with moving robot

    NASA Astrophysics Data System (ADS)

    Kudoh, Suguru N.; Taguchi, Takahisa

    2006-01-01

    Neurons form complex networks, and it seems that living neuronal networks can perform certain types of information processing. We are interested in intelligence autonomously formed in vitro. The most important feature of the two-dimensional cultured neural network is that it is a system in which information processing is carried out autonomously. We reported previously that functional connections were dynamically modified by synaptic potentiation and that this process may be required for reorganization of functional groups of neurons. Such neuron assemblies are critical for information processing in the brain. Certain types of feedback stimulation caused suppression of spontaneous network electrical activity and drastic re-organization of the functional connections between neurons when these activities were initially almost synchronized. The result suggests that neurons in dissociated culture autonomously re-organized their functional neuronal networks as they interacted with their environment. The spatio-temporal pattern of activity in the networks may be a reflection of their external environment. We also interfaced the cultured neuronal network with a moving robot. Planar microelectrodes can be used to detect neuronal electrical signals from the living neuronal network cultured on a two-dimensional electrode array, and the speed of the robot's actuators was determined by these detected signals. Our goal is the reconstruction of a neural network that can perform "thinking" in the dissociated culture system.

  18. Collective behavior of interacting locally synchronized oscillations in neuronal networks

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2012-10-01

    Local circuits in the cortex and hippocampus are endowed with resonant, oscillatory firing properties which underlie oscillations in various frequency ranges (e.g. the gamma range) frequently observed in local field potentials and in electroencephalography. Synchronized oscillations are thought to play important roles in information binding in the brain. This paper addresses the collective behavior of interacting locally synchronized oscillations in realistic neural networks. A network of five neurons is proposed in order to produce locally synchronized oscillations. The neuron models are of Hindmarsh-Rose type with electrical and/or chemical couplings. We construct large-scale models using networks of such units which capture the essential features of the dynamics of cells and their connectivity patterns. The profile of the spike synchronization is then investigated considering different model parameters such as the strength and ratio of excitatory/inhibitory connections. We also show that transmission time-delay might enhance the spike synchrony. The influence of spike-timing-dependent plasticity on spike synchronization is also studied.
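    A minimal sketch of the kind of unit used in the network above: two Hindmarsh-Rose neurons with diffusive (electrical) coupling, integrated with a simple Euler step. Parameter values are standard textbook choices, not necessarily those of the paper.

      # Two electrically coupled Hindmarsh-Rose neurons (Euler integration).
      import numpy as np

      a, b, c, d, s, xr, r, I = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6, 0.006, 3.2
      g = 0.2                                          # gap-junction coupling strength

      def hr_step(state, dt=0.01):
          x, y, z = state.reshape(3, 2)
          coup = g * (x[::-1] - x)                     # diffusive coupling within the pair
          dx = y - a * x**3 + b * x**2 - z + I + coup
          dy = c - d * x**2 - y
          dz = r * (s * (x - xr) - z)
          return state + dt * np.concatenate([dx, dy, dz])

      state = np.array([0.1, -0.3, 0.0, 0.0, 3.0, 3.2])  # [x1, x2, y1, y2, z1, z2]
      for _ in range(100000):                            # run past the initial transient
          state = hr_step(state)
      print("membrane variables x1, x2:", state[:2])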

  19. Robust spatial memory maps in flickering neuronal networks: a topological model

    NASA Astrophysics Data System (ADS)

    Dabaghian, Yuri; Babichev, Andrey; Memoli, Facundo; Chowdhury, Samir; Rice University Collaboration; Ohio State University Collaboration

    It is widely accepted that hippocampal place cells provide a substrate for the neuronal representation of the environment, the "cognitive map". However, the hippocampal network, like any other network in the brain, is transient: thousands of hippocampal neurons die every day, and the connections formed by these cells constantly change due to various forms of synaptic plasticity. What then explains the remarkable reliability of our spatial memories? We propose a computational approach to answering this question based on a couple of insights. First, we propose that the hippocampal cognitive map is fundamentally topological, and hence it is amenable to analysis by topological methods. We then apply several novel methods from homology theory to understand how dynamic connections between cells influence the speed and reliability of spatial learning. We simulate the rat's exploratory movements through different environments and study how topological invariants of these environments arise in a network of simulated neurons with "flickering" connectivity. We find that, despite transient connectivity, the network of place cells produces a stable representation of the topology of the environment.

  20. Mean-field equations for neuronal networks with arbitrary degree distributions

    NASA Astrophysics Data System (ADS)

    Nykamp, Duane Q.; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (the Erdős-Rényi (ER) network) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.

  1. Neuronal Networks during Burst Suppression as Revealed by Source Analysis

    PubMed Central

    Reinicke, Christine; Moeller, Friederike; Anwar, Abdul Rauf; Mideksa, Kidist Gebremariam; Pressler, Ronit; Deuschl, Günther; Stephani, Ulrich; Siniatchkin, Michael

    2015-01-01

    Introduction: Burst-suppression (BS) is an electroencephalography (EEG) pattern consisting of alternating periods of slow waves of high amplitude (burst) and periods of so-called flat EEG (suppression). It is generally associated with coma of various etiologies (hypoxia, drug-related intoxication, hypothermia, and childhood encephalopathies, but also anesthesia). Animal studies suggest that both the cortex and the thalamus are involved in the generation of BS. However, very little is known about the mechanisms of BS in humans. The aim of this study was to identify the neuronal network underlying both burst and suppression phases using source reconstruction and analysis of functional and effective connectivity in EEG. Material/Methods: Dynamic imaging of coherent sources (DICS) was applied to EEG segments of 13 neonates and infants with a burst and suppression EEG pattern. The brain area with the strongest power in the analyzed frequency range (1–4 Hz) was defined as the reference region. DICS was used to compute the coherence between this reference region and the entire brain. The renormalized partial directed coherence (RPDC) was used to describe the informational flow between the identified sources. Results/Conclusion: Delta activity during the burst phases was associated with coherent sources in the thalamus and brainstem as well as bilateral sources in cortical regions, mainly frontal and parietal, whereas suppression phases were associated with coherent sources only in cortical regions. Results of the RPDC analyses showed an upward informational flow from the brainstem towards the thalamus and from the thalamus to cortical regions, which was absent during the suppression phases. These findings may support the theory that a "cortical deafferentiation" between the cortex and sub-cortical structures exists, especially in suppression phases compared with burst phases, in burst-suppression EEGs. Such a deafferentiation may play a role in the poor neurological outcome of

  2. Pharmacological characterization of cultivated neuronal networks: relevance to synaptogenesis and synaptic connectivity.

    PubMed

    Verstraelen, Peter; Pintelon, Isabel; Nuydens, Rony; Cornelissen, Frans; Meert, Theo; Timmermans, Jean-Pierre

    2014-07-01

    Mental disorders, such as schizophrenia or Alzheimer's disease, are associated with impaired synaptogenesis and/or synaptic communication. During development, neurons assemble into neuronal networks, the primary supracellular mediators of information processing. In addition to the orchestrated activation of genetic programs, spontaneous electrical activity and associated calcium signaling have been shown to be critically involved in the maturation of such neuronal networks. We established an in vitro model that recapitulates the maturation of neuronal networks, including spontaneous electrical activity. Upon plating, mouse primary hippocampal neurons grow neurites and interconnect via synapses to form a dish-wide neuronal network. Via live cell calcium imaging, we identified a limited period of time in which the spontaneous activity synchronizes across neurons, indicative of the formation of a functional network. After establishment of network activity, the neurons grow dendritic spines, the density of which was used as a morphological readout for neuronal maturity and connectivity. Hence, quantification of neurite outgrowth, synapse density, spontaneous neuronal activity, and dendritic spine density allowed us to study neuronal network maturation from the day of plating until mature neuronal networks were present. Via acute pharmacological intervention, we show that synchronized network activity is mediated by the NMDA-R. The balance between kynurenic and quinolinic acid, both neuro-active intermediates in the tryptophan/kynurenine pathway, was shown to be decisive for the maintenance of network activity. Chronic modulation of the neurotrophic support influenced the network formation and revealed the extreme sensitivity of calcium imaging for detecting subtle alterations in neuronal physiology. Given the reproducible cultivation in a 96-well setup in combination with fully automated analysis of the calcium recordings, this approach can be used to build a high

  3. Automated Detection of Soma Location and Morphology in Neuronal Network Cultures

    PubMed Central

    Ozcan, Burcin; Negi, Pooran; Laezza, Fernanda; Papadakis, Manos; Labate, Demetrio

    2015-01-01

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons on large data sets is required. Existing algorithms are not very efficient when applied to the analysis of confocal image stacks of neuronal cultures. In addition to the usual difficulties associated with the processing of fluorescent images, these types of stacks contain a small number of images so that only a small number of pixels are available along the z-direction and it is challenging to apply conventional 3D filters. The algorithm we present in this paper applies a number of innovative ideas from the theory of directional multiscale representations and involves the following steps: (i) image segmentation based on support vector machines with specially designed multiscale filters; (ii) soma extraction and separation of contiguous somas, using a combination of level set method and directional multiscale filters. We also present an approach to extract the soma’s surface morphology using the 3D shearlet transform. Extensive numerical experiments show that our algorithms are computationally efficient and highly accurate in segmenting the somas and separating contiguous ones. The algorithms presented in this paper will facilitate the development of a high-throughput quantitative platform for the study of neuronal networks for HCS applications. PMID:25853656

  4. Fuzzy operators and cyclic behavior in formal neuronal networks

    NASA Technical Reports Server (NTRS)

    Labos, E.; Holden, A. V.; Laczko, J.; Orzo, L.; Labos, A. S.

    1992-01-01

    Formal neuronal networks (FNNs), which are composed of threshold gates, make use of the unit step function. It is regarded as a degenerate distribution function (DDF) and will be referred to here as a non-fuzzy threshold operator (nFTO). Special networks of this kind generating long cycles of states are modified by introduction of fuzzy threshold operators (FTO), i.e., non-degenerate distribution functions (nDDF). The cyclic behavior of the new nets is compared with that of the original ones. The interconnection matrix and threshold values are not modified. It is concluded that the original long cycles change into fixed points and short cycles and, as the computer simulations demonstrate, aperiodic motion associated with chaotic behavior appears. The emergence of these changes depends on the steepness of the threshold operators.
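
    As an illustration of the idea (our own minimal sketch, not the authors' model), the snippet below updates a small threshold-gate network synchronously with either a hard unit-step threshold (nFTO) or a stochastic sigmoidal threshold (FTO) and compares how many distinct states each orbit visits. The interconnection matrix, thresholds, and steepness parameter are arbitrary placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 8
      W = rng.choice([-1, 0, 1], size=(n, n))       # fixed interconnection matrix
      theta = np.zeros(n)                           # fixed thresholds (left unchanged, as in the abstract)

      def nfto(h):
          # non-fuzzy threshold operator: unit step (degenerate distribution function)
          return (h >= 0).astype(int)

      def fto(h, beta=4.0):
          # fuzzy threshold operator: fire with probability given by a smooth CDF
          p = 1.0 / (1.0 + np.exp(-beta * h))
          return (rng.random(h.shape) < p).astype(int)

      def orbit(x, op, steps=200):
          states = [tuple(x)]
          for _ in range(steps):
              x = op(W @ x - theta)
              states.append(tuple(x))
          return states

      x0 = rng.integers(0, 2, n)
      print("distinct states, hard threshold :", len(set(orbit(x0.copy(), nfto))))
      print("distinct states, fuzzy threshold:", len(set(orbit(x0.copy(), fto))))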

  5. A neuronal network of mitochondrial dynamics regulates metastasis

    PubMed Central

    Caino, M. Cecilia; Seo, Jae Ho; Aguinaldo, Angeline; Wait, Eric; Bryant, Kelly G.; Kossenkov, Andrew V.; Hayden, James E.; Vaira, Valentina; Morotti, Annamaria; Ferrero, Stefano; Bosari, Silvano; Gabrilovich, Dmitry I.; Languino, Lucia R.; Cohen, Andrew R.; Altieri, Dario C.

    2016-01-01

    The role of mitochondria in cancer is controversial. Using a genome-wide shRNA screen, we now show that tumours reprogram a network of mitochondrial dynamics operative in neurons, including syntaphilin (SNPH), kinesin KIF5B and GTPase Miro1/2 to localize mitochondria to the cortical cytoskeleton and power the membrane machinery of cell movements. When expressed in tumours, SNPH inhibits the speed and distance travelled by individual mitochondria, suppresses organelle dynamics, and blocks chemotaxis and metastasis, in vivo. Tumour progression in humans is associated with downregulation or loss of SNPH, which correlates with shortened patient survival, increased mitochondrial trafficking to the cortical cytoskeleton, greater membrane dynamics and heightened cell invasion. Therefore, a SNPH network regulates metastatic competence and may provide a therapeutic target in cancer. PMID:27991488

  6. Granger causality-based synaptic weights estimation for analyzing neuronal networks.

    PubMed

    Shao, Pei-Chiang; Huang, Jian-Jia; Shann, Wei-Chang; Yen, Chen-Tung; Tsai, Meng-Li; Yen, Chien-Chang

    2015-06-01

    Granger causality (GC) analysis has emerged as a powerful analytical method for estimating the causal relationship among various types of neural activity data. However, two problems remain unclear and require further research: (1) the GC measure is designed to be nonnegative in its original form and thus cannot differentiate the effects of excitation and inhibition between neurons; (2) how is the estimated causality related to the underlying synaptic weights? Based on the GC, we propose a computational algorithm under a best linear predictor assumption for analyzing neuronal networks by estimating the synaptic weights among them. Under this assumption, the GC analysis can be extended to measure both excitatory and inhibitory effects between neurons. The method was examined on three types of simulated networks: those with linear, almost linear, and nonlinear network structures. The method was also applied to real spike-train data from the anterior cingulate cortex (ACC) and the striatum (STR). The results showed that, under quinpirole administration, there were significant excitatory effects within the ACC, excitatory effects from the ACC to the STR, and inhibitory effects within the STR.
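
    For readers unfamiliar with the underlying quantity, the following sketch computes classical pairwise Granger causality from two signals by comparing the residual variance of a restricted autoregressive model (the target's own past) with that of a full model that also includes the putative source's past. The paper's signed, weight-estimating extension is more involved; the lag order and toy data below are our own illustrative choices.

      import numpy as np

      def lagged(z, order):
          # design matrix whose row for time t holds z[t-1], ..., z[t-order]
          n = len(z)
          return np.column_stack([z[order - 1 - k: n - 1 - k] for k in range(order)])

      def granger_causality(x, y, order=5):
          # GC(y -> x) = ln(residual variance without y / residual variance with y)
          x, y = np.asarray(x, float), np.asarray(y, float)
          target = x[order:]
          X_r = lagged(x, order)                                   # restricted model
          X_f = np.hstack([lagged(x, order), lagged(y, order)])    # full model
          res_r = target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]
          res_f = target - X_f @ np.linalg.lstsq(X_f, target, rcond=None)[0]
          return float(np.log(res_r.var() / res_f.var()))

      # toy check: y drives x with a one-step lag, so GC(y -> x) should exceed GC(x -> y)
      rng = np.random.default_rng(0)
      y = rng.normal(size=2000)
      x = 0.6 * np.roll(y, 1) + 0.3 * rng.normal(size=2000)
      x[0] = 0.0
      print(granger_causality(x, y), granger_causality(y, x))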

  7. Neuronal network disturbance after focal ischemia in rats

    SciTech Connect

    Kataoka, K.; Hayakawa, T.; Yamada, K.; Mushiroi, T.; Kuroda, R.; Mogami, H. )

    1989-09-01

    We studied functional disturbances following left middle cerebral artery occlusion in rats. Neuronal function was evaluated by (14C)2-deoxyglucose autoradiography 1 day after occlusion. We analyzed the mechanisms of change in glucose utilization outside the infarct using Fink-Heimer silver impregnation, axonal transport of wheat germ agglutinin-conjugated-horseradish peroxidase, and succinate dehydrogenase histochemistry. One day after occlusion, glucose utilization was remarkably reduced in the areas surrounding the infarct. There were many silver grains indicating degeneration of the synaptic terminals in the cortical areas surrounding the infarct and the ipsilateral cingulate cortex. Moreover, in the left thalamus where the left middle cerebral artery supplied no blood, glucose utilization significantly decreased compared with sham-operated rats. In the left thalamus, massive silver staining of degenerated synaptic terminals and decreases in succinate dehydrogenase activity were observed 4 and 5 days after occlusion. The absence of succinate dehydrogenase staining may reflect early changes in retrograde degeneration of thalamic neurons after ischemic injury of the thalamocortical pathway. Terminal degeneration even affected areas remote from the infarct: there were silver grains in the contralateral hemisphere transcallosally connected to the infarct and in the ipsilateral substantia nigra. Axonal transport study showed disruption of the corticospinal tract by subcortical ischemia; the transcallosal pathways in the cortex surrounding the infarct were preserved. The relation between neural function and the neuronal network in the area surrounding the focal cerebral infarct is discussed with regard to ischemic penumbra and diaschisis.

  8. Microstate description of stable chaos in networks of spiking neurons

    NASA Astrophysics Data System (ADS)

    Puelma Touzel, Maximilian; Monteforte, Michael; Wolf, Fred

    2014-03-01

    Dynamic instabilities have been proposed to explain the decorrelation of stimulus-driven activity observed in sensory areas such as the olfactory bulb, but are sensitive to noise. Simple neuron models coupled through inhibition can nevertheless exhibit a negative maximum Lyapunov exponent, despite displaying irregular and asynchronous (AI) activity and having an exponential instability to finite-sized perturbations above a critical strength that scales with the size, density and activity of the circuit. This stable chaos, a phenomenon first found in coupled-map lattices, produces a large, finite set of locally-attracting, yet mutually-repelling AI spike sequences ideally suited for discrete, high-dimensional coding. We analyze the effects of finite-sized perturbations on the spiking microstate and reveal the mechanism underlying the stable chaos. From this, we can analytically derive the aforementioned scaling relations and estimate the critical value of previously observed transitions to conventional chaos. This work highlights the features of intra-neuron dynamics and inter-neuron coupling that generate this phase space structure, which might serve as an attractor reservoir that downstream networks can use to decode sensory input.

  9. Serotonin and Prefrontal Cortex Function: Neurons, Networks, and Circuits

    PubMed Central

    Puig, M. Victoria; Gulledge, Allan T.

    2012-01-01

    Higher-order executive tasks such as learning, working memory, and behavioral flexibility depend on the prefrontal cortex (PFC), the brain region most elaborated in primates. The prominent innervation by serotonin neurons and the dense expression of serotonergic receptors in the PFC suggest that serotonin is a major modulator of its function. The most abundant serotonin receptors in the PFC, 5-HT1A, 5-HT2A and 5-HT3A receptors, are selectively expressed in distinct populations of pyramidal neurons and inhibitory interneurons, and play a critical role in modulating cortical activity and neural oscillations (brain waves). Serotonergic signaling is altered in many psychiatric disorders such as schizophrenia and depression, where parallel changes in receptor expression and brain waves have been observed. Furthermore, many psychiatric drug treatments target serotonergic receptors in the PFC. Thus, understanding the role of serotonergic neurotransmission in PFC function is of major clinical importance. Here we review recent findings concerning the powerful influences of serotonin on single neurons, neural networks, and cortical circuits in the PFC of the rat, where the effects of serotonin have been most thoroughly studied. PMID:22076606

  10. Constrained Synaptic Connectivity in Functional Mammalian Neuronal Networks Grown on Patterned Surfaces

    NASA Astrophysics Data System (ADS)

    Bourdieu, Laurent; Wyart, Claire; Ybert, Christophe; Herr, Catherine; Chatenay, Didier

    2002-03-01

    The use of ordered neuronal networks in vitro is a promising approach to study the development and the activity of neuronal assemblies. However in previous attempts, sufficient growth control and physiological maturation of neurons could not be achieved. We describe an original protocol in which polylysine patterns confine the adhesion of cellular bodies to prescribed spots and the neuritic growth to thin lines. Hippocampal neurons are maintained healthy in serum free medium up to five weeks in vitro. Electrophysiology and immunochemistry show that neurons exhibit mature excitatory and inhibitory synapses and calcium imaging reveals spontaneous bursting activity of neurons in isolated networks. Neurons in these geometrical networks form functional synapses preferentially to their first neighbors. We have therefore established a simple and robust protocol to constrain both the location of neuronal cell bodies and their pattern of connectivity.

  11. Training a Network of Electronic Neurons for Control of a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated and results of an experiment with a mobile robot that is completely autonomously driving in an unknown environment with obstacles are presented.

  12. Graded information extraction by neural-network dynamics with multihysteretic neurons.

    PubMed

    Tsuboshita, Yukihiro; Okamoto, Hiroshi

    2009-09-01

    A major goal in the study of neural networks is to create novel information-processing algorithms inferred from the real brain. Recent neurophysiological evidence of graded persistent activity suggests that the brain possesses neural mechanisms for retrieval of graded information, which could be described by neural-network dynamics with attractors that depend continuously on the initial state. Theoretical studies have also demonstrated that model neurons with a multihysteretic response property can generate robust continuous attractors. Inspired by these lines of evidence, we proposed an algorithm given by the multihysteretic neuron-network dynamics, devised to retrieve graded information specific to a given topic (i.e., context, represented by the initial state). To demonstrate the validity of the proposed algorithm, we examined keyword extraction from documents, which is well suited for evaluating the appropriateness of graded-information retrieval. The performance of keyword extraction using our algorithm was significantly higher (measured by the average precision of document retrieval, for which the appropriateness of keyword extraction is crucial) than that of standard document-retrieval methods. Moreover, our algorithm exhibited much higher performance than neural-network dynamics with bistable neurons, which can also produce robust continuous attractors but only represent dichotomous information at the single-cell level. These findings indicate that the capability to manage graded information at the single-cell level was essential for obtaining a high-performing algorithm.

  13. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially well suited for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
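
    The sketch below illustrates only the event-driven bookkeeping (a priority queue of provisional threshold crossings and delayed pulse arrivals) for deterministic perfect integrate-and-fire neurons; the paper's exact method additionally samples stochastic first-passage times of the membrane potential. All parameter values here are arbitrary placeholders.

      import heapq
      import numpy as np

      N, V_th, drift, w, delay, T = 5, 1.0, 2.0, 0.05, 0.01, 5.0
      rng = np.random.default_rng(1)
      V = rng.uniform(0.0, V_th, N)        # membrane potentials, valid at time t_now
      t_now = 0.0
      epoch = np.zeros(N, dtype=int)       # invalidates stale threshold-crossing events
      events = []                          # heap of (time, kind, neuron, epoch)

      def schedule_crossing(i):
          # provisional threshold-crossing time under pure drift from the current state
          heapq.heappush(events, (t_now + (V_th - V[i]) / drift, "spike", i, int(epoch[i])))

      for i in range(N):
          schedule_crossing(i)

      spikes = []
      while events:
          t_ev, kind, i, ep = heapq.heappop(events)
          if t_ev > T:
              break
          if kind == "spike" and ep != epoch[i]:
              continue                     # stale: neuron i was perturbed after scheduling
          V += drift * (t_ev - t_now)      # advance every potential to the event time
          t_now = t_ev
          if kind == "spike":
              spikes.append((t_now, i))
              V[i] = 0.0                   # reset
              epoch[i] += 1
              schedule_crossing(i)
              for j in range(N):           # delayed Dirac-like excitatory interactions
                  if j != i:
                      heapq.heappush(events, (t_now + delay, "psp", j, 0))
          else:                            # a delayed pulse arrives at neuron i
              V[i] = min(V[i] + w, V_th)
              epoch[i] += 1
              schedule_crossing(i)

      print(f"{len(spikes)} spikes in [0, {T}]")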

  14. Recent Developments in VSD Imaging of Small Neuronal Networks

    ERIC Educational Resources Information Center

    Hill, Evan S.; Bruno, Angela M.; Frost, William N.

    2014-01-01

    Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit…

  16. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons

    PubMed Central

    Hutt, Axel; Mierau, Andreas; Lefebvre, Jérémie

    2016-01-01

    Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanisms supporting this variability are, however, poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system’s response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles. PMID:27669018

  17. Neuron-Like Networks Between Ribosomal Proteins Within the Ribosome

    NASA Astrophysics Data System (ADS)

    Poirot, Olivier; Timsit, Youri

    2016-05-01

    From the brain to the World Wide Web, information-processing networks share common scale-invariant properties. Here, we reveal the existence of neural-like networks at a molecular scale within the ribosome. We show that with their extensions, ribosomal proteins form complex assortative interaction networks through which they communicate via tiny interfaces. The analysis of the crystal structures of 50S eubacterial particles reveals that most of these interfaces involve key phylogenetically conserved residues. The systematic observation of interactions between basic and aromatic amino acids at the interfaces and along the extensions provides new structural insights that may contribute to deciphering the molecular mechanisms of signal transmission within or between the ribosomal proteins. Similar to neurons interacting through “molecular synapses”, ribosomal proteins form a network that suggests an analogy with a simple molecular brain in which the “sensory-proteins” innervate the functional ribosomal sites, while the “inter-proteins” interconnect them into circuits suitable for processing the information flow that circulates during protein synthesis. It is likely that these circuits have evolved to coordinate both the complex macromolecular motions and the binding of the multiple factors during translation. This opens new perspectives on nanoscale information transfer and processing.

  18. Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.

    PubMed

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime

    2016-01-01

    It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that is dependent on the characteristics of neurons and network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics for performing a simulation of a code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate the characteristics of neurons such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem and the solutions are not theoretically guaranteed, the estimated parameters appear consistent with those of real neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for understanding communication within a neural network and, in turn, a basis for natural intelligence.

  19. Quantification of degeneracy in Hodgkin-Huxley neurons on Newman-Watts small world network.

    PubMed

    Man, Menghua; Zhang, Ya; Ma, Guilei; Friston, Karl; Liu, Shanghe

    2016-08-07

    Degeneracy is a fundamental source of biological robustness, complexity and evolvability in many biological systems. However, degeneracy is often confused with redundancy. Furthermore, the quantification of degeneracy has not been addressed for realistic neuronal networks. The objective of this paper is to characterize degeneracy in neuronal network models via quantitative mathematical measures. Firstly, we establish Hodgkin-Huxley neuronal networks with Newman-Watts small world network architectures. Secondly, in order to calculate the degeneracy, redundancy and complexity in the ensuing networks, we use information entropy to quantify the information a neuronal response carries about the stimulus - and mutual information to measure the contribution of each subset of the neuronal network. Finally, we analyze the interdependency of degeneracy, redundancy and complexity - and how these three measures depend upon network architectures. Our results suggest that degeneracy can be applied to any neuronal network as a formal measure, and that degeneracy is distinct from redundancy. Qualitatively, degeneracy and complexity are more highly correlated across different network architectures than redundancy is. Quantitatively, the relationship of both degeneracy and redundancy to complexity depends on network coupling strength: both degeneracy and redundancy increase with complexity for small coupling strengths; however, as coupling strength increases, redundancy decreases with complexity (in contrast to degeneracy, which is relatively invariant). These results suggest that degeneracy is a general topological characteristic of neuronal networks, which could be applied quantitatively in neuroscience and connectomics.
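
    As a minimal illustration (our own sketch, not the paper's pipeline), the function below estimates the discrete mutual information between a stimulus sequence and a binned neuronal response; quantities of this kind are the basic ingredient from which redundancy, degeneracy, and complexity measures are assembled, for example by contrasting I(S;R) for different subsets of the network against the full-network value.

      import numpy as np

      def mutual_information(stim, resp):
          # I(S;R) in bits from two equal-length sequences of discrete labels
          stim, resp = np.asarray(stim), np.asarray(resp)
          s_vals, s_idx = np.unique(stim, return_inverse=True)
          r_vals, r_idx = np.unique(resp, return_inverse=True)
          joint = np.zeros((len(s_vals), len(r_vals)))
          np.add.at(joint, (s_idx, r_idx), 1.0)
          joint /= joint.sum()
          ps = joint.sum(axis=1, keepdims=True)
          pr = joint.sum(axis=0, keepdims=True)
          with np.errstate(divide="ignore", invalid="ignore"):
              ratio = np.where(joint > 0, joint / (ps * pr), 1.0)
          return float(np.sum(joint * np.log2(ratio)))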

  20. Search for periodicity in the observational data by means of artificial neuron networks

    NASA Astrophysics Data System (ADS)

    Baluev, R.

    2012-05-01

    The applicability of artificial neural networks is considered for two classical problems of observational data reduction: (i) detecting periodic oscillations in noisy time series and (ii) estimating the frequency of such an oscillation in a given time series. The values of the time series are fed to the network inputs, and the output is, respectively, either an indicator of the presence of a signal (from 0 to 1) or an estimate of its frequency. It is shown that the theoretical limit a neural network can reach when trained to solve such problems corresponds to the Bayesian theory of estimation and testing of statistical hypotheses. The networks were trained with the open-source package FANN. The best results were achieved using the Cascade2 algorithm, which finds the optimal number of network neurons (not just the weights of the connections between them). In comparison with traditional methods based on the periodogram, which require long calculations, the trained neural network works almost instantly. Thus, artificial neural networks are very promising for the processing of large data sets. However, the signal-detection threshold has so far fallen short of the Bayesian theoretical limit. In addition, it is not yet possible to train the neural network to analyze time series with an arbitrarily uneven distribution of observations. This indicates a need for further investigation to improve the efficiency of the method.
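
    A toy version of problem (i) can be set up as follows. This is our illustration with scikit-learn rather than the paper's FANN/Cascade2 configuration: short noisy series are labeled by whether a sinusoid is present, and a small multilayer perceptron is trained to flag it from the raw samples. All sizes, frequencies, and noise levels are arbitrary.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      n_samples, n_points = 2000, 64
      t = np.linspace(0.0, 1.0, n_points)

      X, y = [], []
      for _ in range(n_samples):
          has_signal = rng.random() < 0.5
          freq, phase = rng.uniform(2, 20), rng.uniform(0, 2 * np.pi)
          series = rng.normal(0.0, 1.0, n_points)
          if has_signal:
              series += np.sin(2 * np.pi * freq * t + phase)
          X.append(series)
          y.append(int(has_signal))

      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
      clf.fit(np.array(X[:1500]), y[:1500])
      print("held-out accuracy:", clf.score(np.array(X[1500:]), y[1500:]))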

  1. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    PubMed

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-12-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  2. C-Mantec: a novel constructive neural network algorithm incorporating competition between neurons.

    PubMed

    Subirats, José L; Franco, Leonardo; Jerez, José M

    2012-02-01

    C-Mantec is a novel neural network constructive algorithm that combines competition between neurons with a stable modified perceptron learning rule. The neuron learning is governed by the thermal perceptron rule that ensures stability of the acquired knowledge while the architecture grows and while the neurons compete for new incoming information. Competition makes it possible that, even after new units have been added to the network, existing neurons can still learn if the incoming information is similar to their stored knowledge, and this constitutes a major difference with existing constructive algorithms. The new algorithm is tested on two different sets of benchmark problems: a Boolean function set used in logic circuit design and a well-studied set of real-world problems. Both sets were used to analyze the size of the constructed architectures and the generalization ability obtained and to compare the results with those from other standard and well-known classification algorithms. The problem of overfitting is also analyzed, and a new built-in method to avoid its effects is devised and successfully applied within an active learning paradigm that filters noisy examples. The results show that the new algorithm generates very compact neural architectures with state-of-the-art generalization capabilities.

  3. Connectivity, excitability and activity patterns in neuronal networks

    NASA Astrophysics Data System (ADS)

    le Feber, Joost; Stoyanova, Irina I.; Chiappalone, Michela

    2014-06-01

    Extremely synchronized firing patterns such as those observed in brain diseases like epilepsy may result from excessive network excitability. Although network excitability is closely related to (excitatory) connectivity, a direct measure for network excitability remains unavailable. Several methods currently exist for estimating network connectivity, most of which are related to cross-correlation. An example is the conditional firing probability (CFP) analysis which calculates the pairwise probability (CFPi,j) that electrode j records an action potential at time t = τ, given that electrode i recorded a spike at t = 0. However, electrode i often records multiple spikes within the analysis interval, and CFP values are biased by the on-going dynamic state of the network. Here we show that in a linear approximation this bias may be removed by deconvoluting CFPi,j with the autocorrelation of i (i.e. CFPi,i), to obtain the single pulse response (SPRi,j)—the average response at electrode j to a single spike at electrode i. Thus, in a linear system SPRs would be independent of the dynamic network state. Nonlinear components of synaptic transmission, such as facilitation and short term depression, will however still affect SPRs. Therefore SPRs provide a clean measure of network excitability. We used carbachol and ghrelin to moderately activate cultured cortical networks to affect their dynamic state. Both neuromodulators transformed the bursting firing patterns of the isolated networks into more dispersed firing. We show that the influence of the dynamic state on SPRs is much smaller than the effect on CFPs, but not zero. The remaining difference reflects the alteration in network excitability. We conclude that SPRs are less contaminated by the dynamic network state and that mild excitation may decrease network excitability, possibly through short term synaptic depression.
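
    A rough sketch of this estimator (our reading of the method, not the authors' code) is given below: the CFP is estimated from binned spike trains as a spike-triggered firing probability, and a regularized FFT-based deconvolution with the autocorrelation approximates the single pulse response. The regularization constant and the toy data are ad hoc choices.

      import numpy as np

      def cfp(spikes_i, spikes_j, max_lag):
          # P(spike at j at lag tau | spike at i at lag 0), tau = 0..max_lag-1, binned trains
          triggers = np.flatnonzero(spikes_i)
          counts = np.zeros(max_lag)
          for t in triggers:
              window = spikes_j[t:t + max_lag]
              counts[:len(window)] += window
          return counts / max(len(triggers), 1)

      def spr(spikes_i, spikes_j, max_lag, eps=1e-3):
          # Wiener-style deconvolution of CFP[i,j] by the autocorrelation CFP[i,i]
          F_ij = np.fft.rfft(cfp(spikes_i, spikes_j, max_lag))
          F_ii = np.fft.rfft(cfp(spikes_i, spikes_i, max_lag))
          return np.fft.irfft(F_ij * np.conj(F_ii) / (np.abs(F_ii) ** 2 + eps), n=max_lag)

      # toy data: electrode j tends to follow electrode i by 3 bins
      rng = np.random.default_rng(0)
      si = (rng.random(10000) < 0.02).astype(float)
      sj = np.roll(si, 3) * (rng.random(10000) < 0.8)
      print(spr(si, sj, max_lag=20).round(3))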

  4. Multiparametric characterisation of neuronal network activity for in vitro agrochemical neurotoxicity assessment.

    PubMed

    Alloisio, Susanna; Nobile, Mario; Novellino, Antonio

    2015-05-01

    The last few decades have seen the marketing of hundreds of new pesticide products with a forecasted expansion of the global agrochemical industry. As several pesticides directly target nervous tissue as their mechanism of toxicity, alternative methods to routine in vivo animal testing, such as the Multi Electrode Array (MEA)-based approach, have been proposed as an in vitro tool to perform sensitive, quick and low-cost neurotoxicological screening. Here, we examined the effects of a training set of eleven active substances known to have neuronal or non-neuronal targets, contained in the most commonly used agrochemicals, on the spontaneous electrical activity of cortical neuronal networks grown on MEAs. A multiparametric characterisation of neuronal network firing and bursting was performed with the aim of investigating how this can contribute to the efficient evaluation of in vitro chemical-induced neurotoxicity. The analysis of the MFR, MBR, MBD, MISI_B and % Spikes_B parameters identified four different groups of chemicals: one wherein only inhibition is observed (chlorpyrifos, deltamethrin, orysastrobin, dimoxystrobin); a second in which all parameters, except the MISI_B, are inhibited (carbaryl, quinmerac); a third in which increases at low chemical concentration are followed by decreases at high concentration, with the exception of MISI_B, which only decreased (fipronil); and a fourth in which no effects are observed (paraquat, glyphosate, imidacloprid, mepiquat). The overall results demonstrated that the multiparametric description of neuronal network activity makes the MEA-based screening platform an accurate and consistent tool for the evaluation of the toxic potential of chemicals. In particular, among the bursting parameters, MISI_B correlated best with potency and may help to better define chemical toxicity when MFR is affected only at relatively high concentrations.

  5. Effect of synaptic plasticity on the structure and dynamics of disordered networks of coupled neurons

    NASA Astrophysics Data System (ADS)

    Bayati, M.; Valizadeh, A.

    2012-07-01

    In an all-to-all network of integrate-and-fire neurons in which there is disorder in the intrinsic oscillatory frequencies of the neurons, we show that, through spike-timing-dependent plasticity, the synapses whose presynaptic neurons are high-frequency tend to be potentiated, while the links originating from the low-frequency neurons are weakened. The emergent effective flow of directed connections introduces the high-frequency neurons as the more influential elements in the network and facilitates synchronization by decreasing the synaptic cost for the onset of synchronization.
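
    The plasticity rule referred to here can be sketched with a standard pair-based STDP update (parameters below are illustrative, not those of the paper): a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed otherwise, with exponentially decaying windows.

      import numpy as np

      A_plus, A_minus, tau_plus, tau_minus = 0.01, 0.012, 20.0, 20.0   # toy values, ms

      def stdp_update(w, t_pre, t_post, w_max=1.0):
          dt = t_post - t_pre
          if dt > 0:                                   # pre before post: potentiation
              w += A_plus * np.exp(-dt / tau_plus)
          else:                                        # post before pre: depression
              w -= A_minus * np.exp(dt / tau_minus)
          return float(np.clip(w, 0.0, w_max))

      # a synapse driven by a fast presynaptic oscillator sees causal pairings more often,
      # so it is potentiated more, consistent with the effect described above
      w = 0.5
      w = stdp_update(w, t_pre=10.0, t_post=13.0)      # causal pairing -> strengthened
      print(w)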

  6. A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.

    PubMed

    Hightower, M; Gross, G W

    1985-11-01

    Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.

  7. Network dynamics of cultured hippocampal neurons in a multi-electrode array

    NASA Astrophysics Data System (ADS)

    Taguchi, Takahisa; Kudoh, Suguru N.

    2005-02-01

    Neurons in dissociated culture autonomously re-organize their functional neuronal networks after elongating neurites and establishing synaptic connections. The spatio-temporal patterns of activity in the networks might be a reflection of functional neuron assemblies. The functional connections are dynamically modified by synaptic potentiation, and this process may be required for reorganization of functional groups of neurons. Such neuron assemblies are critical for information processing in the brain. To visualize the functional connections between neurons, we analyzed the autonomous activity of synaptically induced action potentials in living neuronal networks on a multi-electrode array, using the "connection map analysis" that we developed for this purpose. Moreover, we designed an original wide-area electrode array and succeeded in recording spontaneous action potentials from a wider area than commercial multi-electrode arrays.

  8. A real-time hybrid neuron network for highly parallel cognitive systems.

    PubMed

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under `real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real time, yielding a speed-up of 12.4x compared to the state of the art.

  9. Linking Neurons to Network Function and Behavior by Two-Photon Holographic Optogenetics and Volumetric Imaging.

    PubMed

    Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig

    2017-05-17

    We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Neuronal oscillations and functional interactions between resting state networks.

    PubMed

    Lei, Xu; Wang, Yulin; Yuan, Hong; Mantini, Dante

    2014-07-01

    Functional magnetic resonance imaging (fMRI) studies showed that resting state activity in the healthy brain is organized into multiple large-scale networks encompassing distant regions. A key finding of resting state fMRI studies is the anti-correlation typically observed between the dorsal attention network (DAN) and the default mode network (DMN), which - during task performance - are activated and deactivated, respectively. Previous studies have suggested that alcohol administration modulates the balance of activation/deactivation in brain networks, and that it induces significant changes in oscillatory activity measured by electroencephalography (EEG). However, our knowledge of alcohol-induced changes in band-limited EEG power and their potential link with the functional interactions between DAN and DMN is still very limited. Here we address this issue, examining the neuronal effects of alcohol administration during resting state by using simultaneous EEG-fMRI. Our findings show increased EEG power in the theta frequency band (4-8 Hz) after administration of alcohol compared to placebo, which was prominent over the frontal cortex. More interestingly, increased frontal tonic EEG activity in this band was associated with greater anti-correlation between the DAN and the frontal component of the DMN. Furthermore, EEG theta power and DAN-DMN anti-correlation were relatively greater in subjects who reported a feeling of euphoria after alcohol administration, which may result from a diminished inhibition exerted by the prefrontal cortex. Overall, our findings suggest that slow brain rhythms are responsible for dynamic functional interactions between brain networks. They also confirm the applicability and potential usefulness of EEG-fMRI for central nervous system drug research.

  11. A small change in neuronal network topology can induce explosive synchronization transition and activity propagation in the entire network.

    PubMed

    Wang, Zhenhua; Tian, Changhai; Dhamala, Mukesh; Liu, Zonghua

    2017-04-03

    We here study explosive synchronization transitions and network activity propagation in networks of coupled neurons to provide a new understanding of the relationship between network topology and explosive dynamical transitions, as in epileptic seizures and their propagation in the brain. We model local network motifs and configurations of coupled neurons and analyze the activity propagation from a group of active neurons to their inactive neuron neighbors in a variety of network configurations. We find that neuronal activity propagation is limited to local regions when the network is highly clustered with modular structures, as in normal brain networks. When the network cluster structure is slightly changed, the activity propagates to the entire network, which is reminiscent of epileptic seizure propagation in the brain. Finally, we analyze intracranial electroencephalography (IEEG) recordings of a seizure episode from an epilepsy patient and uncover that an explosive synchronization-like transition occurs around the clinically defined onset of the seizure. These findings may provide a possible mechanism for the recurrence of epileptic seizures, which are known to be the result of aberrant neuronal network structure and/or function in the brain.

  12. Classification of adult human dentate nucleus border neurons: Artificial neural networks and multidimensional approach.

    PubMed

    Grbatinić, Ivan; Milošević, Nebojša

    2016-09-07

    The primary aim of this study is to investigate whether external and internal border neurons of the adult human dentate nucleus express the same neuromorphological features or belong to different morphological types, i.e., whether they can be classified not only by their topology as external and internal, but also by their morphology in addition to their topology. The secondary aim is to determine and compare various methodologies in order to address the primary aim more accurately and efficiently. Blocks of tissue were cut out from the adult human cerebellum and stained according to the Kopsch-Bubenaite method. Border neurons of the dentate nucleus were investigated and digitized under the light microscope and processed thereafter. Seventeen parameters quantifying various aspects of neuron morphology are then measured. They can be categorized as shape, magnitude, complexity, length and branching parameters. The analyses used are neural networks and separate unifactor, cluster, principal component, discriminant and correlation-comparison analyses. The external and internal border neurons differ significantly in six of the seventeen parameters investigated, mainly concerning dendritic ramification patterns, overall shape of the dendritic tree and dendritic length. All six methodological approaches are in agreement, showing slight clustering of the data. Classification is based on six parameters: neuron (field) area, dendritic (field) area, total dendrite length, and position of maximal dendritic arborization density. Cluster analysis shows two data clusters. Separate unifactor analysis demonstrates inter-cluster differences with statistical significance (p < 0.05) for all six parameters separately. Principal component, discriminant and correlation-comparison analyses further confirm and explain this result in a more integrative manner. Thus, these neurons can be classified not only according to their location but…

  13. Integration of neuroblasts into a two-dimensional small world neuronal network

    NASA Astrophysics Data System (ADS)

    Schneider-Mizell, Casey; Zochowski, Michal; Sander, Leonard

    2009-03-01

    Neurogenesis in the adult brain has been suggested to be important for learning and for functional robustness to neuronal death. New neurons integrate themselves into existing neuronal networks by moving into a target destination, extending axonal and dendritic processes, and inducing synaptogenesis to connect to active neurons. We hypothesize that increased plasticity of the network to novel stimuli can arise from activity-dependent cell and process motility rules. In complement to a similar in vitro model, we investigate a computational model of a two-dimensional small world network of integrate-and-fire neurons. After steady-state activity is reached in the extant network, we introduce new neurons which move, stop, and connect themselves through rules governed by position and firing rate.

  14. Energy-efficient population coding constrains network size of a neuronal array system

    PubMed Central

    Yu, Lianchun; Zhang, Chi; Liu, Liwei; Yu, Yuguo

    2016-01-01

    We consider the open issue of how the energy efficiency of the neural information transmission process, in a general neuronal array, constrains the network size, and how well this network size ensures the reliable transmission of neural information in a noisy environment. By direct mathematical analysis, we have obtained general solutions proving that there exists an optimal number of neurons in the network, where the average coding energy cost (defined as energy consumption divided by mutual information) per neuron passes through a global minimum for both subthreshold and superthreshold signals. With increases in background noise intensity, the optimal neuronal number decreases for subthreshold signals and increases for suprathreshold signals. The existence of an optimal number of neurons in an array network reveals a general rule for population coding that states that the neuronal number should be large enough to ensure reliable information transmission that is robust to the noisy environment but small enough to minimize energy cost. PMID:26781354
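
    The qualitative argument can be reproduced with a toy model (ours, not the paper's biophysical one): if the mutual information transmitted by an array of N noisy neurons saturates while the metabolic energy grows roughly linearly with N, then the average coding energy cost E(N)/I(N) has an interior minimum at a finite, "optimal" network size. The functional forms and constants below are invented purely for illustration.

      import numpy as np

      N = np.arange(1, 201)
      info = np.log2(1.0 + 5.0 * N) * (1.0 - np.exp(-N / 30.0))   # saturating information (made up)
      energy = 1.0 * N + 0.2                                      # roughly linear energy (made up)
      cost = energy / info                                        # average coding energy cost
      print("optimal N in this toy model:", int(N[np.argmin(cost)]))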

  15. Energy-efficient population coding constrains network size of a neuronal array system.

    PubMed

    Yu, Lianchun; Zhang, Chi; Liu, Liwei; Yu, Yuguo

    2016-01-19

    We consider the open issue of how the energy efficiency of the neural information transmission process, in a general neuronal array, constrains the network size, and how well this network size ensures the reliable transmission of neural information in a noisy environment. By direct mathematical analysis, we have obtained general solutions proving that there exists an optimal number of neurons in the network, where the average coding energy cost (defined as energy consumption divided by mutual information) per neuron passes through a global minimum for both subthreshold and superthreshold signals. With increases in background noise intensity, the optimal neuronal number decreases for subthreshold signals and increases for suprathreshold signals. The existence of an optimal number of neurons in an array network reveals a general rule for population coding that states that the neuronal number should be large enough to ensure reliable information transmission that is robust to the noisy environment but small enough to minimize energy cost.

  16. Energy-efficient population coding constrains network size of a neuronal array system

    NASA Astrophysics Data System (ADS)

    Yu, Lianchun; Zhang, Chi; Liu, Liwei; Yu, Yuguo

    2016-01-01

    We consider the open issue of how the energy efficiency of the neural information transmission process, in a general neuronal array, constrains the network size, and how well this network size ensures the reliable transmission of neural information in a noisy environment. By direct mathematical analysis, we have obtained general solutions proving that there exists an optimal number of neurons in the network, where the average coding energy cost (defined as energy consumption divided by mutual information) per neuron passes through a global minimum for both subthreshold and superthreshold signals. With increases in background noise intensity, the optimal neuronal number decreases for subthreshold signals and increases for suprathreshold signals. The existence of an optimal number of neurons in an array network reveals a general rule for population coding that states that the neuronal number should be large enough to ensure reliable information transmission that is robust to the noisy environment but small enough to minimize energy cost.

  17. Biological modelling of a computational spiking neural network with neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  18. Biological modelling of a computational spiking neural network with neuronal avalanches.

    PubMed

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-06-28

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).

  19. Functional Phase Response Curves: A Method for Understanding Synchronization of Adapting Neurons

    PubMed Central

    Cui, Jianxia; Canavier, Carmen C.; Butera, Robert J.

    2009-01-01

    Phase response curves (PRCs) for a single neuron are often used to predict the synchrony of mutually coupled neurons. Previous theoretical work on pulse-coupled oscillators used single-pulse perturbations. We propose an alternate method in which functional PRCs (fPRCs) are generated using a train of pulses applied at a fixed delay after each spike, with the PRC measured when the phasic relationship between the stimulus and the subsequent spike in the neuron has converged. The essential information is the dependence of the recovery time from pulse onset until the next spike as a function of the delay between the previous spike and the onset of the applied pulse. Experimental fPRCs in Aplysia pacemaker neurons were different from single-pulse PRCs, principally due to adaptation. In the biological neuron, convergence to the fully adapted recovery interval was slower at some phases than that at others because the change in the effective intrinsic period due to adaptation changes the effective phase resetting in a way that opposes and slows the effects of adaptation. The fPRCs for two isolated adapting model neurons were used to predict the existence and stability of 1:1 phase-locked network activity when the two neurons were coupled. A stability criterion was derived by linearizing a coupled map based on the fPRC and the existence and stability criteria were successfully tested in two-simulated-neuron networks with reciprocal inhibition or excitation. The fPRC is the first PRC-based tool that can account for adaptation in analyzing networks of neural oscillators. PMID:19420126
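
    A hedged sketch of the measurement protocol (not the authors' code) is shown below for an adapting leaky integrate-and-fire neuron: a brief current pulse is delivered at a fixed delay after every spike and, once the stimulus-spike relationship has settled, the recovery interval from pulse onset to the next spike is read out; sweeping the delay traces out the functional PRC. All model parameters and values are arbitrary.

      import numpy as np

      def fprc_point(delay, pulse_amp=0.05, pulse_dur=1.0, n_cycles=40, dt=0.05):
          # adapting leaky integrate-and-fire neuron; toy parameters, time in ms
          tau_m, tau_w, a_w, I0, v_th = 10.0, 100.0, 0.005, 0.15, 1.0
          v, w_adapt, t, t_last = 0.0, 0.0, 0.0, -1e9
          recovery, n_spk = None, 0
          while n_spk < n_cycles:
              pulsing = 0.0 <= t - (t_last + delay) < pulse_dur
              I = I0 + (pulse_amp if pulsing else 0.0)
              v += dt * (-v / tau_m + I - w_adapt)
              w_adapt += dt * (-w_adapt / tau_w)
              t += dt
              if v >= v_th:
                  recovery = t - (t_last + delay)     # pulse onset -> this spike
                  v, w_adapt = 0.0, w_adapt + a_w     # reset + spike-triggered adaptation
                  t_last, n_spk = t, n_spk + 1
          return recovery                             # converged value after n_cycles spikes

      # sweep the spike-to-pulse delay to trace out the functional PRC
      fprc = [(d, round(fprc_point(d), 2)) for d in np.arange(2.0, 12.0, 2.0)]
      print(fprc)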

  20. Functional phase response curves: a method for understanding synchronization of adapting neurons.

    PubMed

    Cui, Jianxia; Canavier, Carmen C; Butera, Robert J

    2009-07-01

    Phase response curves (PRCs) for a single neuron are often used to predict the synchrony of mutually coupled neurons. Previous theoretical work on pulse-coupled oscillators used single-pulse perturbations. We propose an alternate method in which functional PRCs (fPRCs) are generated using a train of pulses applied at a fixed delay after each spike, with the PRC measured when the phasic relationship between the stimulus and the subsequent spike in the neuron has converged. The essential information is the dependence of the recovery time from pulse onset until the next spike as a function of the delay between the previous spike and the onset of the applied pulse. Experimental fPRCs in Aplysia pacemaker neurons were different from single-pulse PRCs, principally due to adaptation. In the biological neuron, convergence to the fully adapted recovery interval was slower at some phases than that at others because the change in the effective intrinsic period due to adaptation changes the effective phase resetting in a way that opposes and slows the effects of adaptation. The fPRCs for two isolated adapting model neurons were used to predict the existence and stability of 1:1 phase-locked network activity when the two neurons were coupled. A stability criterion was derived by linearizing a coupled map based on the fPRC and the existence and stability criteria were successfully tested in two-simulated-neuron networks with reciprocal inhibition or excitation. The fPRC is the first PRC-based tool that can account for adaptation in analyzing networks of neural oscillators.

  1. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    PubMed

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.

  2. Analysis of connectivity map: Control to glutamate injured and phenobarbital treated neuronal network

    NASA Astrophysics Data System (ADS)

    Kamal, Hassan; Kanhirodan, Rajan; Srinivas, Kalyan V.; Sikdar, Sujit K.

    2010-04-01

    We study the responses of a cultured neural network when it is exposed to an epileptogenic glutamate injury, causing epilepsy, and to subsequent treatment with phenobarbital, by constructing a connectivity map of the neurons using the correlation matrix. This study is particularly useful for understanding pharmaceutical drug-induced changes in neuronal network properties, with insights into changes at the systems-biology level.
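
    A minimal version of such a correlation-based map (details are ours, not the authors') thresholds the correlation matrix of binned per-electrode firing to obtain both a weighted and a boolean connectivity map; maps from different conditions (e.g. control, injured, treated) can then be compared by their edge counts or correlation strengths.

      import numpy as np

      def connectivity_map(binned_rates, threshold=0.3):
          # binned_rates: array of shape (n_neurons_or_electrodes, n_time_bins)
          corr = np.corrcoef(binned_rates)
          np.fill_diagonal(corr, 0.0)                # ignore self-correlations
          return corr, np.abs(corr) >= threshold     # weighted map and boolean adjacency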

  3. Analysis on the synchronized network of Hindmarsh-Rose neuronal models

    NASA Astrophysics Data System (ADS)

    Feng, Youceng; Li, Wei

    2015-04-01

    We studied a network of pulse-coupled Hindmarsh-Rose neurons and discovered that all that matters for the onset of complete synchrony is the number of signals, k, received by each neuron. This is independent of all other details of the network structure.

  4. Numbers And Gains Of Neurons In Winner-Take-All Networks

    NASA Technical Reports Server (NTRS)

    Brown, Timothy X.

    1993-01-01

    This report presents a theoretical study of the gains required of neurons to implement a winner-take-all electronic neural network of a given size, and of the related question of the maximum size of a winner-take-all network in which the neurons have a specified sigmoid transfer or response function with a specified gain.

  5. A self-adapting approach for the detection of bursts and network bursts in neuronal cultures.

    PubMed

    Pasquale, Valentina; Martinoia, Sergio; Chiappalone, Michela

    2010-08-01

    Dissociated networks of neurons typically exhibit bursting behavior, whose features are strongly influenced by the age of the culture, by chemical/electrical stimulation or by environmental conditions. To help the experimenter in identifying the changes possibly induced by specific protocols, we developed a self-adapting method for detecting both bursts and network bursts from electrophysiological activity recorded by means of micro-electrode arrays. The algorithm is based on the computation of the logarithmic inter-spike interval histogram and automatically detects the best threshold to distinguish between inter- and intra-burst inter-spike intervals for each recording channel of the array. An analogous procedure is followed for the detection of network bursts, looking for sequences of closely spaced single-channel bursts. We tested our algorithm on recordings of spontaneous as well as chemically stimulated activity, comparing its performance to other methods available in the literature.
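
    In the same spirit (a simplified sketch, not the published algorithm), the function below builds the histogram of log inter-spike intervals, places an adaptive threshold at the valley between the intra- and inter-burst modes, and groups spikes separated by sub-threshold ISIs into bursts; it assumes enough spikes for the histogram to show two modes.

      import numpy as np

      def detect_bursts(spike_times, n_bins=50, min_spikes=3):
          t = np.sort(np.asarray(spike_times, float))
          isi = np.diff(t)
          counts, edges = np.histogram(np.log10(isi[isi > 0]), bins=n_bins)
          p1, p2 = sorted(np.argsort(counts)[-2:])        # two largest histogram modes
          valley = p1 + int(np.argmin(counts[p1:p2 + 1])) # crude valley search between them
          isi_threshold = 10 ** edges[valley + 1]         # adaptive intra-burst ISI cutoff
          bursts, current = [], [0]
          for k, gap in enumerate(isi, start=1):
              if gap <= isi_threshold:
                  current.append(k)
              else:
                  if len(current) >= min_spikes:
                      bursts.append((t[current[0]], t[current[-1]]))
                  current = [k]
          if len(current) >= min_spikes:
              bursts.append((t[current[0]], t[current[-1]]))
          return isi_threshold, bursts                    # cutoff and (start, end) times of bursts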

  6. Voltage-sensitive dye recording from networks of cultured neurons

    NASA Astrophysics Data System (ADS)

    Chien, Chi-Bin

    This thesis describes the development and testing of a sensitive apparatus for recording electrical activity from microcultures of rat superior cervical ganglion (SCG) neurons by using voltage-sensitive fluorescent dyes. The apparatus comprises a feedback-regulated mercury arc light source, an inverted epifluorescence microscope, a novel fiber-optic camera with discrete photodiode detectors, and low-noise preamplifiers. Using an NA 0.75 objective and illuminating at 10 W/cm² with the 546 nm mercury line, a typical SCG neuron stained with the styryl dye RH423 gives a detected photocurrent of 1 nA; the light source and optical detectors are quiet enough that the shot noise in this photocurrent, about 0.03% rms, dominates. The design, theory, and performance of this dye-recording apparatus are discussed in detail. Styryl dyes such as RH423 typically give signals of 1%/100 mV on these cells; the signals are linear in membrane potential, but do not appear to arise from a purely electrochromic mechanism. Given this voltage sensitivity and the noise level of the apparatus, it should be possible to detect both action potentials and subthreshold synaptic potentials from SCG cell bodies. In practice, dye recording can easily detect action potentials from every neuron in an SCG microculture, but small synaptic potentials are obscured by dye signals from the dense network of axons. In another microculture system that does not have such long and complex axons, this dye-recording apparatus should be able to detect synaptic potentials, making it possible to noninvasively map the synaptic connections in a microculture, and thus to study long-term synaptic plasticity.
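
    As a quick plausibility check on the quoted noise figure, the relative rms shot noise of a photocurrent I over bandwidth B is sqrt(2qB/I). The recording bandwidth is not stated in the record, so it is treated as a free assumption below; a bandwidth of a few hundred hertz reproduces the ~0.03% rms figure for a 1 nA photocurrent.

```python
import math

Q_E = 1.602e-19   # elementary charge (C)

def relative_shot_noise(photocurrent_a, bandwidth_hz):
    """Relative rms shot noise of a photocurrent:
    i_rms = sqrt(2 q I B), hence i_rms / I = sqrt(2 q B / I)."""
    return math.sqrt(2.0 * Q_E * bandwidth_hz / photocurrent_a)

# 1 nA detected photocurrent; the recording bandwidth is an assumed value
for bw in (100.0, 300.0, 1000.0):
    print(f"{bw:6.0f} Hz bandwidth -> {100.0 * relative_shot_noise(1e-9, bw):.3f} % rms")
# a bandwidth of roughly 300 Hz reproduces the ~0.03 % rms figure quoted above
```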

  7. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model

    PubMed Central

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field-Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits both the network size and the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the network execution speed close to real time and maintaining high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models for investigating the effect of different neurophysiological mechanisms, such as voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, such as parallelism and reconfigurability, our approach makes the FPGA-based system a suitable candidate for studying neural control of cognitive robots and systems. PMID:25484854
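
    The "step-by-step integration" mentioned above amounts to advancing the H-H state with small explicit time steps. The sketch below does this in floating-point Python for a single standard H-H compartment purely to illustrate the update; an FPGA implementation would evaluate the exponential rate functions with CORDIC and fixed-point arithmetic, which is not shown here.

```python
import math

def hh_step(v, m, h, n, i_ext, dt=0.01):
    """One explicit-Euler step of a standard single-compartment Hodgkin-Huxley
    model (V in mV, t in ms, currents in uA/cm^2, C_m = 1 uF/cm^2).  An FPGA
    design would evaluate the exponentials with CORDIC in fixed point; here
    they are plain floating-point calls."""
    a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    i_na = 120.0 * m**3 * h * (v - 50.0)     # sodium current
    i_k = 36.0 * n**4 * (v + 77.0)           # potassium current
    i_l = 0.3 * (v + 54.4)                   # leak current
    v += dt * (i_ext - i_na - i_k - i_l)
    m += dt * (a_m * (1.0 - m) - b_m * m)
    h += dt * (a_h * (1.0 - h) - b_h * h)
    n += dt * (a_n * (1.0 - n) - b_n * n)
    return v, m, h, n

# drive the neuron with a constant 10 uA/cm^2 current and count spikes
v, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, above = 0, False
for _ in range(int(200.0 / 0.01)):           # 200 ms of simulated time
    v, m, h, n = hh_step(v, m, h, n, i_ext=10.0)
    if v > 0.0 and not above:
        spikes += 1
    above = v > 0.0
print(spikes, "spikes in 200 ms")
```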

  9. Roles of inhibitory neurons in rewiring-induced synchronization in pulse-coupled neural networks.

    PubMed

    Kanamaru, Takashi; Aihara, Kazuyuki

    2010-05-01

    The roles of inhibitory neurons in synchronous firing are examined in a network of excitatory and inhibitory neurons with Watts and Strogatz's rewiring. By examining the persistence of the synchronous firing that exists in the random network, it was found that there is a probability of rewiring at which a transition between the synchronous state and the asynchronous state takes place, and the dynamics of the inhibitory neurons play an important role in determining this probability.
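
    The rewiring procedure referred to above can be sketched as follows: start from a ring lattice and rewire each local edge with probability p, so that p interpolates between a regular and an essentially random network. This is the generic Watts-Strogatz construction; the paper's exact network parameters are not given in the abstract, and the values below are placeholders.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Watts-Strogatz construction: start from a ring lattice where every
    node connects to its k nearest neighbours (k even), then rewire each
    lattice edge with probability p to a uniformly chosen new target.
    Returns an adjacency set per node."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            old = (i + j) % n
            if rng.random() < p and old in adj[i]:
                new = rng.randrange(n)
                while new == i or new in adj[i]:
                    new = rng.randrange(n)
                adj[i].discard(old); adj[old].discard(i)
                adj[i].add(new); adj[new].add(i)
    return adj

# p = 0 keeps the regular lattice, p = 1 gives an essentially random graph;
# intermediate p is the small-world regime in which the reported transition occurs
net = watts_strogatz(n=200, k=10, p=0.1)
print(sum(len(neigh) for neigh in net.values()) // 2, "edges")
```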

  10. New methods for the computer-assisted 3-D reconstruction of neurons from confocal image stacks.

    PubMed

    Schmitt, Stephan; Evers, Jan Felix; Duch, Carsten; Scholz, Michael; Obermayer, Klaus

    2004-12-01

    Exact geometrical reconstructions of neuronal architecture are indispensable for the investigation of neuronal function. Neuronal shape is important for the wiring of networks, and dendritic architecture strongly affects neuronal integration and firing properties, as demonstrated by modeling approaches. Confocal microscopy allows neurons to be scanned with submicron resolution. However, it is still a tedious task to reconstruct complex dendritic trees with fine structures just above voxel resolution. We present a framework assisting the reconstruction. User time investment is strongly reduced by automatic methods, which fit a skeleton and a surface to the data, while the user can interact and thus keeps full control to ensure a high-quality reconstruction. The reconstruction process consists of a successive gain of metric parameters. First, a structural description of the neuron is built, including the topology and the exact dendritic lengths and diameters. We use generalized cylinders with circular cross sections. The user provides a rough initialization by marking the branching points. The axes and radii are fitted to the data by minimizing an energy functional, which is regularized by a smoothness constraint. The investigation of proximity to other structures throughout dendritic trees requires a precise surface reconstruction. In order to achieve an accuracy of 0.1 µm and below, we additionally implemented a segmentation algorithm based on geodesic active contours that allows for arbitrary cross sections and uses locally adapted thresholds. In summary, this new reconstruction tool saves time and increases quality as compared to other methods that have previously been applied to real neurons.

  11. A hierarchical neuronal network for planning behavior

    PubMed Central

    Dehaene, Stanislas; Changeux, Jean-Pierre

    1997-01-01

    Planning a goal-directed sequence of behavior is a higher function of the human brain that relies on the integrity of prefrontal cortical areas. In the Tower of London test, a puzzle in which beads sliding on pegs must be moved to match a designated goal configuration, patients with lesioned prefrontal cortex show deficits in planning a goal-directed sequence of moves. We propose a neuronal network model of sequence planning that passes this test and, when lesioned, fails in a way that mimics prefrontal patients’ behavior. Our model comprises a descending planning system with hierarchically organized plan, operation, and gesture levels, and an ascending evaluative system that analyzes the problem and computes internal reward signals that index the correct/erroneous status of the plan. Multiple parallel pathways connecting the evaluative and planning systems amend the plan and adapt it to the current problem. The model illustrates how specialized hierarchically organized neuronal assemblies may collectively emulate central executive or supervisory functions of the human brain. PMID:9371839

  12. Simulation of restricted neural networks with reprogrammable neurons

    SciTech Connect

    Hartline, D.K.

    1989-05-01

    This paper describes a network model composed of reprogrammable neurons. It incorporates the following design features: spikes can be generated by a model representing repetitive firing at axon (and dendritic) trigger zones; active responses (plateau potentials, delaying mechanisms) are simulated with Hodgkin-Huxley-type kinetics; and synaptic interactions, both spike-mediated and non-spiking chemical ('chemotonic'), simulate transmitter release and binding to postsynaptic receptors. Facilitation and antifacilitation of spike-mediated postsynaptic potentials (PSPs) are included. Chemical pools are used to simulate second-messenger systems, trapping of ions in extracellular spaces, and electrogenic pumps, as well as biochemical reaction chains of quite general character. Modulation of any of the parameters of any compartment can be effected through the pools. Intracellular messengers of three kinds are simulated explicitly: those produced by voltage-gated processes (e.g., Ca); those dependent on transmitter (or hormone) binding; and those dependent on other internal messengers (e.g., internally released Ca and enzymatically activated pathways).

  13. Interrogation Methods and Terror Networks

    NASA Astrophysics Data System (ADS)

    Baccara, Mariagiovanna; Bar-Isaac, Heski

    We examine how the structure of terror networks varies with legal limits on interrogation and the ability of authorities to extract information from detainees. We assume that terrorist networks are designed to respond optimally to a tradeoff caused by information exchange: Diffusing information widely leads to greater internal efficiency, but it leaves the organization more vulnerable to law enforcement. The extent of this vulnerability depends on the law enforcement authority’s resources, strategy and interrogation methods. Recognizing that the structure of a terrorist network responds to the policies of law enforcement authorities allows us to begin to explore the most effective policies from the authorities’ point of view.

  14. Multiple network interface core apparatus and method

    SciTech Connect

    Underwood, Keith D; Hemmert, Karl Scott

    2011-04-26

    A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.

  15. Delay-induced multiple stochastic resonances on scale-free neuronal networks.

    PubMed

    Wang, Qingyun; Perc, Matjaz; Duan, Zhisheng; Chen, Guanrong

    2009-06-01

    We study the effects of periodic subthreshold pacemaker activity and time-delayed coupling on stochastic resonance over scale-free neuronal networks. As the two extreme options, we introduce the pacemaker, respectively, to the neuron with the highest degree and to one of the neurons with the lowest degree within the network, but we also consider the case when all neurons are exposed to the periodic forcing. In the absence of delay, we show that an intermediate intensity of noise is able to optimally assist the pacemaker in imposing its rhythm on the whole ensemble, irrespective of its placement, thus providing evidence of stochastic resonance on scale-free neuronal networks. Interestingly, if the forcing, in the form of a periodic pulse train, is introduced to all neurons forming the network, the stochastic resonance decreases compared to the case when only a single neuron is paced. Moreover, we show that finite delays in coupling can significantly affect stochastic resonance on scale-free neuronal networks. In particular, appropriately tuned delays can induce multiple stochastic resonances independently of the placement of the pacemaker, but they can also altogether destroy stochastic resonance. Delay-induced multiple stochastic resonances manifest as well-expressed maxima of the correlation measure, appearing at every multiple of the pacemaker period. We argue that fine-tuned delays and locally active pacemakers are vital for assuring optimal conditions for stochastic resonance on complex neuronal networks.

  16. Optimal balance of the striatal medium spiny neuron network.

    PubMed

    Ponzi, Adam; Wickens, Jeffery R

    2013-04-01

    Slowly varying activity in the striatum, the main basal ganglia input structure, is important for the learning and execution of movement sequences. Striatal medium spiny neurons (MSNs) form cell assemblies whose population firing rates vary coherently on slow, behaviourally relevant timescales. It has been shown that such activity emerges in a model of a local MSN network, but only at realistic connectivities of 10-20% and only when MSN-generated inhibitory post-synaptic potentials (IPSPs) are realistically sized. Here we suggest a reason for this. We investigate how MSN network-generated population activity interacts with temporally varying cortical driving activity, as would occur in a behavioural task. We find that at unrealistically high connectivity a stable winners-take-all-type regime is found, in which network activity separates into fixed stimulus-dependent regularly firing and quiescent components. In this regime only a small number of population firing rate components interact with cortical stimulus variations. Around 15% connectivity a transition to a more dynamically active regime occurs, where all cells constantly switch between activity and quiescence. In this low-connectivity regime, MSN population components wander randomly and here too are independent of variations in cortical driving. Only in the transition regime do weak changes in cortical driving interact with many population components, so that sequential cell assemblies are reproducibly activated for many hundreds of milliseconds after stimulus onset and peri-stimulus time histograms display strong stimulus and temporal specificity. We show that, remarkably, this activity is maximized at striatally realistic connectivities and IPSP sizes. Thus, we suggest the local MSN network has optimal characteristics: it is neither too stable to respond in a dynamically complex, temporally extended way to cortical variations, nor is it too unstable to respond in a consistent, repeatable way. Rather, it is

  17. Coarse-grained event tree analysis for quantifying Hodgkin-Huxley neuronal network dynamics.

    PubMed

    Sun, Yi; Rangan, Aaditya V; Zhou, Douglas; Cai, David

    2012-02-01

    We present an event tree analysis for studying the dynamics of Hodgkin-Huxley (HH) neuronal networks. Our study relies on a coarse-grained projection to event trees, and to the event chains that comprise these trees, by using a statistical collection of spatial-temporal sequences of relevant physiological observables (such as spiking sequences of multiple neurons). This projection can retain information about network dynamics that covers multiple features, swiftly and robustly. We demonstrate that, for even small differences in inputs, some dynamical regimes of HH networks contain sufficiently high-order statistics, as reflected in event chains within the event tree analysis. Therefore, this analysis is effective in discriminating small differences in inputs. Moreover, we use event trees to analyze the results computed from an efficient library-based numerical method proposed in our previous work, in which a pre-computed high-resolution data library of typical neuronal trajectories during the interval of an action potential (spike) allows us to avoid resolving the spikes in detail. In this way, we can evolve the HH networks using time steps one order of magnitude larger than the typical time steps used for resolving the trajectories without the library, while achieving comparable statistical accuracy in terms of average firing rate and power spectra of voltage traces. Our numerical simulation results show that the library method is efficient in the sense that the results generated with these much larger time steps contain sufficiently high-order statistical structure of firing events, similar to the results obtained using a regular HH solver. We use our event tree analysis to demonstrate these statistical similarities.

  18. Lucifer yellow filling of immunohistochemically pre-labeled neurons: a new method to characterize neuronal subpopulations.

    PubMed

    Galuske, R A; Delius, J A; Singer, W

    1993-07-01

    We describe a new technique for the morphological characterization of immunohistochemically labeled neuron populations. We demonstrate that it is possible to fill neurons iontophoretically with Lucifer Yellow (LY) in fixed slices of cat visual cortex after the respective cells have been identified by indirect immunofluorescence for the neural cell adhesion molecule N-CAM 180, with the VC1.1 antibody, or with an antibody against glutamate decarboxylase (GAD). Morphological analysis of the injected cells at the light and electron microscopic level revealed that the N-CAM 180-positive neurons share the features of neuropeptidergic cortical interneurons. Depending on the antibody applied, the immunohistochemical treatment had little or no noticeable effect on the quality of LY filling or on the preservation of morphological details of the pre-labeled cells. This makes the method described ideally suited for the light and electron microscopic examination of selected, immunologically characterized neuron subpopulations.

  19. Self-organized criticality in a network of interacting neurons

    NASA Astrophysics Data System (ADS)

    Cowan, J. D.; Neuman, J.; Kiewiet, B.; van Drongelen, W.

    2013-04-01

    This paper contains an analysis of a simple neural network that exhibits self-organized criticality. Such criticality follows from combining a simple neural network that has an excitatory feedback loop generating bistability with an anti-Hebbian synapse in its input pathway. Using the methods of statistical field theory, we show how one can formulate the stochastic dynamics of such a network as the action of a path integral, which we then investigate using renormalization group methods. The results indicate that the network exhibits hysteresis in switching back and forth between its two stable states, each of which loses its stability at a saddle-node bifurcation. The renormalization group analysis shows that the fluctuations in the neighborhood of such bifurcations have the signature of directed percolation. Thus, the network states undergo the neural analog of a phase transition in the universality class of directed percolation. The network replicates the behavior of the original sand-pile model of Bak, Tang and Wiesenfeld in that the fluctuations about the two states show power-law statistics.

  20. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation

    PubMed Central

    Bayati, Mehdi; Valizadeh, Alireza; Abbassian, Abdolhossein; Cheng, Sen

    2015-01-01

    Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity propagates synchronously through the network. This activity originates from the site of the local excitation and propagates through the network. The synchronous activity propagation persists, even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence. PMID:26089794
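
    The plasticity rule named above can be illustrated with a generic additive, pair-based STDP update; the amplitudes and time constants below are common textbook values rather than those of the cited model, which may use a different STDP formulation.

```python
import math

def stdp_update(w, dt_post_minus_pre, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Additive pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic one, depress otherwise; weights are clipped
    to [w_min, w_max].  All parameter values are generic textbook numbers."""
    if dt_post_minus_pre > 0:                      # pre before post -> LTP
        w += a_plus * math.exp(-dt_post_minus_pre / tau_plus)
    else:                                          # post before (or with) pre -> LTD
        w -= a_minus * math.exp(dt_post_minus_pre / tau_minus)
    return min(max(w, w_min), w_max)

# example spike-timing differences (ms), starting from a weight of 0.5
for dt in (+5.0, +20.0, -5.0, -20.0):
    print(f"dt = {dt:+6.1f} ms -> w = {stdp_update(0.5, dt):.3f}")
```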

  2. Parallel hierarchical method in networks

    NASA Astrophysics Data System (ADS)

    Malinochka, Olha; Tymchenko, Leonid

    2007-09-01

    The method of parallel-hierarchical Q-transformation offers a new approach to the creation of a computing medium: parallel-hierarchical (PH) networks, investigated as a model of a neuron-like data-processing scheme [1-5]. The approach has a number of advantages compared with other methods of forming neuron-like media (for example, the known methods of forming artificial neural networks). Its main advantage is the use of the dynamics of multilevel parallel interaction of information signals at different hierarchy levels of computer networks, which makes it possible to exploit such natural features of the organization of computation as the topographic nature of mapping, simultaneity (parallelism) of signal operation, the inlaid structure and rough hierarchy of the cortex, and a spatially and temporally correlated mechanism of perception and training [5].

  3. Experiments in clustered neuronal networks: A paradigm for complex modular dynamics

    NASA Astrophysics Data System (ADS)

    Teller, Sara; Soriano, Jordi

    2016-06-01

    Uncovering the interplay between activity and connectivity is one of the major challenges in neuroscience. To deepen the understanding of how a neuronal circuit shapes network dynamics, neuronal cultures have emerged as remarkable systems given their accessibility and ease of manipulation. An attractive configuration of these in vitro systems consists of an ensemble of interconnected clusters of neurons. Using calcium fluorescence imaging to monitor spontaneous activity in these clustered neuronal networks, we were able to draw functional maps and reveal their topological features. We also observed that these networks exhibit hierarchical modular dynamics, in which clusters fire in small groups that shape characteristic communities in the network. The structure and stability of these communities is sensitive to chemical or physical action, and therefore their analysis may serve as a proxy for network health. Indeed, the combination of all these approaches is helping to develop models to quantify damage upon network degradation, with promising applications for the study of neurological disorders in vitro.

  4. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity

    PubMed Central

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols in a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852

  5. The frequency preference of neurons and synapses in a recurrent oscillatory network.

    PubMed

    Tseng, Hua-an; Martinez, Diana; Nadim, Farzan

    2014-09-17

    A variety of neurons and synapses shows a maximal response at a preferred frequency, generally considered to be important in shaping network activity. We are interested in whether all neurons and synapses in a recurrent oscillatory network can have preferred frequencies and, if so, whether these frequencies are the same or correlated, and whether they influence the network activity. We address this question using identified neurons in the pyloric network of the crab Cancer borealis. Previous work has shown that the pyloric pacemaker neurons exhibit membrane potential resonance whose resonance frequency is correlated with the network frequency. The follower lateral pyloric (LP) neuron makes reciprocally inhibitory synapses with the pacemakers. We find that LP shows resonance at a higher frequency than the pacemakers and the network frequency falls between the two. We also find that the reciprocal synapses between the pacemakers and LP have preferred frequencies but at significantly lower values. The preferred frequency of the LP to pacemaker synapse is correlated with the presynaptic preferred frequency, which is most pronounced when the peak voltage of the LP waveform is within the dynamic range of the synaptic activation curve and a shift in the activation curve by the modulatory neuropeptide proctolin shifts the frequency preference. Proctolin also changes the power of the LP neuron resonance without significantly changing the resonance frequency. These results indicate that different neuron types and synapses in a network may have distinct preferred frequencies, which are subject to neuromodulation and may interact to shape network oscillations.

  6. Chimera patterns in two-dimensional networks of coupled neurons

    NASA Astrophysics Data System (ADS)

    Schmidt, Alexander; Kasimatis, Theodoros; Hizanidis, Johanne; Provata, Astero; Hövel, Philipp

    2017-03-01

    We discuss synchronization patterns in networks of FitzHugh-Nagumo and leaky integrate-and-fire oscillators coupled in a two-dimensional toroidal geometry. A common feature between the two models is the presence of fast and slow dynamics, a typical characteristic of neurons. Earlier studies have demonstrated that both models when coupled nonlocally in one-dimensional ring networks produce chimera states for a large range of parameter values. In this study, we give evidence of a plethora of two-dimensional chimera patterns of various shapes, including spots, rings, stripes, and grids, observed in both models, as well as additional patterns found mainly in the FitzHugh-Nagumo system. Both systems exhibit multistability: For the same parameter values, different initial conditions give rise to different dynamical states. Transitions occur between various patterns when the parameters (coupling range, coupling strength, refractory period, and coupling phase) are varied. Many patterns observed in the two models follow similar rules. For example, the diameter of the rings grows linearly with the coupling radius.

  8. Interplay between population firing stability and single neuron dynamics in hippocampal networks.

    PubMed

    Slomowitz, Edden; Styr, Boaz; Vertkin, Irena; Milshtein-Parush, Hila; Nelken, Israel; Slutsky, Michael; Slutsky, Inna

    2015-01-03

    Neuronal circuits' ability to maintain the delicate balance between stability and flexibility in changing environments is critical for normal neuronal functioning. However, to what extent individual neurons and neuronal populations maintain internal firing properties remains largely unknown. In this study, we show that distributions of spontaneous population firing rates and synchrony are subject to accurate homeostatic control following increase of synaptic inhibition in cultured hippocampal networks. Reduction in firing rate triggered synaptic and intrinsic adaptive responses operating as global homeostatic mechanisms to maintain firing macro-stability, without achieving local homeostasis at the single-neuron level. Adaptive mechanisms, while stabilizing population firing properties, reduced short-term facilitation essential for synaptic discrimination of input patterns. Thus, invariant ongoing population dynamics emerge from intrinsically unstable activity patterns of individual neurons and synapses. The observed differences in the precision of homeostatic control at different spatial scales challenge cell-autonomous theory of network homeostasis and suggest the existence of network-wide regulation rules.

  9. Causal Interrogation of Neuronal Networks and Behavior through Virally Transduced Ivermectin Receptors.

    PubMed

    Obenhaus, Horst A; Rozov, Andrei; Bertocchi, Ilaria; Tang, Wannan; Kirsch, Joachim; Betz, Heinrich; Sprengel, Rolf

    2016-01-01

    The causal interrogation of neuronal networks involved in specific behaviors requires the spatially and temporally controlled modulation of neuronal activity. For long-term manipulation of neuronal activity, chemogenetic tools provide a reasonable alternative to short-term optogenetic approaches. Here we show that virus mediated gene transfer of the ivermectin (IVM) activated glycine receptor mutant GlyRα1 (AG) can be used for the selective and reversible silencing of specific neuronal networks in mice. In the striatum, dorsal hippocampus, and olfactory bulb, GlyRα1 (AG) promoted IVM dependent effects in representative behavioral assays. Moreover, GlyRα1 (AG) mediated silencing had a strong and reversible impact on neuronal ensemble activity and c-Fos activation in the olfactory bulb. Together our results demonstrate that long-term, reversible and re-inducible neuronal silencing via GlyRα1 (AG) is a promising tool for the interrogation of network mechanisms underlying the control of behavior and memory formation.

  10. Macroscopic self-oscillations and aging transition in a network of synaptically coupled quadratic integrate-and-fire neurons

    NASA Astrophysics Data System (ADS)

    Ratas, Irmantas; Pyragas, Kestutis

    2016-09-01

    We analyze the dynamics of a large network of coupled quadratic integrate-and-fire neurons, which represent the canonical model for class I neurons near the spiking threshold. The network is heterogeneous in that it includes both inherently spiking and excitable neurons. The coupling is global via synapses that take into account the finite width of synaptic pulses. Using a recently developed reduction method based on the Lorentzian ansatz, we derive a closed system of equations for the neuron's firing rate and the mean membrane potential, which are exact in the infinite-size limit. The bifurcation analysis of the reduced equations reveals a rich scenario of asymptotic behavior, the most interesting of which is the macroscopic limit-cycle oscillations. It is shown that the finite width of synaptic pulses is a necessary condition for the existence of such oscillations. The robustness of the oscillations against aging damage, which transforms spiking neurons into nonspiking neurons, is analyzed. The validity of the reduced equations is confirmed by comparing their solutions with the solutions of microscopic equations for the finite-size networks.
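
    The reduction referred to above yields a closed pair of equations for the population firing rate r and the mean membrane potential v. The sketch below integrates the instantaneous-synapse form of these equations (the Lorentzian-ansatz reduction of Montbrió, Pazó and Roxin); the cited paper additionally models synaptic pulses of finite width, which is what makes macroscopic limit-cycle oscillations possible, so the parameters and behaviour here are illustrative only.

```python
import numpy as np

def qif_mean_field(eta_bar=-5.0, delta=1.0, j=15.0, t_stop=40.0, dt=1e-3):
    """Euler integration of the exact mean-field equations for a globally
    coupled QIF network with Lorentzian heterogeneity and instantaneous
    synapses (Montbrio, Pazo & Roxin 2015 form):
        dr/dt = delta/pi + 2*r*v
        dv/dt = v**2 + eta_bar + j*r - (pi*r)**2"""
    steps = int(t_stop / dt)
    r, v = 0.1, -1.0
    trace = np.empty((steps, 2))
    for i in range(steps):
        dr = delta / np.pi + 2.0 * r * v
        dv = v**2 + eta_bar + j * r - (np.pi * r)**2
        r, v = r + dt * dr, v + dt * dv
        trace[i] = (r, v)
    return trace

trace = qif_mean_field()
print("final (r, v):", np.round(trace[-1], 3))
```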

  12. Multilaminar networks of cortical neurons integrate common inputs from sensory thalamus.

    PubMed

    Morgenstern, Nicolás A; Bourg, Jacques; Petreanu, Leopoldo

    2016-08-01

    Neurons in the thalamorecipient layers of sensory cortices integrate thalamic and recurrent cortical input. Cortical neurons form fine-scale, functionally cotuned networks, but whether interconnected cortical neurons within a column process common thalamocortical inputs is unknown. We tested how local and thalamocortical connectivity relate to each other by analyzing cofluctuations of evoked responses in cortical neurons after photostimulation of thalamocortical axons. We found that connected pairs of pyramidal neurons in layer (L) 4 of mouse visual cortex share more inputs from the dorsal lateral geniculate nucleus than nonconnected pairs. Vertically aligned connected pairs of L4 and L2/3 neurons were also preferentially contacted by the same thalamocortical axons. Our results provide a circuit mechanism for the observed amplification of sensory responses by L4 circuits. They also show that sensory information is concurrently processed in L4 and L2/3 by columnar networks of interconnected neurons contacted by the same thalamocortical axons.

  13. Pseudo-Lyapunov exponents and predictability of Hodgkin-Huxley neuronal network dynamics.

    PubMed

    Sun, Yi; Zhou, Douglas; Rangan, Aaditya V; Cai, David

    2010-04-01

    We present a numerical analysis of the dynamics of all-to-all coupled Hodgkin-Huxley (HH) neuronal networks with Poisson spike inputs. Since the dynamical vector of the system contains discontinuous variables, we propose a so-called pseudo-Lyapunov exponent, adapted from the classical definition using only continuous dynamical variables, and apply it in our numerical investigation. The numerical results for the largest Lyapunov exponent using this new definition are consistent with the dynamical regimes of the network. Three typical dynamical regimes (asynchronous, chaotic, and synchronous) are found as the synaptic coupling strength increases from weak to strong. We use the pseudo-Lyapunov exponent and the power spectrum analysis of voltage traces to characterize the types of network behavior. In the nonchaotic (asynchronous or synchronous) dynamical regimes, i.e., the weak or strong coupling limits, the pseudo-Lyapunov exponent is negative and there is good numerical convergence of the solution in the trajectory-wise sense using our numerical methods. Consequently, in these regimes the evolution of neuronal networks is reliable. For the chaotic dynamical regime with intermediate strong coupling, the pseudo-Lyapunov exponent is positive, there is no numerical convergence of the solution, and only statistical quantifications of the numerical results are reliable. Finally, we present numerical evidence that the value of the pseudo-Lyapunov exponent coincides with that of the standard Lyapunov exponent for the systems we have been able to examine.
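
    For context, a standard way of estimating the largest Lyapunov exponent of a smooth system is the two-trajectory (Benettin) renormalization sketched below on the Lorenz system; the pseudo-Lyapunov exponent discussed above follows the same growth-and-renormalize idea but restricts the perturbation to the continuous variables of the network. The step size, renormalization interval, and test system here are illustrative choices, not those of the paper.

```python
import numpy as np

def lorenz_step(state, dt=1e-3, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system, used here only as a test system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def largest_lyapunov(step, x0, d0=1e-8, n_steps=100000, renorm_every=10, dt=1e-3):
    """Benettin-style estimate: evolve a reference and a perturbed trajectory,
    periodically measure the growth of their separation, renormalise it back
    to d0, and average the accumulated log-growth over the total time."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    pert = rng.standard_normal(x.shape)
    y = x + d0 * pert / np.linalg.norm(pert)
    log_growth = 0.0
    for i in range(1, n_steps + 1):
        x, y = step(x), step(y)
        if i % renorm_every == 0:
            d = np.linalg.norm(y - x)
            log_growth += np.log(d / d0)
            y = x + (y - x) * (d0 / d)        # rescale the perturbation
    return log_growth / (n_steps * dt)

lam = largest_lyapunov(lorenz_step, x0=[1.0, 1.0, 1.0])
print("largest Lyapunov exponent ~", round(lam, 2), "(literature value for Lorenz: ~0.9)")
```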

  14. Constrained synaptic connectivity in functional mammalian neuronal networks grown on patterned surfaces.

    PubMed

    Wyart, Claire; Ybert, Christophe; Bourdieu, Laurent; Herr, Catherine; Prinz, Christelle; Chatenay, Didier

    2002-06-30

    The use of ordered neuronal networks in vitro is a promising approach to study the development and the activity of small neuronal assemblies. However, in previous attempts, sufficient growth control and physiological maturation of neurons could not be achieved. Here we describe an original protocol in which polylysine patterns confine the adhesion of cellular bodies to prescribed spots and the neuritic growth to thin lines. Hippocampal neurons in these networks are maintained healthy in serum free medium up to 5 weeks in vitro. Electrophysiology and immunochemistry show that neurons exhibit mature excitatory and inhibitory synapses and calcium imaging reveals spontaneous activity of neurons in isolated networks. We demonstrate that neurons in these geometrical networks form functional synapses preferentially to their first neighbors. We have, therefore, established a simple and robust protocol to constrain both the location of neuronal cell bodies and their pattern of connectivity. Moreover, the long term maintenance of the geometry and the physiology of the networks raises the possibility of new applications for systematic screening of pharmacological agents and for electronic to neuron devices.

  15. SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    De Angelis, Francesco

    2017-06-01

    Biological systems are analysed mainly by optical, chemical or electrical methods. Normally each of these techniques provides only partial information about the environment, while combined investigations could reveal new phenomena occurring in complex systems such as in-vitro neuronal networks. Aiming at merging optical and electrical investigations of biological samples, we introduced three-dimensional plasmonic nanoantennas on CMOS-based electrical sensors [1]. The overall device is then capable of enhanced Raman analysis of cultured cells combined with electrical recording of neuronal activity. The Raman measurements show a much higher sensitivity when performed on the tip of the nanoantenna with respect to the flat substrate [2]; this effect is a combination of the high plasmonic field enhancement and of the tight adhesion of cells on the nanoantenna tip. Furthermore, when plasmonic opto-poration is exploited [3], the 3D nanoelectrodes are able to penetrate through the cell membrane, thus accessing the intracellular environment. Our latest results (unpublished) show that the technique is completely non-invasive and solves many problems related to state-of-the-art intracellular recording approaches on large neuronal networks. This research received funding from the ERC-IDEAS Program "Neuro-Plasmonics" [Grant n. 616213]. References: [1] M. Dipalo, G. C. Messina, H. Amin, R. La Rocca, V. Shalabaeva, A. Simi, A. Maccione, P. Zilio, L. Berdondini, F. De Angelis, Nanoscale 2015, 7, 3703. [2] R. La Rocca, G. C. Messina, M. Dipalo, V. Shalabaeva, F. De Angelis, Small 2015, 11, 4632. [3] G. C. Messina et al., Spatially, Temporally, and Quantitatively Controlled Delivery of

  16. Dynamic State Transitions in the Nervous System: From Ion Channels to Neurons to Networks

    NASA Astrophysics Data System (ADS)

    Århem, Peter; Braun, Hans A.; Huber, Martin T.; Liljenström, Hans

    The following sections are included: * Introduction * Ion channels: The microscopic scale * The variety of ion channels * Channel kinetics * Neurons: The mesoscopic scale * The feedback loops between membrane potential and ion currents * Neuron models: Concepts and examples * Impulse pattern modulation by ion channel densities * Oscillatory patterns * Irregular patterns * Impulse pattern modulation by subthreshold oscillations * The cold receptor model * Deterministic patterns and noise induced state-transitions on temperature scaling * Neuronal networks: The macroscopic scale * Random channel events cause network state transitions * A hippocampal neural network model * Simulating noise-induced state transitions * Functional significance of macroscopic neurodynamics * Conclusions * Appendix A: Computation of the neuron models * Hippocampal neuron model * The cold receptor model * Appendix B: Neural network model * References

  17. Autaptic self-feedback-induced synchronization transitions in Newman-Watts neuronal network with time delays

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Gong, Yubing; Wu, Yanan

    2015-04-01

    An autapse is a special synapse that connects a neuron to itself. In this work, we numerically study the effect of chemical autapses on the synchronization of a Newman-Watts Hodgkin-Huxley neuron network with time delays. It is found that the neurons exhibit synchronization transitions as the autaptic self-feedback delay is varied, and that this phenomenon is enhanced when the autaptic self-feedback strength increases. Moreover, the phenomenon becomes strongest when the network time delay or coupling strength is optimal. It is also found that the synchronization transitions induced by network time delay can be enhanced by autaptic activity and become strongest when the autaptic delay is optimal. These results show that autaptic delayed self-feedback activity can intermittently enhance and reduce the synchronization of the neuronal network and hence plays an important role in regulating the synchronization of the neurons. These findings may have implications for information processing and transmission in neural systems.

  18. Dynamic range in small-world networks of Hodgkin-Huxley neurons with chemical synapses

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Viana, R. L.; Lopes, S. R.; Batista, A. M.

    2014-09-01

    According to Stevens' law the relationship between stimulus and response is a power-law within an interval called the dynamic range. The dynamic range of sensory organs is found to be larger than that of a single neuron, suggesting that the network structure plays a key role in the behavior of both the scaling exponent and the dynamic range of neuron assemblies. In order to verify computationally the relationships between stimulus and response for spiking neurons, we investigate small-world networks of neurons described by the Hodgkin-Huxley equations connected by chemical synapses. We found that the dynamic range increases with the network size, suggesting that the enhancement of the dynamic range observed in sensory organs, with respect to single neurons, is an emergent property of complex network dynamics.
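
    The dynamic range quoted in such studies is usually defined as Delta = 10*log10(S_0.9/S_0.1), where S_0.1 and S_0.9 are the stimulus intensities producing 10% and 90% of the saturated response. A small helper implementing that definition on a toy saturating response curve is sketched below; the exact response measure used by the authors may differ.

```python
import numpy as np

def dynamic_range(stimuli, responses, low=0.1, high=0.9):
    """Dynamic range in dB: Delta = 10*log10(S_high / S_low), where S_low and
    S_high are the stimulus intensities producing 10% and 90% of the saturated
    response relative to the baseline.  The monotonic response curve is
    inverted by linear interpolation."""
    stimuli = np.asarray(stimuli, dtype=float)
    responses = np.asarray(responses, dtype=float)
    f0, fmax = responses.min(), responses.max()
    targets = f0 + np.array([low, high]) * (fmax - f0)
    s_low, s_high = np.interp(targets, responses, stimuli)
    return 10.0 * np.log10(s_high / s_low)

# toy saturating (Hill-type) response curve over six decades of stimulus
s = np.logspace(-3, 3, 200)
f = s / (s + 1.0)
print(round(dynamic_range(s, f), 1), "dB   (about 19 dB for this curve)")
```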

  19. Global and local synchrony of coupled neurons in small-world networks.

    PubMed

    Masuda, Naoki; Aihara, Kazuyuki

    2004-04-01

    Synchronous firing of neurons is thought to play important functional roles such as feature binding and switching of cognitive states. Although synchronization has mainly been investigated so far using model neurons with simple connection topology, real neural networks have more complex structures. Here we examine the behavior of pulse-coupled leaky integrate-and-fire neurons with various network structures. We first show that the dispersion of the number of connections for neurons influences dynamical behavior even if other major topological statistics are kept fixed. The rewiring probability parameter representing the randomness of networks bridges two spatially opposite frameworks: precise local synchrony and rough global synchrony. Finally, cooperation of the global connections and the local clustering property, which is prominent in small-world networks, forces synchrony of distant neuronal groups receiving coherent inputs.

  20. Effects of channel noise on firing coherence of small-world Hodgkin-Huxley neuronal networks

    NASA Astrophysics Data System (ADS)

    Sun, X. J.; Lei, J. Z.; Perc, M.; Lu, Q. S.; Lv, S. J.

    2011-01-01

    We investigate the effects of channel noise on the firing coherence of Watts-Strogatz small-world networks consisting of biophysically realistic HH neurons having a fraction of blocked voltage-gated sodium and potassium ion channels embedded in their neuronal membranes. The intensity of channel noise is determined by the number of non-blocked ion channels, which depends on the fraction of working ion channels and the membrane patch size under the assumption of homogeneous ion channel density. We find that the firing coherence of the neuronal network can be either enhanced or reduced depending on the source of channel noise. As shown in this paper, sodium channel noise reduces the firing coherence of neuronal networks; in contrast, potassium channel noise enhances it. Furthermore, compared with potassium channel noise, sodium channel noise plays a dominant role in affecting the firing coherence of the neuronal network. Moreover, we show that the observed phenomena are independent of the rewiring probability.
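
    The scaling described above (noise intensity set by the number of non-blocked channels, which in turn is set by channel density, patch area, and the working fraction) can be made explicit with a simple binomial estimate of open-channel fluctuations. The densities below are values commonly used in stochastic HH studies (60 Na and 18 K channels per square micron); the open probabilities and blocking fraction are arbitrary illustrative numbers, not taken from the paper.

```python
import math

def open_fraction_noise(density_per_um2, area_um2, working_fraction, p_open):
    """Relative fluctuation of the open-channel fraction for a binomial
    population of independent two-state channels:
        std(fraction_open) / p_open = sqrt((1 - p_open) / (N * p_open)),
    with N = density * area * working_fraction non-blocked channels."""
    n_channels = density_per_um2 * area_um2 * working_fraction
    return math.sqrt((1.0 - p_open) / (n_channels * p_open))

# commonly used HH densities (Na: 60/um^2, K: 18/um^2); half the channels
# blocked; open probabilities are arbitrary illustrative operating points
for area in (1.0, 10.0, 100.0):
    na = open_fraction_noise(60.0, area, working_fraction=0.5, p_open=0.1)
    k = open_fraction_noise(18.0, area, working_fraction=0.5, p_open=0.3)
    print(f"patch {area:6.1f} um^2: Na noise {na:.1%}, K noise {k:.1%}")
```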

  1. Parametric Anatomical Modeling: a method for modeling the anatomical layout of neurons and their projections

    PubMed Central

    Pyka, Martin; Klatt, Sebastian; Cheng, Sen

    2014-01-01

    Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differ qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional, but yet biologically plausible, parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort. PMID:25309338
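
    As a purely illustrative reduction of one of PAM's outputs, the snippet below derives pairwise conduction delays from made-up 3D soma positions as straight-line distance divided by an assumed conduction velocity, plus a fixed synaptic delay. PAM itself maps projections along reconstructed layer geometry inside Blender, so none of the names or numbers below correspond to its actual API or data.

```python
import numpy as np

def conduction_delays(pre_positions, post_positions,
                      velocity_um_per_ms=500.0, synaptic_delay_ms=0.5):
    """Pairwise delays as straight-line soma-to-soma distance divided by an
    assumed conduction velocity, plus a fixed synaptic delay (ms)."""
    pre = np.asarray(pre_positions)[:, None, :]     # (n_pre, 1, 3)
    post = np.asarray(post_positions)[None, :, :]   # (1, n_post, 3)
    dist = np.linalg.norm(pre - post, axis=-1)      # distances in um
    return dist / velocity_um_per_ms + synaptic_delay_ms

# made-up soma positions (um) for two 'layers'
rng = np.random.default_rng(2)
layer_a = rng.uniform(0.0, 500.0, size=(100, 3))
layer_b = rng.uniform(600.0, 1200.0, size=(80, 3))
delays = conduction_delays(layer_a, layer_b)
print("mean delay:", round(float(delays.mean()), 2), "ms")
```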

  3. Targeting Neuronal Networks with Combined Drug and Stimulation Paradigms Guided by Neuroimaging to Treat Brain Disorders.

    PubMed

    Faingold, Carl L; Blumenfeld, Hal

    2015-10-01

    Improved therapy of brain disorders can be achieved by focusing on neuronal networks, utilizing combined pharmacological and stimulation paradigms guided by neuroimaging. Neuronal networks that mediate normal brain functions, such as hearing, interact with other networks, which is important but commonly neglected. Network interaction changes often underlie brain disorders, including epilepsy. "Conditional multireceptive" (CMR) brain areas (e.g., brainstem reticular formation and amygdala) are critical in mediating neuroplastic changes that facilitate network interactions. CMR neurons receive multiple inputs but exhibit extensive response variability due to milieu and behavioral state changes and are exquisitely sensitive to agents that increase or inhibit GABA-mediated inhibition. Enhanced CMR neuronal responsiveness leads to expression of emergent properties--nonlinear events--resulting from network self-organization. Determining brain disorder mechanisms requires animals that model behaviors and neuroanatomical substrates of human disorders identified by neuroimaging. However, not all sites activated during network operation are requisite for that operation. Other active sites are ancillary, because their blockade does not alter network function. Requisite network sites exhibit emergent properties that are critical targets for pharmacological and stimulation therapies. Improved treatment of brain disorders should involve combined pharmacological and stimulation therapies, guided by neuroimaging, to correct network malfunctions by targeting specific network neurons.

  4. Microbial Light-Activatable Proton Pumps as Neuronal Inhibitors to Functionally Dissect Neuronal Networks in C. elegans

    PubMed Central

    Husson, Steven J.; Liewald, Jana F.; Schultheis, Christian; Stirman, Jeffrey N.; Lu, Hang; Gottschalk, Alexander

    2012-01-01

    Essentially any behavior in simple and complex animals depends on neuronal network function. Currently, the best-defined system in which to study neuronal circuits is the nematode Caenorhabditis elegans, as the connectivity of its 302 neurons is exactly known. Individual neurons can be activated by photostimulation of Channelrhodopsin-2 (ChR2) using blue light, making it possible to directly probe the importance of a particular neuron for the respective behavioral output of the network under study. Analogously, other excitable cells can be inhibited by expressing Halorhodopsin from Natronomonas pharaonis (NpHR) and subsequent illumination with yellow light. However, inhibiting C. elegans neurons using NpHR is difficult. Recently, proton pumps from various sources were established as valuable alternative hyperpolarizers. Here we show that archaerhodopsin-3 (Arch) from Halorubrum sodomense and a proton pump from the fungus Leptosphaeria maculans (Mac) can be utilized to effectively inhibit excitable cells in C. elegans. Arch is the most powerful hyperpolarizer when illuminated with yellow or green light, while the action spectrum of Mac is more blue-shifted, as analyzed by light-evoked behaviors and electrophysiology. This allows these tools to be combined in various ways with ChR2 to analyze different subsets of neurons within a circuit. We exemplify this by means of the polymodal aversive sensory ASH neurons and the downstream command interneurons to which ASH neurons signal to trigger a reversal followed by a directional turn. Photostimulating ASH and subsequently inhibiting command interneurons using two-color illumination of different body segments allows investigating temporal aspects of signaling downstream of ASH. PMID:22815873

  5. On controllability of neuronal networks with constraints on the average of control gains.

    PubMed

    Tang, Yang; Wang, Zidong; Gao, Huijun; Qiao, Hong; Kurths, Jürgen

    2014-12-01

    Control gains play an important role in the control of a natural or a technical system since they reflect how much resource is required to optimize a certain control objective. This paper is concerned with the controllability of neuronal networks with constraints on the average value of the control gains injected in driver nodes, a constraint consistent with engineering and biological considerations. In order to deal with the constraints on control gains, the controllability problem is transformed into a constrained optimization problem (COP). The introduction of the constraints on the control gains unavoidably leads to substantial difficulty in finding feasible solutions as well as in refining them. As such, a modified dynamic hybrid framework (MDyHF) is developed to solve this COP, based on an adaptive differential evolution and the concept of Pareto dominance. By comparing with statistical methods and several recently reported constrained optimization evolutionary algorithms (COEAs), we show that our proposed MDyHF is competitive and promising in studying the controllability of neuronal networks. Based on the MDyHF, we proceed to show the controlling regions under different levels of constraints. It is revealed that we should allocate the control gains economically when strong constraints are considered. In addition, it is found that as the constraints become more restrictive, the driver nodes are more likely to be selected from the nodes with a large degree. The results and methods presented in this paper will provide useful insights into developing new techniques to control a realistic complex network efficiently.
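
    The MDyHF itself combines adaptive differential evolution with Pareto dominance; as a rough, generic illustration of the overall approach (not the authors' algorithm), the sketch below runs a plain differential evolution loop in which candidate gain vectors are compared by a feasibility-first rule against a cap on the average gain. The objective function, parameter values, and the budget are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(gains):
    # Hypothetical control-energy surrogate: smaller is better.
    return np.sum(gains ** 2) + np.sum(np.cos(3 * gains))

def constraint_violation(gains, budget=1.0):
    # Constraint on the *average* control gain, as in the paper's setting.
    return max(0.0, gains.mean() - budget)

def de_constrained(dim=10, pop_size=30, n_gen=200, F=0.6, CR=0.9, budget=1.0):
    pop = rng.uniform(0.0, 2.0, size=(pop_size, dim))
    fit = np.array([objective(p) for p in pop])
    vio = np.array([constraint_violation(p, budget) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            t_fit, t_vio = objective(trial), constraint_violation(trial, budget)
            # Feasibility-first rule: lower violation wins; ties broken by fitness.
            if (t_vio < vio[i]) or (t_vio == vio[i] and t_fit < fit[i]):
                pop[i], fit[i], vio[i] = trial, t_fit, t_vio
    best = np.argmin(np.where(vio > 0, np.inf, fit))
    return pop[best], fit[best]

if __name__ == "__main__":
    gains, value = de_constrained()
    print("mean gain:", gains.mean(), "objective:", value)
```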

  6. Altering neuronal excitability to preserve network connectivity in a computational model of Alzheimer's disease.

    PubMed

    de Haan, Willem; van Straaten, Elisabeth C W; Gouw, Alida A; Stam, Cornelis J

    2017-09-01

    Neuronal hyperactivity and hyperexcitability of the cerebral cortex and hippocampal region is an increasingly observed phenomenon in preclinical Alzheimer's disease (AD). In later stages, oscillatory slowing and loss of functional connectivity are ubiquitous. Recent evidence suggests that neuronal dynamics have a prominent role in AD pathophysiology, making it a potentially interesting therapeutic target. However, although neuronal activity can be manipulated by various (non-)pharmacological means, intervening in a highly integrated system that depends on complex dynamics can produce counterintuitive and adverse effects. Computational dynamic network modeling may serve as a virtual test ground for developing effective interventions. To explore this approach, a previously introduced large-scale neural mass network with human brain topology was used to simulate the temporal evolution of AD-like, activity-dependent network degeneration. In addition, six defense strategies that either enhanced or diminished neuronal excitability were tested against the degeneration process, targeting excitatory and inhibitory neurons combined or separately. Outcome measures described oscillatory, connectivity and topological features of the damaged networks. Over time, the various interventions produced diverse large-scale network effects. Contrary to our hypothesis, the most successful strategy was a selective stimulation of all excitatory neurons in the network; it substantially prolonged the preservation of network integrity. The results of this study imply that functional network damage due to pathological neuronal activity can be opposed by targeted adjustment of neuronal excitability levels. The present approach may help to explore therapeutic effects aimed at preserving or restoring neuronal network integrity and contribute to better-informed intervention choices in future clinical trials in AD.

  7. When Networks Disagree: Ensemble Methods for Hybrid Neural Networks

    DTIC Science & Technology

    1992-10-27

    takes the form of repeated on-line stochastic gradient descent of randomly initialized nets. However, unlike the combination process in parametric ... estimation which usually takes the form of a simple average in parameter space, the parameters in a neural network take the form of neuronal weights which
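
    The snippet above describes combining several networks that were each trained by repeated on-line stochastic gradient descent from random initializations. As an illustrative aside (not the report's method), the sketch below trains a few tiny one-hidden-layer networks independently and averages their predictions, since, as the snippet notes, averaging neural networks in parameter (weight) space is generally not meaningful. The toy regression task, parameters, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression target (hypothetical).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.1 * rng.normal(size=X.shape)

def train_net(hidden=16, lr=0.05, epochs=200, seed=0):
    """Train one small net by on-line SGD from its own random initialization."""
    r = np.random.default_rng(seed)
    W1 = r.normal(size=(1, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        for i in r.permutation(len(X)):          # one sample at a time
            x, t = X[i:i + 1], y[i:i + 1]
            h = np.tanh(x @ W1 + b1)
            err = (h @ W2 + b2) - t              # squared-error gradient
            gW2 = h.T @ err; gb2 = err.ravel()
            gh = err @ W2.T * (1 - h ** 2)
            gW1 = x.T @ gh; gb1 = gh.ravel()
            W2 -= lr * gW2; b2 -= lr * gb2
            W1 -= lr * gW1; b1 -= lr * gb1
    return lambda q: np.tanh(q @ W1 + b1) @ W2 + b2

# Ensemble: average the predictions, not the weights.
nets = [train_net(seed=s) for s in range(5)]
x_test = np.linspace(-1, 1, 50).reshape(-1, 1)
ensemble_pred = np.mean([net(x_test) for net in nets], axis=0)
print(ensemble_pred.shape)   # (50, 1)
```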

  8. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network

    PubMed Central

    Malvaut, Sarah; Saghatelyan, Armen

    2016-01-01

    The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB) where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour. PMID:26839709

  9. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network.

    PubMed

    Malvaut, Sarah; Saghatelyan, Armen

    2016-01-01

    The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB) where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour.

  10. Transition from double coherence resonances to single coherence resonance in a neuronal network with phase noise.

    PubMed

    Jia, Yanbing; Gu, Huaguang

    2015-12-01

    The effect of phase noise on the coherence dynamics of a neuronal network composed of FitzHugh-Nagumo (FHN) neurons is investigated. Phase noise can induce dissimilar coherence resonance (CR) effects for different coupling strength regimes. When the coupling strength is small, phase noise can induce double CRs. One corresponds to the average frequency of phase noise, and the other corresponds to the intrinsic firing frequency of the FHN neuron. When the coupling strength is large enough, phase noise can only induce single CR, and the CR corresponds to the intrinsic firing frequency of the FHN neuron. The results show a transition from double CRs to single CR with the increase in the coupling strength. The transition can be well interpreted based on the dynamics of a single neuron stimulated by both phase noise and the coupling current. When the coupling strength is small, the coupling current is weak, and phase noise mainly determines the dynamics of the neuron. Moreover, the phase-noise-induced double CRs in the neuronal network are similar to the phase-noise-induced double CRs in an isolated FHN neuron. When the coupling strength is large enough, the coupling current is strong and plays a key role in the occurrence of the single CR in the network. The results reveal a novel phenomenon and may have important implications for understanding the dynamics of neuronal networks.
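
    As an illustrative aside, the sketch below integrates a single FitzHugh-Nagumo neuron driven by a periodic input whose phase undergoes Brownian diffusion, which is one common way to model "phase noise". It is not the paper's network model, and all parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama simulation of one FitzHugh-Nagumo neuron driven by a
# periodic signal whose phase diffuses (a common form of "phase noise").
# Parameter values are illustrative, not taken from the paper.
def simulate_fhn(T=2000.0, dt=0.01, eps=0.08, a=0.7, b=0.8,
                 amp=0.3, omega=0.5, D_phase=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v, w, phi = -1.0, -0.5, 0.0
    vs = np.empty(n)
    for k in range(n):
        I = amp * np.sin(omega * k * dt + phi)           # noisy-phase drive
        dv = v - v ** 3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        phi += np.sqrt(2 * D_phase * dt) * rng.normal()  # Wiener phase diffusion
        vs[k] = v
    return vs

trace = simulate_fhn()
print("upward threshold crossings:",
      int(np.sum((trace[1:] > 1.0) & (trace[:-1] <= 1.0))))
```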

  11. Intrinsic neuronal properties switch the mode of information transmission in networks.

    PubMed

    Gjorgjieva, Julijana; Mease, Rebecca A; Moody, William J; Fairhall, Adrienne L

    2014-12-01

    Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity. This work underscores the significance of simple changes in conductance parameters in governing how neurons

  12. Spatially resolved non-invasive chemical stimulation for modulation of signalling in reconstructed neuronal networks.

    PubMed

    Mourzina, Yulia; Steffen, Alfred; Kaliaguine, Dmitri; Wolfrum, Bernhard; Schulte, Petra; Böcker-Meffert, Simone; Offenhäusser, Andreas

    2006-04-22

    Functional coupling of reconstructed neuronal networks with microelectronic circuits has potential for the development of bioelectronic devices, pharmacological assays and medical engineering. Modulation of the signal processing properties of on-chip reconstructed neuronal networks is an important aspect in such applications. It may be achieved by controlling the biochemical environment, preferably with cellular resolution. In this work, we attempt to design cell-cell and cell-medium interactions in confined geometries with the aim of non-invasively manipulating the activity pattern of individual neurons in neuronal networks for long-term modulation. To this end, we have developed a biohybrid system in which neuronal networks are reconstructed on microstructured silicon chips and interfaced to a microfluidic system. A high degree of geometrical control over the network architecture and alignment of the network with the substrate features has been achieved by means of aligned microcontact printing. Localized non-invasive on-chip chemical stimulation of micropatterned rat cortical neurons within a network has been demonstrated with the excitatory neurotransmitter glutamate. Our system will be useful for the investigation of the influence of localized chemical gradients on network formation and long-term modulation.

  13. Spontaneous Neuronal Activity in Developing Neocortical Networks: From Single Cells to Large-Scale Interactions

    PubMed Central

    Luhmann, Heiko J.; Sinning, Anne; Yang, Jenq-Wei; Reyes-Puerta, Vicente; Stüttgen, Maik C.; Kirischuk, Sergei; Kilb, Werner

    2016-01-01

    Neuronal activity has been shown to be essential for the proper formation of neuronal circuits, affecting developmental processes like neurogenesis, migration, programmed cell death, cellular differentiation, formation of local and long-range axonal connections, synaptic plasticity or myelination. Accordingly, neocortical areas reveal distinct spontaneous and sensory-driven neuronal activity patterns already at early phases of development. At embryonic stages, when immature neurons start to develop voltage-dependent channels, spontaneous activity is highly synchronized within small neuronal networks and governed by electrical synaptic transmission. Subsequently, spontaneous activity patterns become more complex, involve larger networks and propagate over several neocortical areas. The developmental shift from local to large-scale network activity is accompanied by a gradual shift from electrical to chemical synaptic transmission with an initial excitatory action of chloride-gated channels activated by GABA, glycine and taurine. Transient neuronal populations in the subplate (SP) support temporary circuits that play an important role in tuning early neocortical activity and the formation of mature neuronal networks. Thus, early spontaneous activity patterns control the formation of developing networks in sensory cortices, and disturbances of these activity patterns may lead to long-lasting neuronal deficits. PMID:27252626

  14. A distance constrained synaptic plasticity model of C. elegans neuronal network

    NASA Astrophysics Data System (ADS)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by the enquiry into principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn basic governing principles that drive structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small world nature. The simple 1D ring model is critically poised for the number of feed forward motifs, neuronal clustering and characteristic path-length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains the small world nature, the saturation of feed forward motifs, and the observed number of driver neurons. The distance constrained model suggests optimum long distance synaptic connections as a key feature specifying control of the network.
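
    As an illustrative aside (not the authors' model), the sketch below builds a 1D ring lattice and rewires each edge with a fixed probability, but only to targets within a maximum ring distance, which is the kind of distance constraint the abstract refers to. All parameter values are hypothetical, and networkx is used only for the lattice and for standard small-world metrics.

```python
import random
import networkx as nx

def distance_constrained_ring(n=100, k=4, p=0.1, max_dist=20, seed=0):
    """Ring lattice in which each edge is rewired with probability p, but only
    to targets within max_dist positions on the ring (the distance constraint)."""
    rng = random.Random(seed)
    G = nx.watts_strogatz_graph(n, k, 0, seed=seed)   # p=0 gives a pure ring lattice
    for u, v in list(G.edges()):
        if rng.random() < p:
            candidates = [(u + d) % n for d in range(-max_dist, max_dist + 1)
                          if d != 0 and not G.has_edge(u, (u + d) % n)]
            if candidates:
                G.remove_edge(u, v)
                G.add_edge(u, rng.choice(candidates))
    return G

G = distance_constrained_ring()
print("clustering:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```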

  15. Human Neuron Cultures: Micropatterning Facilitates the Long-Term Growth and Analysis of iPSC-Derived Individual Human Neurons and Neuronal Networks (Adv. Healthcare Mater. 15/2016).

    PubMed

    Burbulla, Lena F; Beaumont, Kristin G; Mrksich, Milan; Krainc, Dimitri

    2016-08-01

    Dimitri Krainc, Milan Mrksich, and co-workers demonstrate the utility of microcontact printing technology for culturing of human neurons in defined patterns over extended periods of time on page 1894. This approach facilitates studies of neuronal development, cellular trafficking, and related mechanisms that require assessment of individual neurons and neuronal networks.

  16. Hybrid Fuzzy Wavelet Neural Networks Architecture Based on Polynomial Neural Networks and Fuzzy Set/Relation Inference-Based Wavelet Neurons.

    PubMed

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2017-08-11

    This paper presents a hybrid fuzzy wavelet neural network (HFWNN) realized with the aid of polynomial neural networks (PNNs) and fuzzy inference-based wavelet neurons (FIWNs). Two types of FIWNs, including fuzzy set inference-based wavelet neurons (FSIWNs) and fuzzy relation inference-based wavelet neurons (FRIWNs), are proposed. In particular, an FIWN without any fuzzy set component (viz., the premise part of a fuzzy rule) becomes a wavelet neuron (WN). To alleviate the limitations of conventional wavelet neural networks or fuzzy wavelet neural networks, whose parameters are determined on a purely random basis, the parameters of the wavelet functions used in FIWNs or WNs are initialized using the C-Means clustering method. The overall architecture of the HFWNN is similar to that of typical PNNs. The main strategies in the design of the HFWNN are developed as follows. First, the first layer of the network consists of FIWNs (e.g., FSIWN or FRIWN) that are used to reflect the uncertainty of data, while the second and higher layers consist of WNs, which exhibit a high level of flexibility and realize a linear combination of wavelet functions. Second, the parameters used in the design of the HFWNN are adjusted through genetic optimization. To evaluate the performance of the proposed HFWNN, several publicly available datasets are considered. Furthermore, a thorough comparative analysis is provided.

  17. Effects of Aβ exposure on long-term associative memory and its neuronal mechanisms in a defined neuronal network.

    PubMed

    Ford, Lenzie; Crossley, Michael; Williams, Thomas; Thorpe, Julian R; Serpell, Louise C; Kemenes, György

    2015-05-29

    Amyloid beta (Aβ) induced neuronal death has been linked to memory loss, perhaps the most devastating symptom of Alzheimer's disease (AD). Although Aβ-induced impairment of synaptic or intrinsic plasticity is known to occur before any cell death, the links between these neurophysiological changes and the loss of specific types of behavioral memory are not fully understood. Here we used a behaviorally and physiologically tractable animal model to investigate Aβ-induced memory loss and electrophysiological changes in the absence of neuronal death in a defined network underlying associative memory. We found similar behavioral but different neurophysiological effects for Aβ 25-35 and Aβ 1-42 in the feeding circuitry of the snail Lymnaea stagnalis. Importantly, we also established that both the behavioral and neuronal effects were dependent upon the animals having been classically conditioned prior to treatment, since Aβ application before training caused neither memory impairment nor underlying neuronal changes over a comparable period of time following treatment.

  18. Detection of neuron membranes in electron microscopy images using a serial neural network architecture.

    PubMed

    Jurrus, Elizabeth; Paiva, Antonio R C; Watanabe, Shigeki; Anderson, James R; Jones, Bryan W; Whitaker, Ross T; Jorgensen, Erik M; Marc, Robert E; Tasdizen, Tolga

    2010-12-01

    Study of nervous systems via the connectome, the map of connectivities of all neurons in that system, is a challenging problem in neuroscience. Towards this goal, neurobiologists are acquiring large electron microscopy datasets. However, the sheer volume of these datasets renders manual analysis infeasible. Hence, automated image analysis methods are required for reconstructing the connectome from these very large image collections. Segmentation of neurons in these images, an essential step of the reconstruction pipeline, is challenging because of noise, anisotropic shapes and brightness, and the presence of confounding structures. The method described in this paper uses a series of artificial neural networks (ANNs) in a framework combined with a feature vector that is composed of image intensities sampled over a stencil neighborhood. Several ANNs are applied in series allowing each ANN to use the classification context provided by the previous network to improve detection accuracy. We develop the method of serial ANNs and show that the learned context does improve detection over traditional ANNs. We also demonstrate advantages over previous membrane detection methods. The results are a significant step towards an automated system for the reconstruction of the connectome.
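
    As an illustrative aside, the snippet below shows one simple way to build the kind of per-pixel feature vector the abstract describes: image intensities sampled at a fixed set of offsets (a stencil neighborhood) around each pixel, which could then be fed to a small classifier. The offsets and names are illustrative, not those of the paper.

```python
import numpy as np

def stencil_features(image, offsets):
    """Return, for every pixel, the intensities sampled at the given
    (row, col) offsets -- a simple stencil-neighborhood feature vector."""
    h, w = image.shape
    pad = max(max(abs(r), abs(c)) for r, c in offsets)
    padded = np.pad(image, pad, mode="reflect")
    feats = np.empty((h, w, len(offsets)), dtype=image.dtype)
    for k, (dr, dc) in enumerate(offsets):
        feats[..., k] = padded[pad + dr: pad + dr + h, pad + dc: pad + dc + w]
    return feats.reshape(h * w, len(offsets))   # one row of features per pixel

# Illustrative cross-and-diagonal stencil around the centre pixel.
offsets = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
           (-1, -1), (-1, 1), (1, -1), (1, 1)]
img = np.random.rand(64, 64).astype(np.float32)
X = stencil_features(img, offsets)
print(X.shape)   # (4096, 9)
```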

  19. Low-Density Neuronal Networks Cultured using Patterned Poly-L-Lysine on Microelectrode Arrays

    PubMed Central

    Jun, Sang Beom; Hynd, Matthew R.; Dowell-Mesfin, Natalie; Smith, Karen L.; Turner, James N.; Shain, William; Kim, Sung June

    2009-01-01

    Synaptic activity recorded from low-density networks of cultured rat hippocampal neurons was monitored using microelectrode arrays (MEAs). Neuronal networks were patterned with poly-L-lysine (PLL) using microcontact printing (µCP). Polydimethysiloxane (PDMS) stamps were fabricated with relief structures resulting in patterns of 2 µm-wide lines for directing process growth and 20 µm-diameter circles for cell soma attachment. These circles were aligned to electrode sites. Different densities of neurons were plated in order to assess the minimal neuron density required for development of an active network. Spontaneous activity was observed at 10–14 days in networks using neuron densities as low as 200 cells/mm2. Immunocytochemistry demonstrated the distribution of dendrites along the lines and the location of foci of the presynaptic protein, synaptophysin, on neuron somas and dendrites. Scanning electron microscopy demonstrated that single fluorescent tracks contained multiple processes. Evoked responses of selected portions of the networks were produced by stimulation of specific electrode sites. In addition, the neuronal excitability of the network was increased by the bath application of high K+ (10–12 mM). Application of DNQX, an AMPA antagonist, blocked all spontaneous activity, suggesting that the activity is excitatory and mediated through glutamate receptors. PMID:17049614

  20. Effects of distance-dependent delay on small-world neuronal networks.

    PubMed

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2016-04-01

    We study firing behaviors and the transitions among them in small-world noisy neuronal networks with electrical synapses and information transmission delay. Each neuron is modeled by a two-dimensional Rulkov map neuron. The distance between neurons, which is a main source of the time delay, is taken into consideration. Through spatiotemporal patterns and interspike intervals as well as the interburst intervals, the collective behaviors are revealed. It is found that the networks switch from resting state into intermittent firing state under Gaussian noise excitation. Initially, noise-induced firing behaviors are disturbed by small time delays. Periodic firing behaviors with irregular zigzag patterns emerge with an increase of the delay and become progressively regular after a critical value is exceeded. More interestingly, in accordance with regular patterns, the spiking frequency doubles compared with the former stage for the spiking neuronal network. A growth of frequency persists for a larger delay and a transition to antiphase synchronization is observed. Furthermore, it is proved that these transitions are generic also for the bursting neuronal network and the FitzHugh-Nagumo neuronal network. We show these transitions due to the increase of time delay are robust to the noise strength, coupling strength, network size, and rewiring probability.
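
    As an illustrative aside (not the authors' network), the sketch below iterates two Rulkov map neurons in the standard two-variable form of this model, coupled electrically with a fixed delay measured in map steps. Parameter values, the coupling scheme, and the delay are illustrative only.

```python
import numpy as np

# Two coupled Rulkov map neurons with a delayed electrical coupling term.
# Standard form: x_{n+1} = alpha/(1 + x_n^2) + y_n,
#                y_{n+1} = y_n - mu*(x_n + 1) + mu*sigma.
alpha, mu, sigma = 4.1, 0.001, 0.3   # illustrative values
g, delay = 0.05, 10                  # coupling strength and delay (map steps)
steps = 5000

x = np.full((2, steps), -1.0)        # fast (membrane-like) variable
y = np.full(2, -2.9)                 # slow variable

for n in range(1, steps):
    for i in range(2):
        j = 1 - i
        lag = n - 1 - delay
        # delayed gap-junction-like coupling from the other neuron
        coupling = g * (x[j, lag] - x[i, n - 1]) if lag >= 0 else 0.0
        x[i, n] = alpha / (1.0 + x[i, n - 1] ** 2) + y[i] + coupling
        y[i] += -mu * (x[i, n - 1] + 1.0) + mu * sigma

spikes = np.sum((x[:, 1:] > 0) & (x[:, :-1] <= 0), axis=1)
print("upward threshold crossings per neuron:", spikes)
```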

  1. Synaptic depression mediates bistability in neuronal networks with recurrent inhibitory connectivity.

    PubMed

    Manor, Y; Nadim, F

    2001-12-01

    When depressing synapses are embedded in a circuit composed of a pacemaker neuron and a neuron with no autorhythmic properties, the network can show two modes of oscillation. In one mode the synapses are mostly depressed, and the oscillations are dominated by the properties of the oscillating neuron. In the other mode, the synapses recover from depression, and the oscillations are primarily controlled by the synapses. We demonstrate the two modes of oscillation in a hybrid circuit consisting of a biological pacemaker and a model neuron, reciprocally coupled via model depressing synapses. We show that across a wide range of parameter values this network shows robust bistability of the oscillation mode and that it is possible to switch the network from one mode to the other by injection of a brief current pulse in either neuron. The underlying mechanism for bistability may be present in many types of circuits with reciprocal connections and synaptic depression.
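
    For readers unfamiliar with depressing synapses, the sketch below implements a standard Tsodyks-Markram-style short-term depression model (a common textbook formulation, not necessarily the synapse model used in this paper): a finite synaptic resource is partially consumed by each presynaptic spike and recovers between spikes. Parameter values are illustrative.

```python
import numpy as np

def depressing_synapse(spike_times, tau_rec=0.8, U=0.5, T=5.0, dt=0.001):
    """Short-term depression: the available resource x recovers toward 1 with
    time constant tau_rec, and a fraction U of it is released at each spike.
    Returns the amount released per presynaptic spike."""
    n = int(T / dt)
    x = 1.0                                   # available synaptic resource
    release = []
    spikes = set(int(round(t / dt)) for t in spike_times)
    for k in range(n):
        x += dt * (1.0 - x) / tau_rec         # recovery toward full resource
        if k in spikes:
            r = U * x                         # resource released by this spike
            x -= r
            release.append(r)
    return np.array(release)

# High-frequency drive depresses the synapse; low-frequency drive lets it recover.
fast = depressing_synapse(np.arange(0.1, 5.0, 0.05))   # 20 Hz drive
slow = depressing_synapse(np.arange(0.1, 5.0, 1.0))    # 1 Hz drive
print("steady release at 20 Hz:", fast[-1], "vs 1 Hz:", slow[-1])
```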

  2. Investigating local and long-range neuronal network dynamics by simultaneous optogenetics, reverse microdialysis and silicon probe recordings in vivo

    PubMed Central

    Taylor, Hannah; Schmiedt, Joscha T.; Çarçak, Nihan; Onat, Filiz; Di Giovanni, Giuseppe; Lambert, Régis; Leresche, Nathalie; Crunelli, Vincenzo; David, Francois

    2014-01-01

    Background: The advent of optogenetics has given neuroscientists the opportunity to excite or inhibit neuronal population activity with high temporal resolution and cellular selectivity. Thus, when combined with recordings of neuronal ensemble activity in freely moving animals, optogenetics can provide an unprecedented snapshot of the contribution of neuronal assemblies to (patho)physiological conditions in vivo. Still, the combination of optogenetic and silicon probe (or tetrode) recordings does not allow investigation of the role played by voltage- and transmitter-gated channels of the opsin-transfected neurons and/or other adjacent neurons in controlling neuronal activity. New method and results: We demonstrate that optogenetics and silicon probe recordings can be combined with intracerebral reverse microdialysis for the long-term delivery of neuroactive drugs around the optic fiber and silicon probe. In particular, we show the effect of antagonists of T-type Ca2+ channels, hyperpolarization-activated cyclic nucleotide-gated channels and metabotropic glutamate receptors on silicon probe-recorded activity of the local opsin-transfected neurons in the ventrobasal thalamus, and demonstrate the changes that the block of these thalamic channels/receptors brings about in the network dynamics of distant somatotopic cortical neuronal ensembles. Comparison with existing methods: This is the first demonstration of successfully combining optogenetics and neuronal ensemble recordings with reverse microdialysis. This combination of techniques overcomes some of the disadvantages that are associated with the use of intracerebral injection of a drug-containing solution at the site of laser activation. Conclusions: The combination of reverse microdialysis, silicon probe recordings and optogenetics can unravel the short and long-term effects of specific transmitter- and voltage-gated channels on laser-modulated firing at the site of optogenetic stimulation and the actions that

  3. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    PubMed

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.
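
    As an illustrative aside, the sketch below simulates the single-unit building block referred to in the abstract: a leaky integrate-and-fire neuron driven by Gaussian white noise, integrated with the Euler-Maruyama scheme. Units and parameter values are dimensionless and illustrative, not those of the study.

```python
import numpy as np

def lif_with_noise(T=10.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0,
                   mu=0.8, sigma=0.6, seed=0):
    """Leaky integrate-and-fire neuron driven by Gaussian white noise:
    tau dv/dt = -v + mu + sigma*sqrt(tau)*xi(t), with threshold and reset."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = 0.0
    spikes = []
    for k in range(n):
        v += (-v + mu) / tau * dt + sigma * np.sqrt(dt / tau) * rng.normal()
        if v >= v_th:                 # spike and reset
            spikes.append(k * dt)
            v = v_reset
    return np.array(spikes)

spikes = lif_with_noise()
print("mean firing rate (spikes per time unit):", len(spikes) / 10.0)
```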

  4. Size-dependent regulation of synchronized activity in living neuronal networks

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hideaki; Kubota, Shigeru; Chida, Yudai; Morita, Mayu; Moriya, Satoshi; Akima, Hisanao; Sato, Shigeo; Hirano-Iwata, Ayumi; Tanii, Takashi; Niwano, Michio

    2016-07-01

    We study the effect of network size on synchronized activity in living neuronal networks. Dissociated cortical neurons form synaptic connections in culture and generate synchronized spontaneous activity within 10 days in vitro. Using micropatterned surfaces to extrinsically control the size of neuronal networks, we show that synchronized activity can emerge in a network as small as 12 cells. Furthermore, a detailed comparison of small (∼20 cells), medium (∼100 cells), and large (∼400 cells) networks reveals that synchronized activity becomes destabilized in the small networks. Computational modeling of neural activity is then employed to explore the underlying mechanism responsible for the size effect. We find that the generation and maintenance of the synchronized activity can be minimally described by: (1) the stochastic firing of each neuron in the network, (2) enhancement in the network activity in a positive feedback loop of excitatory synapses, and (3) Ca-dependent suppression of bursting activity. The model further shows that the decrease in total synaptic input to a neuron that drives the positive feedback amplification of correlated activity is a key factor underlying the destabilization of synchrony in smaller networks. Spontaneous neural activity plays a critical role in cortical information processing, and our work constructively clarifies an aspect of the structural basis behind this.

  5. Hopf bifurcation of an (n + 1) -neuron bidirectional associative memory neural network model with delays.

    PubMed

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of the delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if results from simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.
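
    For orientation, one common way to write an (n + 1)-neuron bidirectional associative memory network with delays is given below: a single neuron x on one layer is reciprocally coupled, with transmission delays, to n neurons y_1, ..., y_n on the other layer. This is a generic form for context only; the paper's exact model, notation, and delay structure may differ.

```latex
% Generic (n+1)-neuron delayed BAM network (for context; details may differ
% from the model analyzed in the paper).
\begin{aligned}
\dot{x}(t)   &= -\mu_0\, x(t) + \sum_{j=1}^{n} c_{j}\, f_j\!\bigl(y_j(t-\tau_j)\bigr),\\
\dot{y}_i(t) &= -\mu_i\, y_i(t) + d_{i}\, g_i\!\bigl(x(t-\sigma_i)\bigr), \qquad i = 1,\dots,n.
\end{aligned}
```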

  6. Size-dependent regulation of synchronized activity in living neuronal networks.

    PubMed

    Yamamoto, Hideaki; Kubota, Shigeru; Chida, Yudai; Morita, Mayu; Moriya, Satoshi; Akima, Hisanao; Sato, Shigeo; Hirano-Iwata, Ayumi; Tanii, Takashi; Niwano, Michio

    2016-07-01

    We study the effect of network size on synchronized activity in living neuronal networks. Dissociated cortical neurons form synaptic connections in culture and generate synchronized spontaneous activity within 10 days in vitro. Using micropatterned surfaces to extrinsically control the size of neuronal networks, we show that synchronized activity can emerge in a network as small as 12 cells. Furthermore, a detailed comparison of small (∼20 cells), medium (∼100 cells), and large (∼400 cells) networks reveals that synchronized activity becomes destabilized in the small networks. Computational modeling of neural activity is then employed to explore the underlying mechanism responsible for the size effect. We find that the generation and maintenance of the synchronized activity can be minimally described by: (1) the stochastic firing of each neuron in the network, (2) enhancement in the network activity in a positive feedback loop of excitatory synapses, and (3) Ca-dependent suppression of bursting activity. The model further shows that the decrease in total synaptic input to a neuron that drives the positive feedback amplification of correlated activity is a key factor underlying the destabilization of synchrony in smaller networks. Spontaneous neural activity plays a critical role in cortical information processing, and our work constructively clarifies an aspect of the structural basis behind this.

  7. Use of Human Neurons Derived via Cellular Reprogramming Methods to Study Host-Parasite Interactions of Toxoplasma gondii in Neurons.

    PubMed

    Halonen, Sandra K

    2017-09-23

    Toxoplasma gondii is an intracellular protozoan parasite, with approximately one-third of the world's population chronically infected. In chronically infected individuals, the parasite resides in tissue cysts in neurons in the brain. The chronic infection in immunocompetent individuals has traditionally been considered to be asymptomatic, but increasing evidence indicates that chronic infection is associated with diverse neurological disorders such as schizophrenia, cryptogenic epilepsy, and Parkinson's Disease. The mechanisms by which the parasite exerts effects on behavior and other neuronal functions are not understood. Human neurons derived from cellular reprogramming methods offer the opportunity to develop better human neuronal models to study T. gondii in neurons. Results from two studies using human neurons derived via cellular reprogramming methods indicate these human neuronal models provide better in vitro models to study the effects of T. gondii on neurons and neurological functions. In this review, an overview of the current neural reprogramming methods will be given, followed by a summary of the studies using human induced pluripotent stem cell (hiPSC)-derived neurons and induced neurons (iNs) to study T. gondii in neurons. The potential of these neural reprogramming methods for further study of the host-parasite interactions of T. gondii in neurons will be discussed.

  8. In Vitro Reconstruction of Neuronal Networks Derived from Human iPS Cells Using Microfabricated Devices.

    PubMed

    Takayama, Yuzo; Kida, Yasuyuki S

    2016-01-01

    Morphology and function of the nervous system is maintained via well-coordinated processes both in central and peripheral nervous tissues, which govern the homeostasis of organs/tissues. Impairments of the nervous system induce neuronal disorders such as peripheral neuropathy or cardiac arrhythmia. Although further investigation is warranted to reveal the molecular mechanisms of progression in such diseases, appropriate model systems mimicking the patient-specific communication between neurons and organs are not established yet. In this study, we reconstructed the neuronal network in vitro either between neurons of the human induced pluripotent stem (iPS) cell derived peripheral nervous system (PNS) and central nervous system (CNS), or between PNS neurons and cardiac cells in a morphologically and functionally compartmentalized manner. Networks were constructed in photolithographically microfabricated devices with two culture compartments connected by 20 microtunnels. We confirmed that PNS and CNS neurons connected via synapses and formed a network. Additionally, calcium-imaging experiments showed that the bundles originating from the PNS neurons were functionally active and responded reproducibly to external stimuli. Next, we confirmed that CNS neurons showed an increase in calcium activity during electrical stimulation of networked bundles from PNS neurons in order to demonstrate the formation of functional cell-cell interactions. We also confirmed the formation of synapses between PNS neurons and mature cardiac cells. These results indicate that compartmentalized culture devices are promising tools for reconstructing network-wide connections between PNS neurons and various organs, and might help to understand patient-specific molecular and functional mechanisms under normal and pathological conditions.

  9. Model of intersegmental coordination in the leech heartbeat neuronal network.

    PubMed

    Hill, Andrew A V; Masino, Mark A; Calabrese, Ronald L

    2002-03-01

    We have created a computational model of the timing network that paces the heartbeat of the medicinal leech, Hirudo medicinalis. The rhythmic activity of this network originates from two segmental oscillators located in the third and fourth midbody ganglia. In the intact nerve cord, these segmental oscillators are mutually entrained to the same cycle period. Although experiments have shown that the segmental oscillators are coupled by inhibitory coordinating interneurons, the underlying mechanisms of intersegmental coordination have not yet been elucidated. To help understand this coordination, we have created a simple computational model with two variants: symmetric and asymmetric. In the symmetric model, neurons within each segmental oscillator, called oscillator interneurons, inhibit the coordinating interneurons. In contrast, in the asymmetric model only the oscillator interneurons of one segmental oscillator inhibit the coordinating interneurons. In the symmetric model, when two segmental oscillators with different inherent periods are coupled, the faster one leads in phase, and the period of the coupled system is equal to the period of the faster oscillator. This behavior arises because, during each oscillation cycle, the oscillator interneurons of the faster segmental oscillator begin to burst before those of the slower oscillator, thereby terminating spike activity in the coordinating interneurons. Thus there is a brief period of time in each cycle when the oscillator interneurons of the slower segmental oscillator are relieved of inhibition from the coordinating interneurons. This "removal of synaptic inhibition" allows, within certain limits, the slower segmental oscillator to be sped up to the period of the faster one. Thus the symmetric model demonstrates a plausible biophysical mechanism by which one segmental oscillator can entrain the other. In general the asymmetric model, in which only one segmental oscillator has the ability to inhibit the

  10. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
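
    For reference, the benchmark neuron in the abstract, a quadratic integrate-and-fire neuron with adaptation, can be written in a simple fixed-time-step (Euler) form as in the sketch below. Voltage stepping, by contrast, advances the state between discrete voltage levels rather than time steps; the sketch does not reproduce that scheme, and all parameter values are illustrative.

```python
def aqif_euler(T=20.0, dt=1e-3, a=0.02, b=0.2, I=5.0,
               v_peak=10.0, v_reset=-1.0, w_jump=0.5):
    """Quadratic integrate-and-fire neuron with an adaptation current w,
    integrated with plain Euler time stepping for reference:
        dv/dt = v**2 + I - w,   dw/dt = a*(b*v - w),
    with v reset and w incremented at each spike."""
    n = int(T / dt)
    v, w = v_reset, 0.0
    spikes = []
    for k in range(n):
        dv = (v * v + I - w) * dt
        dw = a * (b * v - w) * dt
        v += dv
        w += dw
        if v >= v_peak:              # spike: reset voltage, bump adaptation
            spikes.append(k * dt)
            v = v_reset
            w += w_jump
    return spikes

spikes = aqif_euler()
print("number of spikes:", len(spikes), "first spike times:", spikes[:3])
```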

  11. Noise-induced spatiotemporal patterns in Hodgkin-Huxley neuronal network.

    PubMed

    Wu, Ying; Li, Jiajia; Liu, Shaobao; Pang, Jiazhi; Du, Mengmeng; Lin, Pan

    2013-10-01

    The effect of noise on pattern selection in a regular network of Hodgkin-Huxley neurons is investigated, and the transition of patterns in the network is characterized as the medium changes from subexcitable to excitable. Extensive numerical results confirm that several kinds of travelling waves, such as spiral waves, circle waves and target waves, can develop and be sustained in the subexcitable network due to the noise. In the case of excitable media under noise, the developed spiral waves and target waves can coexist, and new target-like waves are induced near the border of the medium. The membrane potential averaged over all neurons in the network is calculated to detect the periodicity of the time series and of the generated travelling waves. Furthermore, the firing probabilities of neurons are also calculated to analyze the collective behavior of the networks.
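
    For context, the current-balance equation of a Hodgkin-Huxley neuron in a diffusively coupled, noise-driven network can be written in the generic form below; the paper's exact coupling scheme and noise model may differ.

```latex
% Generic Hodgkin-Huxley current-balance equation for neuron i in a
% diffusively (electrically) coupled network with an additive noise term;
% m_i, h_i, n_i follow the usual HH gating kinetics.
C_m \frac{dV_i}{dt} =
  - g_{\mathrm{Na}}\, m_i^3 h_i \,(V_i - E_{\mathrm{Na}})
  - g_{\mathrm{K}}\,  n_i^4 \,(V_i - E_{\mathrm{K}})
  - g_{\mathrm{L}} \,(V_i - E_{\mathrm{L}})
  + \sum_{j \in \mathcal{N}(i)} g_c \,(V_j - V_i)
  + I_{\mathrm{ext}} + \xi_i(t)
```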

  12. Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks

    PubMed Central

    Vértes, Petra E.; Alexander-Bloch, Aaron; Bullmore, Edward T.

    2014-01-01

    Rich clubs arise when nodes that are ‘rich’ in connections also form an elite, densely connected ‘club’. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour. PMID:25180309

  13. Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks.

    PubMed

    Vértes, Petra E; Alexander-Bloch, Aaron; Bullmore, Edward T

    2014-10-05

    Rich clubs arise when nodes that are 'rich' in connections also form an elite, densely connected 'club'. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour.
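
    As an illustrative aside, the sketch below shows two generic ingredients mentioned in the abstract: a Hebbian weight update, in which connection strengths grow with the product of pre- and postsynaptic activity, and the (unnormalized) rich-club coefficient of a graph. Neither is the paper's exact model; networkx also provides a built-in rich_club_coefficient with an optional degree-preserving normalization.

```python
import numpy as np
import networkx as nx

def hebbian_step(W, rates, eta=0.01):
    """One Hebbian update: weights grow with the product of pre- and
    post-synaptic rates (a generic rule, not the paper's exact model)."""
    return W + eta * np.outer(rates, rates)

def rich_club(G, k):
    """Fraction of possible edges realized among nodes with degree > k."""
    rich = [n for n, d in G.degree() if d > k]
    if len(rich) < 2:
        return float("nan")
    sub = G.subgraph(rich)
    possible = len(rich) * (len(rich) - 1) / 2
    return sub.number_of_edges() / possible

rates = np.random.rand(5)
W = hebbian_step(np.zeros((5, 5)), rates)
print("updated weight matrix shape:", W.shape)

G = nx.erdos_renyi_graph(200, 0.05, seed=1)
print("phi(15):", rich_club(G, 15))
print("networkx built-in:", nx.rich_club_coefficient(G, normalized=False).get(15))
```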

  14. Distinct synaptic dynamics of heterogeneous pacemaker neurons in an oscillatory network

    PubMed Central

    Rabbah, Pascale; Nadim, Farzan

    2008-01-01

    Many rhythmically active networks involve heterogeneous populations of pacemaker neurons with potentially distinct synaptic outputs that can be differentially targeted by extrinsic inputs or neuromodulators, thereby increasing possible network output patterns. In order to understand the roles of heterogeneous pacemaker neurons, we characterized differences in synaptic output from the anterior burster (AB) and pyloric dilator (PD) neurons in the lobster pyloric network. These intrinsically distinct neurons are strongly electrically coupled, co-active and constitute the pyloric pacemaker ensemble. During pyloric oscillations, the pacemaker neurons produce compound inhibitory synaptic connections to the follower lateral pyloric (LP) and pyloric constrictor (PY) neurons, which fire out of phase with AB/PD and with different delay times. Using pharmacological blockers, we separated the synapses originating from the AB and PD neurons and investigated their temporal dynamics. These synapses exhibited distinct short-term dynamics, depending on the presynaptic neuron type, and had different relative contributions to the total synaptic output depending on waveform shape and cycle frequency. However, paired comparisons revealed that the amplitude or dynamics of synapses from either the AB or PD neuron did not depend on the postsynaptic neuron type, LP or PY. To address the functional implications of these findings, we examined the correlation between synaptic inputs from the pacemakers and the burst onset phase of the LP and PY neurons in the ongoing pyloric rhythm. These comparisons showed that the activity of the LP and PY neurons are influenced by the peak phase and amplitude of the synaptic inputs from the pacemaker neurons. PMID:17202242

  15. Discontinuous Galerkin finite element method for solving population density functions of cortical pyramidal and thalamic neuronal populations.

    PubMed

    Huang, Chih-Hsu; Lin, Chou-Ching K; Ju, Ming-Shaung

    2015-02-01

    Compared with the Monte Carlo method, the population density method is efficient for modeling the collective dynamics of neuronal populations in the human brain. In this method, a population density function describes the probabilistic distribution of states of all neurons in the population and it is governed by a hyperbolic partial differential equation. In the past, the problem was mainly solved by using the finite difference method. In a previous study, a continuous Galerkin finite element method was found to perform better than the finite difference method for solving the hyperbolic partial differential equation; however, the population density function often has discontinuities and both methods suffer from numerical stability problems. The goal of this study is to improve the numerical stability of the solution using the discontinuous Galerkin finite element method. To test the performance of the new approach, the interaction of a population of cortical pyramidal neurons and a population of thalamic neurons was simulated. The numerical results showed good agreement between the results of the discontinuous Galerkin finite element and Monte Carlo methods. The convergence and accuracy of the solutions are excellent. The numerical stability problem could be resolved using the discontinuous Galerkin finite element method, which has the total-variation-diminishing property. This efficient approach will be employed to simulate the electroencephalogram or the dynamics of the thalamocortical network, which involves three populations, namely, thalamic reticular neurons, thalamocortical neurons and cortical pyramidal neurons.
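
    For context, a schematic form of the hyperbolic conservation law governing a one-dimensional population density is given below: the density of neurons over membrane potential is advected by the single-neuron drift, the flux through threshold defines the population firing rate, and that flux is re-injected at the reset potential. This is a generic textbook form; the model solved in the paper has additional structure (for example, synaptic input terms).

```latex
% Schematic population density equation on v <= v_th (generic form; the
% paper's model includes further terms).
\frac{\partial \rho(v,t)}{\partial t}
  + \frac{\partial}{\partial v}\bigl[F(v,t)\,\rho(v,t)\bigr]
  = r(t)\,\delta(v - v_{\mathrm{reset}}),
\qquad
r(t) = F(v_{\mathrm{th}},t)\,\rho(v_{\mathrm{th}},t).
```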

  16. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    NASA Astrophysics Data System (ADS)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  17. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    PubMed Central

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies. PMID:28112212

  18. Quantitative 3D investigation of Neuronal network in mouse spinal cord model.

    PubMed

    Bukreeva, I; Campi, G; Fratini, M; Spanò, R; Bucci, D; Battaglia, G; Giove, F; Bravin, A; Uccelli, A; Venturi, C; Mastrogiacomo, M; Cedola, A

    2017-01-23

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a "database" for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  19. Matrix stiffness modulates formation and activity of neuronal networks of controlled architectures.

    PubMed

    Lantoine, Joséphine; Grevesse, Thomas; Villers, Agnès; Delhaye, Geoffrey; Mestdagh, Camille; Versaevel, Marie; Mohammed, Danahe; Bruyère, Céline; Alaimo, Laura; Lacour, Stéphanie P; Ris, Laurence; Gabriele, Sylvain

    2016-05-01

    The ability to construct easily in vitro networks of primary neurons organized with imposed topologies is required for neural tissue engineering as well as for the development of neuronal interfaces with desirable characteristics. However, accumulating evidence suggests that the mechanical properties of the culture matrix can modulate important neuronal functions such as growth, extension, branching and activity. Here we designed robust and reproducible laminin-polylysine grid micropatterns on cell culture substrates that have similar biochemical properties but a 100-fold difference in Young's modulus to investigate the role of the matrix rigidity on the formation and activity of cortical neuronal networks. We found that cell bodies of primary cortical neurons gradually accumulate in circular islands, whereas axonal extensions spread on linear tracks to connect circular islands. Our findings indicate that migration of cortical neurons is enhanced on soft substrates, leading to a faster formation of neuronal networks. Furthermore, the pre-synaptic density was two times higher on stiff substrates and consistently the number of action potentials and miniature synaptic currents was enhanced on stiff substrates. Taken together, our results provide compelling evidence to indicate that matrix stiffness is a key parameter to modulate the growth dynamics, synaptic density and electrophysiological activity of cortical neuronal networks, thus providing useful information on scaffold design for neural tissue engineering.

  20. The Timing for Neuronal Maturation in the Adult Hippocampus Is Modulated by Local Network Activity

    PubMed Central

    Piatti, Verónica C.; Davies-Sala, M. Georgina; Espósito, M. Soledad; Mongiat, Lucas A.; Trinchero, Mariela F.; Schinder, Alejandro F.

    2013-01-01

    The adult hippocampus continuously generates new cohorts of immature neurons with increased excitability and plasticity. The window for the expression of those unique properties in each cohort is determined by the time required to acquire a mature neuronal phenotype. Here, we show that local network activity regulates the rate of maturation of adult-born neurons along the septotemporal axis of the hippocampus. Confocal microscopy and patch-clamp recordings were combined to assess marker expression, morphological development, and functional properties in retrovirally labeled neurons over time. The septal dentate gyrus displayed higher levels of basal network activity and faster rates of newborn neuron maturation than the temporal region. Voluntary exercise enhanced network activity only in the temporal region and, in turn, accelerated neuronal development. Finally, neurons developing within a highly active environment exhibited a delayed maturation when their intrinsic electrical activity was reduced by the cell-autonomous overexpression of Kir2.1, an inward-rectifying potassium channel. Our findings reveal a novel type of activity-dependent plasticity acting on the timing of neuronal maturation and functional integration of newly generated neurons along the longitudinal axis of the adult hippocampus. PMID:21613484

  1. Intrinsic protective mechanisms of the neuron-glia network against glioma invasion.

    PubMed

    Iwadate, Yasuo; Fukuda, Kazumasa; Matsutani, Tomoo; Saeki, Naokatsu

    2016-04-01

    Gliomas arising in the brain parenchyma infiltrate into the surrounding brain and break down established complex neuron-glia networks. However, mounting evidence suggests that initially the network microenvironment of the adult central nervous system (CNS) is innately non-permissive to glioma cell invasion. The main players are inhibitory molecules in CNS myelin, as well as proteoglycans associated with astrocytes. Neural stem cells, and neurons themselves, possess inhibitory functions against neighboring tumor cells. These mechanisms have evolved to protect the established neuron-glia network, which is necessary for brain function. Greater insight into the interaction between glioma cells and the surrounding neuron-glia network is crucial for developing new therapies for treating these devastating tumors while preserving the important and complex neural functions of patients. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Self-organization of a neural network with heterogeneous neurons enhances coherence and stochastic resonance

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Zhang, Jie; Small, Michael

    2009-03-01

    Most network models for neural behavior assume a predefined network topology and consist of almost identical elements exhibiting little heterogeneity. In this paper, we propose a self-organized network consisting of heterogeneous neurons with different behaviors or degrees of excitability. The synaptic connections evolve according to the spike-timing dependent plasticity mechanism and finally a sparse and active-neuron-dominant structure is observed. That is, strong connections are mainly distributed to the synapses from active neurons to inactive ones. We argue that this self-emergent topology essentially reflects the competition of different neurons and encodes the heterogeneity. This structure is shown to significantly enhance the coherence resonance and stochastic resonance of the entire network, indicating its high efficiency in information processing.
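
    The spike-timing dependent plasticity mechanism that drives the self-organization described above can be summarized with a standard pair-based update. The sketch below is a generic illustration with assumed amplitudes and time constants, not the specific rule or parameters used in the paper.

```python
# Minimal pair-based STDP sketch (illustrative parameters, not the paper's):
# potentiate when the presynaptic spike precedes the postsynaptic one, depress otherwise.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # learning rates (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (assumed)
W_MAX = 1.0

def stdp_dw(dt):
    """Weight change for a single pre/post spike pair with dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

def apply_stdp(w, pre_spikes, post_spikes):
    """Accumulate all-to-all pair contributions and clip the weight to [0, W_MAX]."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_post - t_pre)
    return float(np.clip(w, 0.0, W_MAX))

if __name__ == "__main__":
    # A presynaptic neuron that usually fires ~5 ms before the postsynaptic one is strengthened.
    print(apply_stdp(0.5, pre_spikes=[10, 50, 90], post_spikes=[15, 55, 95]))
```

    Under such a rule, synapses from consistently earlier-firing (more active) neurons onto later-firing ones tend to grow, which is one way the active-neuron-dominant structure described in the abstract can emerge.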

  3. Physical and biological regulation of neuron regenerative growth and network formation on recombinant dragline silks.

    PubMed

    An, Bo; Tang-Schomer, Min D; Huang, Wenwen; He, Jiuyang; Jones, Justin A; Lewis, Randolph V; Kaplan, David L

    2015-04-01

    Recombinant spider silks produced in transgenic goat milk were studied as cell culture matrices for neuronal growth. Major ampullate spidroin 1 (MaSp1) supported neuronal growth, axon extension and network connectivity, with cell morphology comparable to the gold standard poly-lysine. In addition, neurons growing on MaSp1 films had increased neural cell adhesion molecule (NCAM) expression at both mRNA and protein levels. The results indicate that MaSp1 films present useful surface charge and substrate stiffness to support the growth of primary rat cortical neurons. Moreover, a putative neuron-specific surface binding sequence GRGGL within MaSp1 may contribute to the biological regulation of neuron growth. These findings indicate that MaSp1 could regulate neuron growth through its physical and biological features. This dual regulation mode of MaSp1 could provide an alternative strategy for generating functional silk materials for neural tissue engineering.

  4. Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons.

    PubMed

    Xiao, Dongsheng; Vanni, Matthieu P; Mitelut, Catalin C; Chan, Allen W; LeDue, Jeffrey M; Xie, Yicheng; Chen, Andrew Cn; Swindale, Nicholas V; Murphy, Timothy H

    2017-02-04

    Understanding the basis of brain function requires knowledge of cortical operations over wide spatial scales, but also within the context of single neurons. In vivo, wide-field GCaMP imaging and sub-cortical/cortical cellular electrophysiology were used in mice to investigate relationships between spontaneous single neuron spiking and mesoscopic cortical activity. We make use of a rich set of cortical activity motifs that are present in spontaneous activity in anesthetized and awake animals. A mesoscale spike-triggered averaging procedure allowed the identification of motifs that are preferentially linked to individual spiking neurons by employing genetically targeted indicators of neuronal activity. Thalamic neurons predicted and reported specific cycles of wide-scale cortical inhibition/excitation. In contrast, spike-triggered maps derived from single cortical neurons yielded spatio-temporal maps expected for regional cortical consensus function. This approach can define network relationships between any point source of neuronal spiking and mesoscale cortical maps.
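
    A spike-triggered averaging procedure of the general kind described can be sketched as follows: average imaging frames in a window around each spike of a single unit. The array shapes, sampling rate, and the `spike_triggered_average` helper are illustrative assumptions, not the authors' pipeline.

```python
# Sketch of mesoscale spike-triggered averaging (variable names are hypothetical):
# average wide-field imaging frames in a window around each spike of one neuron.
import numpy as np

def spike_triggered_average(frames, frame_times, spike_times, window=(-0.5, 0.5)):
    """
    frames:      T x H x W array of cortical imaging frames
    frame_times: length-T array of frame timestamps (s)
    spike_times: spike times (s) of a single sub-cortical or cortical unit
    window:      averaging window around each spike (s)
    Returns the spike-triggered average (n_lags x H x W) and the lag times.
    """
    dt = np.median(np.diff(frame_times))
    lags = np.arange(int(window[0] / dt), int(window[1] / dt) + 1)
    sta = np.zeros((lags.size,) + frames.shape[1:])
    count = 0
    for t_spk in spike_times:
        idx = np.searchsorted(frame_times, t_spk)
        if idx + lags[0] < 0 or idx + lags[-1] >= frames.shape[0]:
            continue                       # skip spikes too close to the recording edges
        sta += frames[idx + lags]
        count += 1
    return sta / max(count, 1), lags * dt

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = rng.normal(size=(2000, 32, 32))   # synthetic stand-in for GCaMP frames
    times = np.arange(2000) * 0.02             # 50 Hz imaging (assumed)
    spikes = rng.uniform(1.0, 39.0, size=200)
    sta, lags = spike_triggered_average(frames, times, spikes)
    print(sta.shape, lags[0], lags[-1])
```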

  5. Single-neuron discharge properties and network activity in dissociated cultures of neocortex.

    PubMed

    Giugliano, M; Darbon, P; Arsiero, M; Lüscher, H-R; Streit, J

    2004-08-01

    Cultures of neurons from rat neocortex exhibit spontaneous, temporally patterned, network activity. Such a distributed activity in vitro constitutes a possible framework for combining theoretical and experimental approaches, linking the single-neuron discharge properties to network phenomena. In this work, we addressed the issue of closing the loop, from the identification of the single-cell discharge properties to the prediction of collective network phenomena. Thus, we compared these predictions with the spontaneously emerging network activity in vitro, detected by substrate arrays of microelectrodes. To this end, we characterized the single-cell discharge properties in response to Gaussian-distributed noisy currents, under pharmacological blockade of synaptic transmission. Such stochastic currents emulate a realistic input from the network. The mean (m) and variance (s²) of the injected current were varied independently, reminiscent of the extended mean-field description of a variety of possible presynaptic network organizations and mean activity levels, and the neuronal response was evaluated in terms of the steady-state mean firing rate (f). Experimental current-to-spike-rate responses f(m, s²) were similar to those of neurons in brain slices, and could be quantitatively described by leaky integrate-and-fire (IF) point neurons. The identified model parameters were then used in numerical simulations of a network of IF neurons. Such a network reproduced collective activity matching the spontaneous irregular population bursting observed in cultured networks. We finally interpret this collective activity and its link to model details using mean-field theory. We conclude that the IF model is an adequate minimal description of synaptic integration and neuronal excitability, when collective network activities are considered in vitro.
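
    As a rough illustration of how a current-to-rate response f(m, s²) can be measured in a leaky integrate-and-fire model, the sketch below drives a LIF neuron with an Ornstein-Uhlenbeck current of prescribed mean and variance. All parameters (capacitance, leak, thresholds, correlation time) are assumed values, not those identified in the study.

```python
# Sketch of the current-to-rate response f(m, s^2) of a leaky integrate-and-fire neuron
# driven by a noisy current with prescribed mean and variance (an Ornstein-Uhlenbeck
# process stands in for network-like input; all parameters are illustrative).
import numpy as np

def lif_rate(mean_pA, var_pA2, t_sim=10.0, dt=1e-4, seed=0):
    """Steady-state firing rate (Hz) for a noisy current of given mean (pA) and variance (pA^2)."""
    C, gL, EL = 250e-12, 25e-9, -70e-3           # capacitance, leak conductance, rest (assumed)
    V_th, V_reset, t_ref = -50e-3, -65e-3, 2e-3
    tau_I = 5e-3                                  # correlation time of the input current (assumed)
    rng = np.random.default_rng(seed)
    mu, sigma = mean_pA * 1e-12, np.sqrt(var_pA2) * 1e-12
    v, I, refractory, spikes = EL, mu, 0.0, 0
    for _ in range(int(t_sim / dt)):
        # Ornstein-Uhlenbeck current: stationary mean mu, stationary variance sigma^2
        I += (mu - I) * dt / tau_I + sigma * np.sqrt(2 * dt / tau_I) * rng.standard_normal()
        if refractory > 0:
            refractory -= dt
            continue
        v += dt * (-gL * (v - EL) + I) / C
        if v >= V_th:
            spikes += 1
            v, refractory = V_reset, t_ref
    return spikes / t_sim

if __name__ == "__main__":
    # The rate grows with either the mean or the variance of the injected current.
    for m, s2 in [(400, 0), (400, 1e5), (700, 1e5)]:
        print(f"m = {m} pA, s^2 = {s2:g} pA^2  ->  {lif_rate(m, s2):5.1f} Hz")
```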

  6. Sustained synchronized neuronal network activity in a human astrocyte co-culture system

    PubMed Central

    Kuijlaars, Jacobine; Oyelami, Tutu; Diels, Annick; Rohrbacher, Jutta; Versweyveld, Sofie; Meneghello, Giulia; Tuefferd, Marianne; Verstraelen, Peter; Detrez, Jan R.; Verschuuren, Marlies; De Vos, Winnok H.; Meert, Theo; Peeters, Pieter J.; Cik, Miroslav; Nuydens, Rony; Brône, Bert; Verheyen, An

    2016-01-01

    Impaired neuronal network function is a hallmark of neurodevelopmental and neurodegenerative disorders such as autism, schizophrenia, and Alzheimer’s disease and is typically studied using genetically modified cellular and animal models. Weak predictive capacity and poor translational value of these models call for better human-derived in vitro models. The implementation of human induced pluripotent stem cells (hiPSCs) allows studying pathologies in differentiated disease-relevant and patient-derived neuronal cells. However, the differentiation process and growth conditions of hiPSC-derived neurons are non-trivial. In order to study neuronal network formation and (mal)function in a fully humanized system, we have established an in vitro co-culture model of hiPSC-derived cortical neurons and human primary astrocytes that recapitulates neuronal network synchronization and connectivity within three to four weeks after final plating. Live cell calcium imaging, electrophysiology and high-content image analyses revealed an increased maturation of network functionality and synchronicity over time for co-cultures compared to neuronal monocultures. The cells express GABAergic and glutamatergic markers and respond to inhibitors of both neurotransmitter pathways in a functional assay. The combination of this co-culture model with quantitative imaging of network morphofunction is amenable to high-throughput screening for lead discovery and drug optimization for neurological diseases. PMID:27819315

  7. High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels.

    PubMed

    Müller, Jan; Ballini, Marco; Livi, Paolo; Chen, Yihui; Radivojevic, Milos; Shadmani, Amir; Viswam, Vijay; Jones, Ian L; Fiscella, Michele; Diggelmann, Roland; Stettler, Alexander; Frey, Urs; Bakkum, Douglas J; Hierlemann, Andreas

    2015-07-07

    Studies on information processing and learning properties of neuronal networks would benefit from simultaneous and parallel access to the activity of a large fraction of all neurons in such networks. Here, we present a CMOS-based device, capable of simultaneously recording the electrical activity of over a thousand cells in in vitro neuronal networks. The device provides sufficiently high spatiotemporal resolution to enable, at the same time, access to neuronal preparations at the subcellular, cellular, and network levels. The key feature is a rapidly reconfigurable array of 26 400 microelectrodes arranged at low pitch (17.5 μm) within a large overall sensing area (3.85 × 2.10 mm²). An arbitrary subset of the electrodes can be simultaneously connected to 1024 low-noise readout channels as well as 32 stimulation units. Each electrode or electrode subset can be used to electrically stimulate or record the signals of virtually any neuron on the array. We demonstrate the applicability and potential of this device for various experimental paradigms: large-scale recordings from whole networks of neurons as well as investigations of axonal properties of individual neurons.

  8. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    PubMed

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
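
    The core of the mirrored-STDP idea, as stated in the abstract, is that feedback connections use a temporally reversed STDP window, so that a reciprocal feedforward/feedback pair receives identical updates for every spike pair. The sketch below illustrates this with a generic pair-based window; the amplitudes and time constants are assumed, and the paper's full rule may differ.

```python
# Sketch of the mirrored-STDP (mSTDP) idea: the feedforward synapse x->h uses a standard
# pair-based STDP window, while the reciprocal feedback synapse h->x uses a temporally
# reversed window, so both weights receive identical updates for every spike pair.
# Amplitude and time constant are illustrative only.
import numpy as np

A, TAU = 0.01, 20.0  # learning rate and STDP time constant in ms (assumed)

def stdp(dt):
    """Standard window: potentiate when pre leads post (dt = t_post - t_pre > 0)."""
    return A * np.sign(dt) * np.exp(-abs(dt) / TAU) if dt != 0 else 0.0

def reversed_stdp(dt):
    """Temporally mirrored window used at feedback connections."""
    return stdp(-dt)

if __name__ == "__main__":
    for t_x, t_h in [(10.0, 18.0), (30.0, 26.0)]:
        dw_ff = stdp(t_h - t_x)            # feedforward synapse: pre = x, post = h
        dw_fb = reversed_stdp(t_x - t_h)   # feedback synapse:    pre = h, post = x
        print(f"t_x={t_x}, t_h={t_h}:  dW_ff={dw_ff:+.4f}  dW_fb={dw_fb:+.4f}")
```

    Because the feedback synapse sees the spike pair in the opposite temporal order, mirroring the window makes dW_fb equal to dW_ff, which is exactly the "identical changes at feedforward and feedback connections" requirement mentioned above.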

  9. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons

    PubMed Central

    Burbank, Kendra S.

    2015-01-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field’s Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks. PMID:26633645

  10. Efficiency characterization of a large neuronal network: A causal information approach

    NASA Astrophysics Data System (ADS)

    Montani, Fernando; Deleglise, Emilia B.; Rosso, Osvaldo A.

    2014-05-01

    When inhibitory neurons constitute about 40% of neurons, they could have an important antinociceptive role, as they would easily regulate the level of activity of other neurons. We consider a simple network of cortical spiking neurons with axonal conduction delays and spike timing dependent plasticity, representative of a cortical column or hypercolumn with a large proportion of inhibitory neurons. Each neuron fires following Hodgkin-Huxley-like dynamics and is randomly interconnected to other neurons. The network dynamics is investigated by estimating the Bandt-Pompe probability distribution function associated with the interspike intervals and taking different degrees of interconnectivity across neurons. More specifically, we take into account the fine temporal “structures” of the complex neuronal signals not just by using the probability distributions associated with the interspike intervals, but instead considering much more subtle measures accounting for their causal information: the Shannon permutation entropy, Fisher permutation information and permutation statistical complexity. This allows us to investigate how the information of the system might saturate to a finite value as the degree of interconnectivity across neurons grows, inferring the emergent dynamical properties of the system.
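
    The Bandt-Pompe construction referred to here can be sketched directly: count ordinal patterns of the interspike-interval sequence and compute a normalized Shannon permutation entropy. The embedding dimension and the test signals below are illustrative choices, not those of the paper.

```python
# Sketch of the Bandt-Pompe approach: build the ordinal-pattern probability distribution
# of an interspike-interval (ISI) sequence and compute its normalized Shannon
# permutation entropy (the embedding dimension D is an illustrative choice).
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_distribution(series, D=4):
    """Probability of each ordinal pattern of length D in the series."""
    patterns = {p: 0 for p in permutations(range(D))}
    for i in range(len(series) - D + 1):
        patterns[tuple(np.argsort(series[i:i + D]))] += 1
    counts = np.array(list(patterns.values()), float)
    return counts / counts.sum()

def permutation_entropy(series, D=4):
    """Shannon entropy of the ordinal distribution, normalized to [0, 1]."""
    p = ordinal_distribution(series, D)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / log(factorial(D)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    poisson_isis = rng.exponential(0.05, size=5000)         # irregular (Poisson-like) firing
    regular_isis = 0.05 + 0.001 * np.sin(np.arange(5000))   # nearly periodic firing
    print("permutation entropy, Poisson-like ISIs:", round(permutation_entropy(poisson_isis), 3))
    print("permutation entropy, regular ISIs:     ", round(permutation_entropy(regular_isis), 3))
```

    The Fisher permutation information and the permutation statistical complexity mentioned in the abstract are further functionals of the same ordinal distribution.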

  11. Patterning human neuronal networks on photolithographically engineered silicon dioxide substrates functionalized with glial analogues.

    PubMed

    Hughes, Mark A; Brennan, Paul M; Bunting, Andrew S; Cameron, Katherine; Murray, Alan F; Shipston, Mike J

    2014-05-01

    Interfacing neurons with silicon semiconductors is a challenge being tackled through various bioengineering approaches. Such constructs inform our understanding of neuronal coding and learning and ultimately guide us toward creating intelligent neuroprostheses. A fundamental prerequisite is to dictate the spatial organization of neuronal cells. We sought to pattern neurons using photolithographically defined arrays of polymer parylene-C, activated with fetal calf serum. We used a purified human neuronal cell line [Lund human mesencephalic (LUHMES)] to establish whether neurons remain viable when isolated on-chip or whether they require a supporting cell substrate. When cultured in isolation, LUHMES neurons failed to pattern and did not show any morphological signs of differentiation. We therefore sought a cell type with which to prepattern parylene regions, hypothesizing that this cellular template would enable secondary neuronal adhesion and network formation. From a range of cell lines tested, human embryonal kidney (HEK) 293 cells patterned with highest accuracy. LUHMES neurons adhered to pre-established HEK 293 cell clusters and this coculture environment promoted morphological differentiation of neurons. Neurites extended between islands of adherent cell somata, creating an orthogonally arranged neuronal network. HEK 293 cells appear to fulfill a role analogous to glia, dictating cell adhesion, and generating an environment conducive to neuronal survival. We next replaced HEK 293 cells with slower growing glioma-derived precursors. These primary human cells patterned accurately on parylene and provided a similarly effective scaffold for neuronal adhesion. These findings advance the use of this microfabrication-compatible platform for neuronal patterning. Copyright © 2013 Wiley Periodicals, Inc.

  12. Patterning human neuronal networks on photolithographically engineered silicon dioxide substrates functionalized with glial analogues

    PubMed Central

    Hughes, Mark A; Brennan, Paul M; Bunting, Andrew S; Cameron, Katherine; Murray, Alan F; Shipston, Mike J

    2014-01-01

    Interfacing neurons with silicon semiconductors is a challenge being tackled through various bioengineering approaches. Such constructs inform our understanding of neuronal coding and learning and ultimately guide us toward creating intelligent neuroprostheses. A fundamental prerequisite is to dictate the spatial organization of neuronal cells. We sought to pattern neurons using photolithographically defined arrays of polymer parylene-C, activated with fetal calf serum. We used a purified human neuronal cell line [Lund human mesencephalic (LUHMES)] to establish whether neurons remain viable when isolated on-chip or whether they require a supporting cell substrate. When cultured in isolation, LUHMES neurons failed to pattern and did not show any morphological signs of differentiation. We therefore sought a cell type with which to prepattern parylene regions, hypothesizing that this cellular template would enable secondary neuronal adhesion and network formation. From a range of cell lines tested, human embryonal kidney (HEK) 293 cells patterned with highest accuracy. LUHMES neurons adhered to pre-established HEK 293 cell clusters and this coculture environment promoted morphological differentiation of neurons. Neurites extended between islands of adherent cell somata, creating an orthogonally arranged neuronal network. HEK 293 cells appear to fulfill a role analogous to glia, dictating cell adhesion, and generating an environment conducive to neuronal survival. We next replaced HEK 293 cells with slower growing glioma-derived precursors. These primary human cells patterned accurately on parylene and provided a similarly effective scaffold for neuronal adhesion. These findings advance the use of this microfabrication-compatible platform for neuronal patterning. © 2013 The Authors. Journal of Biomedical Materials Research Part A published by Wiley Periodicals, Inc. J Biomed Mater Res Part A 102A:1350–1360, 2014. PMID:23733444

  13. DOC2B and Munc13-1 Differentially Regulate Neuronal Network Activity

    PubMed Central

    Lavi, Ayal; Sheinin, Anton; Shapira, Ronit; Zelmanoff, Daniel; Ashery, Uri

    2014-01-01

    Alterations in the levels of synaptic proteins affect synaptic transmission and synaptic plasticity. However, the precise effects on neuronal network activity are still enigmatic. Here, we utilized a microelectrode array (MEA) to elucidate how manipulation of the presynaptic release process affects the activity of neuronal networks. By combining pharmacological tools and genetic manipulation of synaptic proteins, we show that overexpression of DOC2B and Munc13-1, proteins known to promote vesicular maturation and release, elicits opposite effects on the activity of the neuronal network. Although both cause an increase in the overall number of spikes, the distribution of spikes is different. While DOC2B enhances, Munc13-1 reduces the firing rate within bursts of spikes throughout the network; however, Munc13-1 increases the rate of network bursts. DOC2B's effects were mimicked by strontium, which elevates asynchronous release, but not by a DOC2B mutant that enhances the spontaneous release rate. This suggests for the first time that increased asynchronous release at the single-neuron level promotes bursting activity at the network level. This innovative study demonstrates the complementary role of the network level in explaining the physiological relevance of the cellular activity of presynaptic proteins and the transformation of synaptic release manipulation from the neuron to the network level. PMID:23537531

  14. DOC2B and Munc13-1 differentially regulate neuronal network activity.

    PubMed

    Lavi, Ayal; Sheinin, Anton; Shapira, Ronit; Zelmanoff, Daniel; Ashery, Uri

    2014-09-01

    Alterations in the levels of synaptic proteins affect synaptic transmission and synaptic plasticity. However, the precise effects on neuronal network activity are still enigmatic. Here, we utilized a microelectrode array (MEA) to elucidate how manipulation of the presynaptic release process affects the activity of neuronal networks. By combining pharmacological tools and genetic manipulation of synaptic proteins, we show that overexpression of DOC2B and Munc13-1, proteins known to promote vesicular maturation and release, elicits opposite effects on the activity of the neuronal network. Although both cause an increase in the overall number of spikes, the distribution of spikes is different. While DOC2B enhances, Munc13-1 reduces the firing rate within bursts of spikes throughout the network; however, Munc13-1 increases the rate of network bursts. DOC2B's effects were mimicked by strontium, which elevates asynchronous release, but not by a DOC2B mutant that enhances the spontaneous release rate. This suggests for the first time that increased asynchronous release at the single-neuron level promotes bursting activity at the network level. This innovative study demonstrates the complementary role of the network level in explaining the physiological relevance of the cellular activity of presynaptic proteins and the transformation of synaptic release manipulation from the neuron to the network level. © The Author 2013. Published by Oxford University Press. All rights reserved.

  15. Stability Analysis of Asynchronous States in Neuronal Networks with Conductance-Based Inhibition

    NASA Astrophysics Data System (ADS)

    Leibold, Christian

    2004-11-01

    Oscillations in networks of inhibitory interneurons have been reported at various sites of the brain and are thought to play a fundamental role in neuronal processing. This Letter provides a self-contained analytical framework that allows numerically efficient calculations of the population activity of a network of conductance-based integrate-and-fire neurons that are coupled through inhibitory synapses. Based on a normalization equation, this Letter introduces a novel stability criterion for a network state of asynchronous activity and discusses its perturbations. The analysis shows that, although often neglected, the reversal potential of synaptic inhibition has a strong influence on the stability as well as the frequency of network oscillations.
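
    To see why the inhibitory reversal potential matters in a conductance-based model, note that the synaptic current is g_syn(t) * (V - E_rev), so the same conductance can be shunting or hyperpolarizing depending on E_rev. The sketch below (not the Letter's analytical framework) integrates a single conductance-based membrane receiving one inhibitory conductance pulse, for two assumed reversal potentials; all parameters are illustrative.

```python
# Illustrative sketch: in a conductance-based neuron the inhibitory current is
# g_syn(t) * (V - E_rev), so the effect of the same conductance pulse depends on E_rev.
import numpy as np

def membrane_trace(E_rev, t_sim=0.2, dt=1e-4):
    C, gL, EL = 200e-12, 10e-9, -60e-3     # capacitance, leak conductance, rest (assumed)
    I_drive = 0.1e-9                       # constant depolarizing drive (assumed)
    v, g_syn, tau_syn = EL, 0.0, 5e-3
    trace = []
    for step in range(int(t_sim / dt)):
        t = step * dt
        if abs(t - 0.1) < dt / 2:          # one inhibitory conductance pulse at t = 100 ms
            g_syn += 10e-9
        g_syn -= dt * g_syn / tau_syn
        I_syn = g_syn * (v - E_rev)        # conductance-based inhibition
        v += dt * (-gL * (v - EL) + I_drive - I_syn) / C
        trace.append(v)
    return np.array(trace)

if __name__ == "__main__":
    # Shunting (E_rev near rest) vs. hyperpolarizing (E_rev well below rest) inhibition.
    for E_rev in (-60e-3, -80e-3):
        v = membrane_trace(E_rev)
        post_pulse = v[1000:]              # samples after the 100 ms pulse (dt = 0.1 ms)
        print(f"E_rev = {E_rev*1e3:.0f} mV -> minimum V after the pulse: {post_pulse.min()*1e3:.1f} mV")
```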

  16. Modeling of multisensory convergence with a network of spiking neurons: a reverse engineering approach.

    PubMed

    Lim, Hun Ki; Keniston, Leslie P; Cios, Krzysztof J

    2011-07-01

    Multisensory processing in the brain underlies a wide variety of perceptual phenomena, but little is known about the underlying mechanisms of how multisensory neurons are formed. This lack of knowledge is due to the difficulty of manipulating and testing the parameters of multisensory convergence, the first and definitive step in the multisensory process, in biological experiments. Therefore, by using a computational model of multisensory convergence, this study seeks to provide insight into the mechanisms of multisensory convergence. To reverse-engineer multisensory convergence, we used a biologically realistic neuron model and a biology-inspired plasticity rule, but did not make any a priori assumptions about multisensory properties of neurons in the network. The network consisted of two separate projection areas that converged upon neurons in a third area, and stimulation involved activation of one of the projection areas (or the other) or their combination. Experiments consisted of two parts: network training and multisensory simulation. Analyses were performed, first, to find multisensory properties in the simulated networks; second, to reveal properties of the network using a graph-theoretical approach; and third, to generate hypotheses related to multisensory convergence. The results showed that the generation of multisensory neurons was related to the topological properties of the network; in particular, the strengths of connections after training were found to play an important role in forming, and thus distinguishing, multisensory neuron types.

  17. A microfluidic platform for controlled biochemical stimulation of twin neuronal networks.

    PubMed

    Biffi, Emilia; Piraino, Francesco; Pedrocchi, Alessandra; Fiore, Gianfranco B; Ferrigno, Giancarlo; Redaelli, Alberto; Menegon, Andrea; Rasponi, Marco

    2012-06-01

    Spatially and temporally resolved delivery of soluble factors is a key feature for pharmacological applications. In this framework, microfluidics coupled to multisite electrophysiology offers great advantages in neuropharmacology and toxicology. In this work, a microfluidic device for biochemical stimulation of neuronal networks was developed. A micro-chamber for cell culturing, previously developed and tested for long-term neuronal growth by our group, was provided with a thin wall, which partially divided the cell culture region into two sub-compartments. The device was reversibly coupled to a flat microelectrode array and used to culture primary neurons in the same microenvironment. We demonstrated that the two fluidically connected compartments were able to give rise to two parallel neuronal networks with similar electrophysiological activity that were nonetheless functionally independent. Furthermore, the device allowed the outlet port to be connected to a syringe pump, transforming the static culture chamber into a perfused one. At 14 days in vitro, sub-networks were independently stimulated with a test molecule, tetrodotoxin, a neurotoxin known to block action potentials, by means of continuous delivery. Electrical activity recordings proved the ability of the device configuration to selectively stimulate each neuronal network individually. The proposed microfluidic approach represents an innovative methodology to perform biological, pharmacological, and electrophysiological experiments on neuronal networks. Indeed, it allows for controlled delivery of substances to cells, and it overcomes the limitations of standard drug stimulation techniques. Finally, the twin network configuration reduces biological variability, which has important implications for pharmacological and drug screening.

  18. Dynamics and effective topology underlying synchronization in networks of cortical neurons.

    PubMed

    Eytan, Danny; Marom, Shimon

    2006-08-16

    Cognitive processes depend on synchronization and propagation of electrical activity within and between neuronal assemblies. In vivo measurements show that the size of individual assemblies depends on their function and varies considerably, but the timescale of assembly activation is in the range of 0.1-0.2 s and is primarily independent of assembly size. Here we use an in vitro experimental model of cortical assemblies to characterize the process underlying the timescale of synchronization, its relationship to the effective topology of connectivity within an assembly, and its impact on propagation of activity within and between assemblies. We show that the basic mode of assembly activation, "network spike," is a threshold-governed, synchronized population event of 0.1-0.2 s duration and follows the logistics of neuronal recruitment in an effectively scale-free connected network. Accordingly, the sequence of neuronal activation within a network spike is nonrandom and hierarchical; a small subset of neurons is consistently recruited tens of milliseconds before others. Theory predicts that scale-free topology allows for synchronization time that does not increase markedly with network size; our experiments with networks of different densities support this prediction. The activity of early-to-fire neurons reliably forecasts an upcoming network spike and provides means for expedited propagation between assemblies. We demonstrate this capacity by observing the dynamics of two artificially coupled assemblies in vitro, using neuronal activity of one as a trigger for electrical stimulation of the other.
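
    A simple way to operationalize the "network spike" and the early-to-fire ordering described here is to threshold the fraction of simultaneously active neurons per time bin and then rank neurons by their mean first-spike latency within each event. The bin size, threshold, helper names, and toy data in the sketch below are illustrative assumptions, not the authors' analysis.

```python
# Sketch of detecting network spikes from multi-electrode spike trains: bin the population
# activity, mark threshold crossings, and rank neurons by how early they fire per event.
import numpy as np

def detect_network_spikes(spike_trains, bin_ms=25, thresh_frac=0.25, t_max_ms=None):
    """spike_trains: list of spike-time arrays (ms), one per neuron.
    Returns onset times (ms) of bins in which >= thresh_frac of neurons fired."""
    if t_max_ms is None:
        t_max_ms = max(st.max() for st in spike_trains if len(st))
    edges = np.arange(0, t_max_ms + bin_ms, bin_ms)
    active = np.zeros((len(spike_trains), len(edges) - 1), bool)
    for i, st in enumerate(spike_trains):
        counts, _ = np.histogram(st, bins=edges)
        active[i] = counts > 0
    frac = active.mean(axis=0)
    return edges[:-1][frac >= thresh_frac]

def recruitment_order(spike_trains, onsets, window_ms=200):
    """Mean latency of each neuron's first spike after each network-spike onset."""
    latencies = np.full(len(spike_trains), np.nan)
    for i, st in enumerate(spike_trains):
        firsts = [st[(st >= t0) & (st < t0 + window_ms)][:1] - t0 for t0 in onsets]
        valid = [f for f in firsts if len(f)]
        latencies[i] = np.concatenate(valid).mean() if valid else np.nan
    return np.argsort(latencies), latencies

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Toy data: 20 neurons, sparse background firing plus synchronized bursts every second,
    # with neuron i recruited ~2*i ms after each burst onset.
    bursts = np.arange(500, 10_000, 1000)
    trains = [np.sort(np.concatenate([rng.uniform(0, 10_000, 10),
                                      bursts + 2 * i + rng.normal(0, 1, len(bursts))]))
              for i in range(20)]
    onsets = detect_network_spikes(trains, bin_ms=25, thresh_frac=0.5)
    order, lat = recruitment_order(trains, onsets)
    print("network-spike onsets (ms):", onsets[:5], "...")
    print("earliest-recruited neurons:", order[:5])
```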

  19. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
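
    A single-input software analogue of the described estimator (not the hardware realization) can be sketched as triangular membership functions whose grades gate the coefficients of a linear combiner, with the coefficients adapted online by a least-mean-squares rule. The class name, membership shape, and constants below are assumptions for illustration.

```python
# Sketch of a fuzzy-neuron function estimator: triangular memberships span the input
# range, each grade gates a linear-combiner coefficient, and coefficients adapt online
# via a least-mean-squares (LMS) update. All names and constants are illustrative.
import numpy as np

class FuzzyNeuron:
    def __init__(self, x_min, x_max, n_mf=11, lr=0.2):
        self.centers = np.linspace(x_min, x_max, n_mf)
        self.width = self.centers[1] - self.centers[0]
        self.coeffs = np.zeros(n_mf)
        self.lr = lr

    def memberships(self, x):
        """Triangular membership grades of scalar input x (they sum to 1 inside the span)."""
        return np.clip(1.0 - np.abs(x - self.centers) / self.width, 0.0, None)

    def predict(self, x):
        mu = self.memberships(x)
        return float(mu @ self.coeffs)

    def update(self, x, target):
        """Online LMS step: move the active coefficients toward the observed output."""
        mu = self.memberships(x)
        err = target - mu @ self.coeffs
        self.coeffs += self.lr * err * mu
        return err

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    fn = FuzzyNeuron(0.0, 2 * np.pi)
    for _ in range(5000):                      # learn y = sin(x) from streaming samples
        x = rng.uniform(0, 2 * np.pi)
        fn.update(x, np.sin(x))
    for x in (0.5, 1.5, 3.0, 4.5):
        print(f"x={x:.1f}  target={np.sin(x):+.3f}  estimate={fn.predict(x):+.3f}")
```

    The multi-input case described in the abstract would use combinations of membership functions across input dimensions, with one combiner coefficient per activated combination.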

  20. Self-Organized Information Processing in Neuronal Networks: Replacing Layers in Deep Networks by Dynamics

    NASA Astrophysics Data System (ADS)

    Kirst, Christoph

    It is astonishing how the sub-parts of a brain co-act to produce coherent behavior. What are mechanism that coordinate information processing and communication and how can those be changed flexibly in order to cope with variable contexts? Here we show that when information is encoded in the deviations around a collective dynamical reference state of a recurrent network the propagation of these fluctuations is strongly dependent on precisely this underlying reference. Information here 'surfs' on top of the collective dynamics and switching between states enables fast and flexible rerouting of information. This in turn affects local processing and consequently changes in the global reference dynamics that re-regulate the distribution of information. This provides a generic mechanism for self-organized information processing as we demonstrate with an oscillatory Hopfield network that performs contextual pattern recognition. Deep neural networks have proven to be very successful recently. Here we show that generating information channels via collective reference dynamics can effectively compress a deep multi-layer architecture into a single layer making this mechanism a promising candidate for the organization of information processing in biological neuronal networks.