Sample records for triangular neuronal networks

  1. Axon Initial Segment Cytoskeleton: Architecture, Development, and Role in Neuron Polarity

    PubMed Central

    Svitkina, Tatyana M.

    2016-01-01

    The axon initial segment (AIS) is a specialized structure in neurons that resides in between axonal and somatodendritic domains. The localization of the AIS in neurons is ideal for its two major functions: it serves as the site of action potential firing and helps to maintain neuron polarity. It has become increasingly clear that the AIS cytoskeleton is fundamental to AIS functions. In this review, we discuss current understanding of the AIS cytoskeleton with particular interest in its unique architecture and role in maintenance of neuron polarity. The AIS cytoskeleton is divided into two parts, submembrane and cytoplasmic, based on localization, function, and molecular composition. Recent studies using electron and subdiffraction fluorescence microscopy indicate that submembrane cytoskeletal components (ankyrin G, βIV-spectrin, and actin filaments) form a sophisticated network in the AIS that is conceptually similar to the polygonal/triangular network of erythrocytes, with some important differences. Components of the AIS cytoplasmic cytoskeleton (microtubules, actin filaments, and neurofilaments) reside deeper within the AIS shaft and display structural features distinct from other neuronal domains. We discuss how the AIS submembrane and cytoplasmic cytoskeletons contribute to different aspects of AIS polarity function and highlight recent advances in understanding their AIS cytoskeletal assembly and stability. PMID:27493806

  2. A neural network technique for remeshing of bone microstructure.

    PubMed

    Fischer, Anath; Holdstein, Yaron

    2012-01-01

    Today, there is major interest within the biomedical community in developing accurate noninvasive means for the evaluation of bone microstructure and bone quality. Recent improvements in 3D imaging technology, among them the development of micro-CT and micro-MRI scanners, allow in-vivo 3D high-resolution scanning and reconstruction of large specimens or even whole bone models. Thus, the tendency today is to evaluate bone features using 3D assessment techniques rather than traditional 2D methods. For this purpose, high-quality meshing methods are required. However, the 3D meshes produced by current commercial systems are usually of low quality with respect to analysis and rapid prototyping. 3D model reconstruction of bone is difficult due to the complexity of bone microstructure. The small bone features lead to a great deal of neighborhood ambiguity near each vertex. The relatively new neural network method for mesh reconstruction has the potential to create or remesh 3D models accurately and quickly. A neural network (NN), a form of artificial intelligence (AI) algorithm, is a set of interconnected neurons, where each neuron is capable of making an autonomous arithmetic calculation and each is affected by its surrounding neurons through the structure of the network. This paper proposes an extension of the growing neural gas (GNG) neural network technique for remeshing a triangular manifold mesh that represents bone microstructure. This method has the advantage of reconstructing the surface of a genus-n freeform object without a priori knowledge of the original object, its topology, or its shape.
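The growing-neural-gas procedure that the paper extends can be sketched as follows. This is a minimal 2-D illustration of Fritzke's generic GNG algorithm, not the authors' bone-remeshing variant; all parameter values and the uniform test data are assumptions.

```python
import numpy as np

def growing_neural_gas(data, max_nodes=30, lam=50, eps_b=0.1, eps_n=0.01,
                       max_age=25, alpha=0.5, decay=0.99, steps=5000, seed=0):
    """Fit a growing-neural-gas graph to a point cloud (Fritzke's algorithm)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    nodes = [rng.random(dim), rng.random(dim)]   # reference vectors
    error = [0.0, 0.0]                           # accumulated error per node
    edges = {}                                   # frozenset({i, j}) -> age
    for step in range(1, steps + 1):
        x = data[rng.integers(len(data))]
        dist = [float(np.sum((x - w) ** 2)) for w in nodes]
        s1, s2 = np.argsort(dist)[:2]
        for e in list(edges):                    # age edges incident to the winner
            if s1 in e:
                edges[e] += 1
        error[s1] += dist[s1]
        nodes[s1] = nodes[s1] + eps_b * (x - nodes[s1])
        for e in edges:                          # drag topological neighbours along
            if s1 in e:
                j = next(iter(e - {s1}))
                nodes[j] = nodes[j] + eps_n * (x - nodes[j])
        edges[frozenset({s1, s2})] = 0           # refresh winner/runner-up edge
        edges = {e: a for e, a in edges.items() if a <= max_age}
        if step % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))            # node with worst accumulated error
            nbrs = [next(iter(e - {q})) for e in edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda j: error[j])
                nodes.append(0.5 * (nodes[q] + nodes[f]))  # insert between q and f
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
                r = len(nodes) - 1
                edges.pop(frozenset({q, f}), None)
                edges[frozenset({q, r})] = 0
                edges[frozenset({f, r})] = 0
        error = [v * decay for v in error]
    return np.array(nodes), edges

rng = np.random.default_rng(1)
points = rng.random((500, 2))                    # stand-in for surface samples
nodes, edges = growing_neural_gas(points)
```

The resulting nodes and edges form the remeshed graph; a surface-remeshing variant would additionally triangulate the learned topology.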

  3. Logical spin-filtering in a triangular network of quantum nanorings with a Rashba spin-orbit interaction

    NASA Astrophysics Data System (ADS)

    Dehghan, E.; Sanavi Khoshnoud, D.; Naeimi, A. S.

    2018-01-01

    The spin-resolved electron transport through a triangular network of quantum nanorings is studied in the presence of Rashba spin-orbit interaction (RSOI) and a magnetic flux using quantum waveguide theory. This study illustrates that, by tuning the Rashba constant, the magnetic flux, and the incoming electron energy, the triangular network of quantum rings can act as a perfect logical spin filter with high efficiency. By changing the energy of the incoming electron, at proper values of the Rashba constant and magnetic flux, a reversal of the spin direction can take place in the triangular network of quantum nanorings. Furthermore, the triangular network of quantum nanorings can be designed as a device that exhibits several spintronic functions simultaneously, such as spin splitting and spin inversion. The spin splitting depends on the energy of the incoming electron. Additionally, different polarizations can be achieved in the two outgoing leads from an originally incoming spin state, so that the device simulates a Stern-Gerlach apparatus.

  4. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
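The density-shaping idea, maximizing the output entropy of a sigmoidal unit so that |dy/dx| estimates the input density, can be sketched in one dimension. This is a minimal infomax example under stated assumptions: the Gaussian test data, learning rate, and step count are all illustrative, and the paper's multidimensional network and Newton-based estimator are not reproduced.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def fit_sigmoid_density(x, lr=0.05, steps=2000):
    """Maximise the output entropy of y = sigmoid(w*x + b) over inputs x.
    The induced density estimate is p(x) = |dy/dx| = |w| * s * (1 - s)."""
    w, b = 1.0, 0.0
    for _ in range(steps):
        s = sigmoid(w * x + b)
        # infomax gradient of E[log |dy/dx|] with respect to w and b
        w += lr * (1.0 / w + float(np.mean((1.0 - 2.0 * s) * x)))
        b += lr * float(np.mean(1.0 - 2.0 * s))
    return w, b

def density(x, w, b):
    s = sigmoid(w * x + b)
    return np.abs(w) * s * (1.0 - s)

rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=0.5, size=2000)
w, b = fit_sigmoid_density(samples)
grid = np.linspace(-2.0, 6.0, 1601)
p = density(grid, w, b)
mass = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(grid)))  # trapezoid rule
```

Because p is exactly the derivative of the sigmoid output, it integrates to (nearly) one over the grid, and its mode lands near the center of the sample distribution.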

  5. On Modeling and Analysis of MIMO Wireless Mesh Networks with Triangular Overlay Topology

    DOE PAGES

    Cao, Zhanmao; Wu, Chase Q.; Zhang, Yuanping; ...

    2015-01-01

    Multiple input multiple output (MIMO) wireless mesh networks (WMNs) aim to provide the last-mile broadband wireless access to the Internet. Along with the algorithmic development for WMNs, some fundamental mathematical problems also emerge in various aspects such as routing, scheduling, and channel assignment, all of which require an effective mathematical model and rigorous analysis of network properties. In this paper, we propose to employ Cartesian product of graphs (CPG) as a multichannel modeling approach and explore a set of unique properties of triangular WMNs. In each layer of CPG with a single channel, we design a node coordinate scheme that retains the symmetric property of triangular meshes and develop a function for the assignment of node identity numbers based on their coordinates. We also derive a necessary and sufficient condition for interference-free links and combinatorial formulas to determine the number of the shortest paths for channel realization in triangular WMNs.
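The Cartesian product of graphs used as the multichannel model can be sketched directly from its definition: each channel contributes one copy of the mesh, and copies are linked across channels. The triangle-times-path example and the channel names below are illustrative assumptions, not the paper's topology.

```python
def cartesian_product(nodes1, edges1, nodes2, edges2):
    """Cartesian product of two simple graphs: (u1,u2) ~ (v1,v2) iff the
    vertices agree in one factor and are adjacent in the other."""
    nodes = [(u, v) for u in nodes1 for v in nodes2]
    edges = set()
    for u in nodes1:                       # a copy of graph 2 for each node of graph 1
        for (a, b) in edges2:
            edges.add(frozenset({(u, a), (u, b)}))
    for (a, b) in edges1:                  # a copy of graph 1 for each node of graph 2
        for v in nodes2:
            edges.add(frozenset({(a, v), (b, v)}))
    return nodes, edges

# one triangular mesh cell times a path of three channels
tri_nodes, tri_edges = [0, 1, 2], {(0, 1), (1, 2), (0, 2)}
chan_nodes, chan_edges = ["c0", "c1", "c2"], {("c0", "c1"), ("c1", "c2")}
nodes, edges = cartesian_product(tri_nodes, tri_edges, chan_nodes, chan_edges)
# |V| = 3 * 3 = 9 and |E| = |V1|*|E2| + |V2|*|E1| = 3*2 + 3*3 = 15
```

The edge count identity |E| = |V1||E2| + |V2||E1| is a standard property of the Cartesian product and is a quick sanity check on any CPG construction.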

  6. Study on spin filtering and switching action in a double-triangular network chain

    NASA Astrophysics Data System (ADS)

    Zhang, Yongmei

    2018-04-01

    Spin transport properties of a double-triangular quantum network with local magnetic moments on the backbones and a magnetic flux penetrating the network plane are studied. Numerical simulation results show that such a quantum network is a good candidate for a spin filter and spin switch. Local dispersion and the density of states are considered in the framework of the tight-binding approximation. Transmission coefficients are calculated by the transfer matrix method. Spin transmission is regulated by the substrate magnetic moment and the magnetic flux piercing the triangles. Experimental realization of this theoretical work would be conducive to the design of new spintronic devices.

  7. Neural network method for lossless two-conductor transmission line equations based on the IELM algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yunlei; Hou, Muzhou; Luo, Jianshu; Liu, Taohua

    2018-06-01

    With the increasing demands for vast amounts of data and high-speed signal transmission, the use of multi-conductor transmission lines is becoming more common. The impact of transmission lines on signal transmission is thus a key issue affecting the performance of high-speed digital systems. To solve the problem of lossless two-conductor transmission line equations (LTTLEs), a neural network model and algorithm are explored in this paper. By selecting the product of two triangular basis functions as the activation function of hidden layer neurons, we can guarantee the separation of time, space, and phase orthogonality. By adding the initial condition to the neural network, an improved extreme learning machine (IELM) algorithm for solving the network weights is obtained. This differs from the traditional method, which converts the initial condition into an iterative constraint condition. Calculation software for solving the LTTLEs based on the IELM algorithm is developed. Numerical experiments show that the results are consistent with those of the traditional method. The proposed neural network algorithm can find not only the terminal voltage of the transmission line but also the voltage at any observation point, since the trained network model represents the solution of the transmission line equation at every point.
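The extreme learning machine at the core of the method can be sketched as follows: hidden-layer weights are drawn at random and fixed, and only the output weights are solved, in closed form, by least squares. This is a generic ELM regression sketch, not the authors' IELM with triangular basis products and initial-condition terms; the tanh activation and the target function are assumptions.

```python
import numpy as np

def elm_fit(x, y, hidden=50, seed=0):
    """Extreme learning machine: fixed random hidden layer, output weights
    solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=hidden)                  # random input weights (fixed)
    b = rng.normal(size=hidden)                  # random biases (fixed)
    H = np.tanh(np.outer(x, W) + b)              # hidden activations, (n, hidden)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # output weights
    return W, b, beta

def elm_predict(x, W, b, beta):
    return np.tanh(np.outer(x, W) + b) @ beta

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) * np.exp(-x)           # stand-in for a voltage profile
W, b, beta = elm_fit(x, y)
err = float(np.max(np.abs(elm_predict(x, W, b, beta) - y)))
```

Because the only trained parameters enter linearly, fitting is a single linear solve, which is what makes ELM-type schemes attractive for equation solving.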

  8. The neuronal structure of paramamillary nuclei in Bison bonasus: Nissl and Golgi pictures.

    PubMed

    Robak, A; Szteyn, S; Równiak, M

    1998-01-01

    The studies were carried out on the hypothalamus of Bison bonasus aged 2 and 3 months. Sections were made by means of Bagiński's technique and the Nissl and Klüver-Barrera methods. Four types of neurons were distinguished in the paramamillary nuclei, the nucleus supramamillaris (Sm) and the nucleus tuberomamillaris pars posterior (Tmp). Type I: small and medium-size, triangular or fusiform cells, which have 2-3 slender, poorly ramified dendrites; typical leptodendritic neurons. Type II: medium-size neurons with quadrangular or spindle-shaped perikaryons. Most of them have 3-4 thick dendritic trunks with relatively long, ramifying dendrites. These cells show a stalked appearance and possess various sparsely distributed appendages. Type III is similar to type II, but consists of medium-size to large multipolar cells having quadrangular, triangular or fusiform perikaryons and relatively short dendrites. Type IV: small and medium-size, globular cells with 2 or 3 dendritic trunks, which dichotomously subdivide into quaternary dendrites. In all types of neurons, axons emerge from the perikaryon or the initial portion of a dendritic trunk. Type I was found in both studied nuclei. Types II and III constitute mainly the nucleus tuberomamillaris pars posterior. Type IV preponderates in the nucleus supramamillaris. The characteristic features of Tmp cells in the Nissl picture were the irregular contour of their somas and clumps of coarse Nissl granules, which appear to lie outside the perikaryons. In Sm there were also lightly stained, small rounded cells having both a small amount of cytoplasm and tigroid matter.

  9. Transition-metal oxides with triangular lattices: generation of new magnetic and electronic properties.

    PubMed

    Maignan, A; Kobayashi, W; Hébert, S; Martinet, G; Pelloquin, D; Bellido, N; Simon, Ch

    2008-10-06

    The search for multifunctional materials such as multiferroics for microelectronics, or for new, chemically stable and nontoxic thermoelectric materials to recover waste heat, points to a common interest in oxides whose structures contain a triangular network of transition-metal cations. To illustrate this point, two ternary systems, Ba-Co-O and Ca-Co-O, have been chosen. It is shown that new phases with a complex triangular structure can be discovered, for instance, by introduction of Ga3+ into the Ba-Co-O system to stabilize Ba6Ga2Co11O26 and Ba2GaCo8O14, which both belong to a large family of compounds with formula [Ba(Co,Ga)O3-delta]n[BaCo8O11]. In the latter, both sublattices contain triangular networks derived from the hexagonal perovskite and the spinel structure. Among the hexagonal perovskites, Ca3Co2O6 crystals give clear evidence that the coupling of charges and spins is at the origin of a magnetocapacitance effect. In particular, the ferrimagnetic to ferromagnetic transition, with a one-third plateau on the M(H) curve characteristic of triangular magnetism, is accompanied by a peak in the dielectric constant. A second class of cobaltites is the focus of much interest. Their 2D structure, containing CoO2 planes isostructural to a CdI2 slice that are stacked in an incommensurate way with rock salt type layers, is referred to as misfit cobaltite. The 2D triangular network of edge-shared CoO6 octahedra is believed to be responsible for large values of the Seebeck coefficient and low electrical resistivity. A clear relationship between the structures (incommensurability ratios) and the electronic properties is evidenced, showing that the charge carrier concentration can be tuned via control of the ionic radius of the cations in the separating layers.

  10. Indoor Trajectory Tracking Scheme Based on Delaunay Triangulation and Heuristic Information in Wireless Sensor Networks.

    PubMed

    Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong

    2017-06-02

    Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (instability, multipath, diffraction) of wireless signal transmission in indoor environments. Heuristic information includes some key factors for trajectory tracking procedures. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions. The common side of adjacent triangular regions is regarded as a regional boundary. Our scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. Then, the trajectory is formed by means of a dynamic time-warping position-fingerprint-matching algorithm with heuristic information constraints. Field experiments show that the average error distance of our scheme is less than 1.5 m, and that this error does not accumulate across regions.
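The dynamic time warping at the core of the fingerprint-matching step can be sketched with the textbook recursion. The toy RSSI sequences below, and the absence of the paper's heuristic boundary constraints, are assumptions for illustration only.

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sequences (textbook recursion)."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

walk = [-60, -58, -55, -50, -48]          # RSSI fingerprints along a path (dBm)
slow = [-60, -60, -58, -55, -50, -48]     # same path walked more slowly
d_same = dtw_distance(walk, slow)         # warping absorbs the different pacing
```

DTW's tolerance to different walking speeds is exactly why it suits fingerprint matching: the same spatial path produces sequences of different lengths depending on how fast the object moves.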

  11. The types of neurons of the somatic oculomotor nucleus in the European bison. Nissl and Golgi studies.

    PubMed

    Szteyn, S; Robak, A; Równiak, M

    1997-01-01

    The neuronal structure of the somatic oculomotor nucleus (SON) was studied on the basis of Nissl and Golgi preparations, obtained from mesencephalons of 4 European bisons. We distinguished four types of neurons in the investigated nucleus: 1. The large multipolar nerve cells with 5-8 thick dendritic trunks and a thin axon which emerges directly from the soma. These are the most numerous neurons in the SON. 2. The small multipolar neurons. These cells have 4-6 thick dendritic trunks. An axon arises mostly from initial segment of one of the dendrites. This type represents about 8% neurons of SON. 3. The triangular neurons. From perikaryon 3 thick dendritic trunks emerge. A thin axon arises directly from the cell body. These cells make about 10% neurons of SON. 4. The pear-shaped cells which have 1 or 2 dendritic trunks concentrate at one pole of the neurons. In the SON there are about 2% pear-shaped cells. Their features correspond to the features attributed by many authors to the interneurons.

  12. Septal projections to nucleus incertus in the rat: bidirectional pathways for modulation of hippocampal function.

    PubMed

    Sánchez-Pérez, Ana M; Arnal-Vicente, Isabel; Santos, Fabio N; Pereira, Celia W; ElMlili, Nisrin; Sanjuan, Julio; Ma, Sherie; Gundlach, Andrew L; Olucha-Bordonau, Francisco E

    2015-03-01

    Projections from the nucleus incertus (NI) to the septum have been implicated in the modulation of hippocampal theta rhythm. In this study we describe a previously uncharacterized projection from the septum to the NI, which may provide feedback modulation of the ascending circuitry. Fluorogold injections into the NI resulted in retrograde labeling in the septum that was concentrated in the horizontal diagonal band and areas of the posterior septum including the septofimbrial and triangular septal nuclei. Double-immunofluorescent staining indicated that the majority of NI-projecting septal neurons were calretinin-positive and some were parvalbumin-, calbindin-, or glutamic acid decarboxylase (GAD)-67-positive. Choline acetyltransferase-positive neurons were Fluorogold-negative. Injection of anterograde tracers into medial septum, or triangular septal and septofimbrial nuclei, revealed fibers descending to the supramammillary nucleus, median raphe, and the NI. These anterogradely labeled varicosities displayed synaptophysin immunoreactivity, indicating septal inputs form synapses on NI neurons. Anterograde tracer also colocalized with GAD-67-positive puncta in labeled fibers, which in some cases made close synaptic contact with GAD-67-labeled NI neurons. These data provide evidence for the existence of an inhibitory descending projection from medial and posterior septum to the NI that provides a "feedback loop" to modulate the comparatively more dense ascending NI projections to medial septum and hippocampus. Neural processes and associated behaviors activated or modulated by changes in hippocampal theta rhythm may depend on reciprocal connections between ascending and descending pathways rather than on unidirectional regulation via the medial septum. © 2014 Wiley Periodicals, Inc.

  13. Network feedback regulates motor output across a range of modulatory neuron activity

    PubMed Central

    Spencer, Robert M.

    2016-01-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5–35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. PMID:27030739

  14. Network feedback regulates motor output across a range of modulatory neuron activity.

    PubMed

    Spencer, Robert M; Blitz, Dawn M

    2016-06-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.

  15. Fabrication of triangular nanobeam waveguide networks in bulk diamond using single-crystal silicon hard masks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayn, I.; Mouradian, S.; Li, L.

    2014-11-24

    A scalable approach for integrated photonic networks in single-crystal diamond using triangular etching of bulk samples is presented. We describe designs of high quality factor (Q = 2.51 × 10^6) photonic crystal cavities with low mode volume (V_m = 1.062 × (λ/n)^3), which are connected via waveguides supported by suspension structures with predicted transmission loss of only 0.05 dB. We demonstrate the fabrication of these structures using transferred single-crystal silicon hard masks and angular dry etching, yielding photonic crystal cavities in the visible spectrum with measured quality factors in excess of Q = 3 × 10^3.

  16. Mean-field equations for neuronal networks with arbitrary degree distributions.

    PubMed

    Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depend on the interplay between single-cell dynamics and network topology. Most theoretical studies of network dynamics have assumed simple topologies, such as connections made randomly and independently with a fixed probability (Erdős-Rényi (ER) networks) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.

  17. Mean-field equations for neuronal networks with arbitrary degree distributions

    NASA Astrophysics Data System (ADS)

    Nykamp, Duane Q.; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depend on the interplay between single-cell dynamics and network topology. Most theoretical studies of network dynamics have assumed simple topologies, such as connections made randomly and independently with a fixed probability (Erdős-Rényi (ER) networks) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.

  18. Rhythmogenic neuronal networks, emergent leaders, and k-cores.

    PubMed

    Schwab, David J; Bruinsma, Robijn F; Feldman, Jack L; Levine, Alex J

    2010-11-01

    Neuronal network behavior results from a combination of the dynamics of individual neurons and the connectivity of the network that links them together. We study a simplified model, based on the proposal of Feldman and Del Negro (FDN) [Nat. Rev. Neurosci. 7, 232 (2006)], of the preBötzinger Complex, a small neuronal network that participates in the control of the mammalian breathing rhythm through periodic firing bursts. The dynamics of this randomly connected network of identical excitatory neurons differ from those of a uniformly connected one. Specifically, network connectivity determines the identity of emergent leader neurons that trigger the firing bursts. When neuronal desensitization is controlled by the number of input signals to the neurons (as proposed by FDN), the network's collective desensitization--required for successful burst termination--is mediated by k-core clusters of neurons.
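The k-core construction invoked here is straightforward to compute by iteratively peeling nodes of degree below k. The sketch below is generic graph code on a toy network (the example graph is an assumption), not the paper's preBötzinger model.

```python
def k_core(adj, k):
    """Return the node set of the k-core by iteratively peeling nodes of
    degree < k. adj maps each node to the set of its neighbours."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) < k:
                for v in adj.pop(u):              # remove u and its incident edges
                    if v in adj:
                        adj[v].discard(u)
                changed = True
    return set(adj)

# a 4-clique of "leader" candidates (0-3) with two pendant neurons (4, 5)
g = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 5}, 4: {0}, 5: {3}}
core3 = k_core(g, 3)                              # -> {0, 1, 2, 3}
```

Peeling removes the pendant neurons but leaves the densely interconnected clique, which is the intuition behind k-core clusters mediating collective desensitization.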

  19. Dynamical estimation of neuron and network properties III: network analysis using neuron spike times.

    PubMed

    Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2014-06-01

    Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.

  20. Synchronization properties of heterogeneous neuronal networks with mixed excitability type

    NASA Astrophysics Data System (ADS)

    Leone, Michael J.; Schurter, Brandon N.; Letson, Benjamin; Booth, Victoria; Zochowski, Michal; Fink, Christian G.

    2015-03-01

    We study the synchronization of neuronal networks with dynamical heterogeneity, showing that network structures with the same propensity for synchronization (as quantified by master stability function analysis) may develop dramatically different synchronization properties when heterogeneity is introduced with respect to neuronal excitability type. Specifically, we investigate networks composed of neurons with different types of phase response curves (PRCs), which characterize how oscillating neurons respond to excitatory perturbations. Neurons exhibiting type 1 PRC respond exclusively with phase advances, while neurons exhibiting type 2 PRC respond with either phase delays or phase advances, depending on when the perturbation occurs. We find that Watts-Strogatz small world networks transition to synchronization gradually as the proportion of type 2 neurons increases, whereas scale-free networks may transition gradually or rapidly, depending upon local correlations between node degree and excitability type. Random placement of type 2 neurons results in gradual transition to synchronization, whereas placement of type 2 neurons as hubs leads to a much more rapid transition, showing that type 2 hub cells easily "hijack" neuronal networks to synchronization. These results underscore the fact that the degree of synchronization observed in neuronal networks is determined by a complex interplay between network structure and the dynamical properties of individual neurons, indicating that efforts to recover structural connectivity from dynamical correlations must in general take both factors into account.
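The distinction between the two PRC types, and the tendency of type 2 coupling to pull oscillators together, can be illustrated with a minimal pulse-coupled phase model. This is a generic sketch, not the paper's network model; the PRC shapes Z(φ) = 1 − cos(2πφ) and Z(φ) = −sin(2πφ), the coupling strength, and the initial phases are illustrative assumptions.

```python
import math

def Z_type1(phi):
    """Type 1 PRC: excitatory input only advances the phase."""
    return 1.0 - math.cos(2 * math.pi * phi)

def Z_type2(phi):
    """Type 2 PRC: delays early in the cycle, advances late in the cycle."""
    return -math.sin(2 * math.pi * phi)

def simulate_pair(Z, phi_a=0.0, phi_b=0.3, eps=0.05, events=200):
    """Two identical pulse-coupled phase oscillators: when one fires (phase
    reaches 1 and resets), the other's phase jumps by eps * Z(phase)."""
    for _ in range(events):
        dt = 1.0 - max(phi_a, phi_b)          # advance both to the next firing
        phi_a += dt
        phi_b += dt
        if phi_a >= phi_b:                    # a fires, b receives the pulse
            phi_a = 0.0
            phi_b = min(max(phi_b + eps * Z(phi_b), 0.0), 1.0)
        else:                                 # b fires, a receives the pulse
            phi_b = 0.0
            phi_a = min(max(phi_a + eps * Z(phi_a), 0.0), 1.0)
    gap = abs(phi_a - phi_b)
    return min(gap, 1.0 - gap)                # circular phase difference

gap_type2 = simulate_pair(Z_type2)            # shrinks toward synchrony
```

With this type-2 PRC a lagging oscillator is advanced and a leading one is delayed, so the pair locks; a purely advancing type-1 PRC lacks that corrective sign change, which is the mechanism behind the hub effects described above.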

  1. Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.

    PubMed

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime

    2016-01-01

    It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that depends on the characteristics of the neurons and the network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics to simulate the code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate characteristics of the neurons such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem and the solutions are not theoretically guaranteed to be unique, the estimated parameters appear consistent with those of real neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for communication within a neural network, which may in turn create a basis of natural intelligence.
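A minimal integrate-and-fire neuron of the kind used in such simulations can be sketched as follows. This is a generic leaky integrate-and-fire model with Euler stepping, not the authors' fluctuating-parameter variant; all constants (time units, threshold, refractory period, drive) are illustrative assumptions.

```python
import math

def lif_spike_times(I, tau=20.0, v_th=1.0, v_reset=0.0, t_ref=2.0,
                    dt=0.1, t_max=500.0):
    """Leaky integrate-and-fire neuron, tau * dV/dt = -V + I (Euler stepping);
    a spike is emitted when V crosses v_th, followed by a refractory period."""
    v, t, last_spike = 0.0, 0.0, -math.inf
    spikes = []
    while t < t_max:
        if t - last_spike >= t_ref:          # integrate only outside refractoriness
            v += dt * (-v + I) / tau
            if v >= v_th:
                spikes.append(t)
                v = v_reset
                last_spike = t
        t += dt
    return spikes

spikes = lif_spike_times(I=1.5)              # suprathreshold drive -> tonic firing
```

A mesh of such units with noisy parameters and distance-dependent coupling would produce the spatially decaying code flow the record describes; the refractory period here is one of the parameters the authors estimate from the code spectrum.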

  2. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures.

    PubMed

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

    Neuronal networks in vitro are prominent systems for studying the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network, such as the GABAA switch. Third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
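    The spectral characterization described can be reproduced in miniature on a synthetic calcium-like trace with Welch's method (the 2 Hz oscillation, amplitude, and noise level here are assumptions for illustration):

```python
import numpy as np
from scipy import signal

# Synthetic "calcium-like" trace: a 2 Hz network oscillation plus noise
# (frequency, amplitude, and noise level are assumptions for illustration).
fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)                # 60 s recording
rng = np.random.default_rng(1)
trace = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)

# Welch's method gives the power spectral density used as a network signature.
f, pxx = signal.welch(trace, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(pxx)]
print(peak_freq)                            # ~2 Hz
```

    In the experiments, shifts in where such spectral power concentrates are what index network state, developmental stage, and connectivity strength.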

  3. Identical Aftershocks from the Main Rupture Zone 10 Months After the Mw=7.6 September 5, 2012, Nicoya, Costa Rica, Earthquake

    NASA Astrophysics Data System (ADS)

    Protti, M.; Alfaro-Diaz, R.; Brenn, G. R.; Fasola, S.; Murillo, A.; Marshall, J. S.; Gardner, T. W.

    2013-12-01

    Over a two-week period, as part of a Keck Geology Consortium summer research project, we installed a dense broadband seismic array directly over the rupture zone of the September 5th, 2012, Mw=7.6 Nicoya earthquake. The network consisted of 5 Trillium Compact seismometers and Taurus digitizers from Nanometrics, defining a triangular area of ~20 km per side. Also located within this area are 3 stations of the Nicoya permanent broadband network. One side of the triangular area, along the west coast of the Nicoya peninsula, is parallel to the trench, and the apex lies 15 km landward. The plate interface and rupture zone of the Nicoya 2012 earthquake are located 16 km below the trench-parallel side and 25 km below the apex of this triangular footprint. Station spacing ranged from 3 to 14 km. This dense array operated from July 2nd to July 17th, 2013. On June 23rd, eight days before we installed this array, an Mw=5.4 aftershock (one of only 5 aftershocks of the Nicoya Mw=7.6 earthquake with magnitudes above 5.0) occurred directly beneath the area of our temporary network. Preliminary analysis of the data shows that we recorded several identical aftershocks with magnitudes below 1.0 that locate some 18 km below our network. We will present detailed locations of these small aftershocks and their relationship to the June 23rd, 2013 aftershock and the September 5th, 2012 mainshock.

  4. Computational properties of networks of synchronous groups of spiking neurons.

    PubMed

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
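    The group-to-unit mapping described (weight = interconnection density x presynaptic group size x PSP height; unit activation = fraction of synchronously firing neurons) can be sketched for the XOR example; the numeric parameters below are hand-chosen for illustration, not taken from the paper:

```python
import numpy as np

def weight(density, group_size, psp):
    # Effective strength between neuronal groups, per the mapping above:
    # interconnection density x presynaptic group size x PSP height.
    return density * group_size * psp

def step(x):
    # A threshold activation standing in for the fraction of synchronously
    # firing neurons (a simplification of the group model).
    return (x > 0).astype(float)

# Hand-chosen illustrative parameters realizing XOR in a 2-2-1 network.
W1 = np.array([[weight(1.0, 1, 1.0), weight(1.0, 1, 1.0)],
               [weight(1.0, 1, 1.0), weight(1.0, 1, 1.0)]])
b1 = np.array([-0.5, -1.5])
W2 = np.array([weight(1.0, 1, 1.0), -weight(2.0, 1, 1.0)])
b2 = -0.5

def xor_net(x):
    h = step(W1 @ x + b1)
    return step(W2 @ h + b2)

outs = [int(xor_net(np.array([a, b], float)))
        for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outs)  # -> [0, 1, 1, 0]
```

    Because any of the three factors (density, group size, PSP height) can realize a given weight, the biological model has several routes to the same effective connection strength.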

  5. Network reconfiguration and neuronal plasticity in rhythm-generating networks.

    PubMed

    Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino

    2011-12-01

    Neuronal networks are highly plastic and reconfigure in a state-dependent manner. This plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only for maintaining network stability but also for adapting networks, over short and long timescales, to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.

  6. Intrinsic Cellular Properties and Connectivity Density Determine Variable Clustering Patterns in Randomly Connected Inhibitory Neural Networks

    PubMed Central

    Rich, Scott; Booth, Victoria; Zochowski, Michal

    2016-01-01

    The plethora of inhibitory interneurons in the hippocampus and cortex plays a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations delivered at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances and IF curves that exhibit frequencies arbitrarily close to zero at firing threshold, while Type II PRCs display both phase advances and delays and IF curves that have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies.
Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics compared to those in networks of Type I or Type II neurons. To understand these results, we compute neuronal PRCs calculated with a perturbation matching the profile of the synaptic current in our networks. Differences in profiles of these PRCs across the different neuron types reveal mechanisms underlying the divergent network dynamics. PMID:27812323
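    The perturbation-based PRC measurement described can be illustrated on a quadratic integrate-and-fire neuron, a standard Type I model: apply a small kick at different phases and measure the shift of the next spike (the drive, reset values, and perturbation size are assumptions):

```python
import numpy as np

def qif_spike_time(i_ext=1.0, perturb_time=None, eps=0.05,
                   v_reset=-10.0, v_spike=10.0, dt=1e-4):
    """Time of the next spike of a quadratic integrate-and-fire neuron
    (dv/dt = v^2 + I), optionally with a small depolarizing kick."""
    v, t = v_reset, 0.0
    kicked = perturb_time is None
    while v < v_spike:
        v += (v * v + i_ext) * dt
        t += dt
        if not kicked and t >= perturb_time:
            v += eps                       # the perturbation
            kicked = True
    return t

T0 = qif_spike_time()                      # unperturbed period
phases = np.linspace(0.1, 0.9, 9)
prc = np.array([T0 - qif_spike_time(perturb_time=p * T0) for p in phases])
print(prc)  # all entries positive: a Type I cell, excitation only advances
```

    The study's key refinement is that the perturbation should match the profile of the actual synaptic current rather than the instantaneous kick used in this sketch.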

  7. Aberrant within- and between-network connectivity of the mirror neuron system network and the mentalizing network in first episode psychosis.

    PubMed

    Choe, Eugenie; Lee, Tae Young; Kim, Minah; Hur, Ji-Won; Yoon, Youngwoo Bryan; Cho, Kang-Ik K; Kwon, Jun Soo

    2018-03-26

    It has been suggested that the mentalizing network and the mirror neuron system network support important social cognitive processes that are impaired in schizophrenia. However, the integrity and interaction of these two networks have not been sufficiently studied, and their effects on social cognition in schizophrenia remain unclear. Our study included 26 first-episode psychosis (FEP) patients and 26 healthy controls. We utilized resting-state functional connectivity to examine the a priori-defined mirror neuron system network and the mentalizing network and to assess the within- and between-network connectivities of the networks in FEP patients. We also assessed the correlation between resting-state functional connectivity measures and theory of mind performance. FEP patients showed altered within-network connectivity of the mirror neuron system network, and aberrant between-network connectivity between the mirror neuron system network and the mentalizing network. The within-network connectivity of the mirror neuron system network was noticeably correlated with theory of mind task performance in FEP patients. The integrity and interaction of the mirror neuron system network and the mentalizing network may be altered during the early stages of psychosis. Additionally, this study suggests that alterations in the integrity of the mirror neuron system network are highly related to deficient theory of mind in schizophrenia, and that this deficit may be present from the early stages of psychosis. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
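    A brute-force parameter scan of the kind described can be sketched with a FitzHugh-Nagumo neuron, classifying each parameter set as spiking or silent (the model choice, parameter range, and oscillation criterion are illustrative, not the chapter's actual databases):

```python
import numpy as np

def fhn_spiking(i_ext, eps=0.08, a=0.7, b=0.8, dt=0.05, steps=8000):
    """Integrate a FitzHugh-Nagumo neuron and report whether it spikes
    repetitively, judged from the voltage range in the second half of the run."""
    v, w = -1.0, -0.5
    vs = []
    for k in range(steps):
        v_new = v + dt * (v - v**3 / 3 - w + i_ext)
        w += dt * eps * (v + a - b * w)
        v = v_new
        if k > steps // 2:                 # skip the initial transient
            vs.append(v)
    return max(vs) - min(vs) > 1.0         # large excursions => spiking

# Brute-force scan over one parameter axis (ranges are illustrative).
currents = np.linspace(0.0, 1.0, 11)
grid = {i: fhn_spiking(i) for i in currents}
print(sum(grid.values()), "of", len(grid), "parameter sets produce spiking")
```

    A real model database extends this to grids over many conductances at once; searching the resulting table for functional outputs reveals how narrowly each parameter must be tuned.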

  9. Uniting functional network topology and oscillations in the fronto-parietal single unit network of behaving primates.

    PubMed

    Dann, Benjamin; Michaels, Jonathan A; Schaffelhofer, Stefan; Scherberger, Hansjörg

    2016-08-15

    The functional communication of neurons in cortical networks underlies higher cognitive processes. Yet, little is known about the organization of the single neuron network or its relationship to the synchronization processes that are essential for its formation. Here, we show that the functional single neuron network of three fronto-parietal areas during active behavior of macaque monkeys is highly complex. The network was closely connected (small-world) and consisted of functional modules spanning these areas. Surprisingly, the importance of different neurons to the network was highly heterogeneous with a small number of neurons contributing strongly to the network function (hubs), which were in turn strongly inter-connected (rich-club). Examination of the network synchronization revealed that the identified rich-club consisted of neurons that were synchronized in the beta or low frequency range, whereas other neurons were mostly non-oscillatory synchronized. Therefore, oscillatory synchrony may be a central communication mechanism for highly organized functional spiking networks.

  10. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    Neuronal avalanches are a form of spontaneous neuronal activity that obeys a power-law distribution of population event sizes with an exponent of -3/2. They have been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing-dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
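    The -3/2 exponent can be illustrated with a critical branching process, a standard toy model of avalanche statistics (not the self-organized network of the paper):

```python
import numpy as np

def avalanche_size(rng, branching=1.0, max_size=10_000):
    """Size of one avalanche in a critical branching process: each spike
    triggers on average `branching` new spikes (a standard toy model)."""
    active, size = 1, 1
    while active and size < max_size:
        active = rng.poisson(branching * active)
        size += active
    return min(size, max_size)

rng = np.random.default_rng(42)
sizes = np.array([avalanche_size(rng) for _ in range(20_000)])

# Crude log-log slope estimate over logarithmic size bins;
# theory predicts an exponent of -3/2 at criticality.
edges = np.logspace(0, 3, 16)
hist, _ = np.histogram(sizes, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
print(round(slope, 2))
```

    Moving the branching parameter away from 1 destroys the power law, which is why avalanche statistics are read as a signature of networks operating near criticality.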

  11. The Drosophila Clock Neuron Network Features Diverse Coupling Modes and Requires Network-wide Coherence for Robust Circadian Rhythms.

    PubMed

    Yao, Zepeng; Bennett, Amelia J; Clem, Jenna L; Shafer, Orie T

    2016-12-13

    In animals, networks of clock neurons containing molecular clocks orchestrate daily rhythms in physiology and behavior. However, how various types of clock neurons communicate and coordinate with one another to produce coherent circadian rhythms is not well understood. Here, we investigate clock neuron coupling in the brain of Drosophila and demonstrate that the fly's various groups of clock neurons display unique and complex coupling relationships to core pacemaker neurons. Furthermore, we find that coordinated free-running rhythms require molecular clock synchrony not only within the well-characterized lateral clock neuron classes but also between lateral clock neurons and dorsal clock neurons. These results uncover unexpected patterns of coupling in the clock neuron network and reveal that robust free-running behavioral rhythms require a coherence of molecular oscillations across most of the fly's clock neuron network. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Relationship between inter-stimulus-intervals and intervals of autonomous activities in a neuronal network.

    PubMed

    Ito, Hidekatsu; Minoshima, Wataru; Kudoh, Suguru N

    2015-08-01

    To investigate relationships between neuronal network activity and electrical stimuli, we analyzed autonomous activity before and after stimulation. Recordings of autonomous activity were performed using dissociated cultures of rat hippocampal neurons on a multi-electrode array (MEA) dish. Single and paired stimuli were applied to a cultured neuronal network: a single stimulus was applied every 1 min, and paired stimulation consisted of two sequential stimuli delivered every 1 min. As a result, the patterns of synchronized activities of the neuronal network changed after stimulation. In particular, long-range synchronous activities were induced by paired stimuli. When paired stimuli with inter-stimulus intervals (ISIs) of 1 s and 1.5 s were applied to a neuronal network, relatively long-range synchronous activities appeared for the 1.5 s ISI. The temporal synchronous activity of the neuronal network thus changed according to the ISI of the electrical stimulus. In other words, a dissociated neuronal network can maintain given information in a temporal pattern, suggesting that a certain type of information maintenance mechanism is implemented in a semi-artificial dissociated neuronal network. The result is a useful step toward technology for manipulating neuronal activity in a brain system.

  13. Inhibition linearizes firing rate responses in human motor units: implications for the role of persistent inward currents.

    PubMed

    Revill, Ann L; Fuglevand, Andrew J

    2017-01-01

    Motor neurons are the output neurons of the central nervous system and are responsible for controlling muscle contraction. When initially activated during voluntary contraction, firing rates of motor neurons increase steeply but then level out at modest rates. Activation of an intrinsic source of excitatory current at recruitment onset may underlie the initial steep increase in firing rate in motor neurons. We attempted to disable this intrinsic excitatory current by artificially activating an inhibitory reflex. When motor neuron activity was recorded while the inhibitory reflex was engaged, firing rates no longer increased steeply, suggesting that the intrinsic excitatory current was probably responsible for the initial sharp rise in motor neuron firing rate. During graded isometric contractions, motor unit (MU) firing rates increase steeply upon recruitment but then level off at modest rates even though muscle force continues to increase. The mechanisms underlying such firing behaviour are not known although activation of persistent inward currents (PICs) might be involved. PICs are intrinsic, voltage-dependent currents that activate strongly when motor neurons (MNs) are first recruited. Such activation might cause a sharp escalation in depolarizing current and underlie the steep initial rise in MU firing rate. Because PICs can be disabled with synaptic inhibition, we hypothesized that artificial activation of an inhibitory pathway might curb this initial steep rise in firing rate. To test this, human subjects performed slow triangular ramp contractions of the ankle dorsiflexors in the absence and presence of tonic synaptic inhibition delivered to tibialis anterior (TA) MNs by sural nerve stimulation. Firing rate profiles (expressed as a function of contraction force) of TA MUs recorded during these tasks were compared for control and stimulation conditions. 
Under control conditions, during the ascending phase of the triangular contractions, 93% of the firing rate profiles were best fitted by rising exponential functions. With stimulation, however, firing rate profiles were best fitted with linear functions or with less steeply rising exponentials. Firing rate profiles for the descending phases of the contractions were best fitted with linear functions for both control and stimulation conditions. These results seem consistent with the idea that PICs contribute to non-linear firing rate profiles during ascending but not descending phases of contractions. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
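    The model-comparison step (rising exponential versus linear firing rate profiles) can be sketched on synthetic data; the rate values and noise level below are illustrative, not the recorded motor unit data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic firing-rate profile resembling the control condition: a rate
# that rises steeply after recruitment and then saturates (all numbers
# are illustrative, not the recorded motor unit data).
force = np.linspace(0, 1, 50)              # normalized contraction force
rng = np.random.default_rng(3)
rate = 8 + 12 * (1 - np.exp(-6 * force)) + 0.3 * rng.standard_normal(force.size)

def rising_exp(f, r0, amp, k):
    return r0 + amp * (1 - np.exp(-k * f))

def linear(f, r0, m):
    return r0 + m * f

popt_e, _ = curve_fit(rising_exp, force, rate, p0=(8.0, 12.0, 5.0))
popt_l, _ = curve_fit(linear, force, rate, p0=(8.0, 12.0))
sse_e = np.sum((rate - rising_exp(force, *popt_e)) ** 2)
sse_l = np.sum((rate - linear(force, *popt_l)) ** 2)
print(sse_e < sse_l)   # the rising exponential fits this profile better
```

    The study's finding is that sural nerve stimulation flips this comparison: with inhibition engaged, the linear model (or a shallower exponential) wins, consistent with PIC suppression.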

  14. Human embryonic stem cell-derived neurons adopt and regulate the activity of an established neural network

    PubMed Central

    Weick, Jason P.; Liu, Yan; Zhang, Su-Chun

    2011-01-01

    Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298

  15. Cultured Neuronal Networks Express Complex Patterns of Activity and Morphological Memory

    NASA Astrophysics Data System (ADS)

    Raichman, Nadav; Rubinsky, Liel; Shein, Mark; Baruchi, Itay; Volman, Vladislav; Ben-Jacob, Eshel

    The following sections are included: * Cultured Neuronal Networks * Recording the Network Activity * Network Engineering * The Formation of Synchronized Bursting Events * The Characterization of the SBEs * Highly-Active Neurons * Function-Form Relations in Cultured Networks * Analyzing the SBEs Motifs * Network Repertoire * Network under Hypothermia * Summary * Acknowledgments * References

  16. Three-dimensional neural cultures produce networks that mimic native brain activity.

    PubMed

    Bourke, Justin L; Quigley, Anita F; Duchi, Serena; O'Connell, Cathal D; Crook, Jeremy M; Wallace, Gordon G; Cook, Mark J; Kapsa, Robert M I

    2018-02-01

    Development of brain function is critically dependent on neuronal networks organized through three dimensions. Culture of central nervous system neurons has traditionally been limited to two dimensions, restricting growth patterns and network formation to a single plane. Here, with the use of multichannel extracellular microelectrode arrays, we demonstrate that neurons cultured in a true three-dimensional environment recapitulate native neuronal network formation and produce functional outcomes more akin to in vivo neuronal network activity. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Numerical simulation of coherent resonance in a model network of Rulkov neurons

    NASA Astrophysics Data System (ADS)

    Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, and then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
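    A single Rulkov element can be iterated directly; the sketch below uses the standard chaotic-bursting form of the map, with parameter values chosen for illustration (the placement of the noise term on the slow variable is an assumption):

```python
import numpy as np

def rulkov(alpha=4.3, mu=0.001, sigma=-1.2, noise=0.0, steps=50_000, seed=0):
    """Iterate the Rulkov map x_{n+1} = alpha/(1+x_n^2) + y_n,
    y_{n+1} = y_n - mu*(x_n - sigma); returns the fast-variable trace.
    Parameter values are illustrative; noise enters through sigma here,
    which is an assumption about where stochastic input acts."""
    rng = np.random.default_rng(seed)
    x, y = -1.0, -3.0
    xs = np.empty(steps)
    for n in range(steps):
        x_new = alpha / (1 + x * x) + y
        y -= mu * (x - sigma - noise * rng.standard_normal())
        x = x_new
        xs[n] = x
    return xs

xs = rulkov()
print(xs.min(), xs.max())  # silent phases alternate with spikes above zero
```

    Coherence resonance is then probed by sweeping the noise level and measuring the regularity (e.g. the coefficient of variation of interburst intervals), which should pass through a minimum at intermediate noise.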

  19. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
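    For comparison with the Boltzmann-machine-style baseline mentioned, a conventional energy-based stochastic search on a toy TSP instance looks like this (simulated annealing with 2-opt moves; this is the classical baseline, not the spiking-network method of the paper):

```python
import numpy as np

def tour_length(cities, order):
    pts = cities[order]
    return float(np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1)))

def stochastic_search(cities, steps=10_000, t0=1.0, seed=7):
    """Energy-based stochastic search on a toy TSP: simulated annealing
    with 2-opt segment reversals (a conventional Boltzmann-style baseline,
    not the spiking-network method of the paper)."""
    rng = np.random.default_rng(seed)
    n = len(cities)
    order = rng.permutation(n)
    energy = tour_length(cities, order)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-3          # linear cooling schedule
        i, j = sorted(rng.choice(n, 2, replace=False))
        cand = order.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]         # 2-opt segment reversal
        e = tour_length(cities, cand)
        if e < energy or rng.random() < np.exp((energy - e) / temp):
            order, energy = cand, e
    return order, energy

rng = np.random.default_rng(0)
cities = rng.random((12, 2))
order, energy = stochastic_search(cities)
print(energy)
```

    The paper's contribution is to shape an analogous energy landscape directly in the spike dynamics of a stochastic network, so that the network's own noise performs the search.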

  20. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    PubMed

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
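    The separation of network simulation from LFP prediction can be caricatured in a few lines: spikes are generated first, then convolved with fixed postsynaptic kernels standing in for the biophysical forward model (all rates, kernel shapes, and signs below are illustrative assumptions; the real scheme uses multicompartment models via hybridLFPy):

```python
import numpy as np

# Spikes first, fields second: population spike counts from a (stand-in)
# point-neuron simulation are convolved with fixed postsynaptic kernels
# acting as a caricature of the biophysical forward model.
rng = np.random.default_rng(5)
dt, steps = 1.0, 2000                        # ms, 2 s of activity
rates = {"exc": 0.08, "inh": 0.02}           # spikes/neuron/ms (illustrative)
counts = {p: rng.poisson(r * 100, steps)     # 100 neurons per population
          for p, r in rates.items()}

t_k = np.arange(0, 50, dt)
kernels = {"exc": -np.exp(-t_k / 5.0),       # shapes/signs purely illustrative
           "inh": 0.5 * np.exp(-t_k / 10.0)}

lfp = sum(np.convolve(counts[p], kernels[p], mode="full")[:steps]
          for p in rates)
print(lfp.shape)
```

    Because the forward step is decoupled from the network simulation, the same spike trains can be replayed through different forward models, which is the practical appeal of the hybrid scheme.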

  1. Computational model of electrically coupled, intrinsically distinct pacemaker neurons.

    PubMed

    Soto-Treviño, Cristina; Rabbah, Pascale; Marder, Eve; Nadim, Farzan

    2005-07-01

    Electrical coupling between neurons with similar properties has been widely studied. Electrical coupling between neurons with widely different intrinsic properties also occurs, but is less well understood. Inspired by the pacemaker group of the crustacean pyloric network, we developed a multicompartment, conductance-based model of a small network of intrinsically distinct, electrically coupled neurons. In the pyloric network, a small intrinsically bursting neuron, through gap junctions, drives 2 larger, tonically spiking neurons to reliably burst in-phase with it. Each model neuron has 2 compartments, one responsible for spike generation and the other for producing a slow, large-amplitude oscillation. We illustrate how these compartments interact and determine the dynamics of the model neurons. Our model captures the dynamic oscillation range measured from the isolated and coupled biological neurons. At the network level, we explore the range of coupling strengths for which synchronous bursting oscillations are possible. The spatial segregation of ionic currents significantly enhances the ability of the 2 neurons to burst synchronously, and the oscillation range of the model pacemaker network depends not only on the strength of the electrical synapse but also on the identity of the neuron receiving inputs. We also compare the activity of the electrically coupled, distinct neurons with that of a network of coupled identical bursting neurons. For small to moderate coupling strengths, the network of identical elements, when receiving asymmetrical inputs, can have a smaller dynamic range of oscillation than that of its constituent neurons in isolation.
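    The qualitative effect of gap-junction coupling between intrinsically different cells can be sketched with two FitzHugh-Nagumo units in place of the paper's conductance-based model (the drive currents, coupling strengths, and synchrony measure are all illustrative choices):

```python
import numpy as np

def coupled_fhn(g, i1=0.8, i2=0.4, eps=0.08, a=0.7, b=0.8,
                dt=0.05, steps=20_000):
    """Two FitzHugh-Nagumo cells with different drive currents, coupled by a
    gap-junction-like term g*(v_other - v_self). A toy stand-in for the
    conductance-based pyloric model; all parameter values are illustrative."""
    v1, w1, v2, w2 = -1.0, -0.5, -1.2, -0.6
    tr1, tr2 = [], []
    for k in range(steps):
        dv1 = v1 - v1**3 / 3 - w1 + i1 + g * (v2 - v1)
        dv2 = v2 - v2**3 / 3 - w2 + i2 + g * (v1 - v2)
        w1 += dt * eps * (v1 + a - b * w1)
        w2 += dt * eps * (v2 + a - b * w2)
        v1 += dt * dv1
        v2 += dt * dv2
        if k > steps // 2:                  # discard the transient
            tr1.append(v1)
            tr2.append(v2)
    return np.array(tr1), np.array(tr2)

weak = coupled_fhn(g=0.01)
strong = coupled_fhn(g=0.5)
for label, (t1, t2) in [("weak", weak), ("strong", strong)]:
    print(label, round(float(np.corrcoef(t1, t2)[0, 1]), 2))
```

    Stronger diffusive coupling pulls the two detuned cells toward a common rhythm; the paper's central point is that the identity of the coupled cells (and the segregation of their currents) shapes where this synchronous regime lies.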

  2. A Newly Defined Area of the Mouse Anterior Hypothalamus Involved in Septohypothalamic Circuit: Perifornical Area of the Anterior Hypothalamus, PeFAH.

    PubMed

    Horii-Hayashi, Noriko; Nishi, Mayumi

    2018-02-27

    Although the hypothalamus is classified into more than 10 compartments, it still contains uncharacterized areas. In this study, we identified a new triangular-shaped area between the paraventricular hypothalamic nucleus (PVN) and the fornix area in the mouse anterior hypothalamus, which is enriched in chondroitin sulfate proteoglycans (CSPGs). We designated this region as the perifornical area of the anterior hypothalamus (PeFAH) based on its anatomical location. As evidenced by Nissl staining, the PeFAH was distinguishable as an area of relatively low density. Immunohistochemical and DNA microarray analyses indicated that the PeFAH contains sparsely distributed calretinin-positive neurons and densely clustered enkephalin-positive neurons. Furthermore, the PeFAH was shown to have bidirectional neural connections with the lateral septum. Indeed, we confirmed enkephalinergic projections from PeFAH neurons to the lateral septum and, inversely, projections from calbindin-positive lateral septum neurons to the PeFAH. Finally, c-Fos expression analysis revealed that the activity of certain PeFAH neuronal populations tended to be increased by psychological stressors, but not that of the enkephalinergic neurons. We propose the PeFAH as a newly defined region of the anterior hypothalamus.

  3. Shaping Neuronal Network Activity by Presynaptic Mechanisms

    PubMed Central

    Ashery, Uri

    2015-01-01

    Neuronal microcircuits generate oscillatory activity, which has been linked to basic functions such as sleep, learning and sensorimotor gating. Although synaptic release processes are well known for their ability to shape the interaction between neurons in microcircuits, most computational models do not simulate the synaptic transmission process directly and hence cannot explain how changes in synaptic parameters alter neuronal network activity. In this paper, we present a novel neuronal network model that incorporates presynaptic release mechanisms, such as vesicle pool dynamics and calcium-dependent release probability, to model the spontaneous activity of neuronal networks. The model, which is based on modified leaky integrate-and-fire neurons, generates spontaneous network activity patterns, which are similar to experimental data and robust under changes in the model's primary gain parameters such as excitatory postsynaptic potential and connectivity ratio. Furthermore, it reliably recreates experimental findings and provides mechanistic explanations for data obtained from microelectrode array recordings, such as network burst termination and the effects of pharmacological and genetic manipulations. The model demonstrates how elevated asynchronous release, but not spontaneous release, synchronizes neuronal network activity and reveals that asynchronous release enhances utilization of the recycling vesicle pool to induce the network effect. The model further predicts a positive correlation between vesicle priming at the single-neuron level and burst frequency at the network level; this prediction is supported by experimental findings. Thus, the model is utilized to reveal how synaptic release processes at the neuronal level govern activity patterns and synchronization at the network level. PMID:26372048
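    The vesicle-pool mechanism described can be sketched with a Tsodyks-Markram-style depressing synapse (the release fraction and recovery time constant below are illustrative, not the model's fitted parameters):

```python
import numpy as np

def depressing_synapse(spike_times, u=0.4, tau_rec=300.0):
    """Release amplitudes of a synapse with a finite recycling vesicle pool
    (Tsodyks-Markram-style depression; u and tau_rec are illustrative).
    `r` is the available fraction of the pool; each presynaptic spike
    releases u*r and the pool recovers exponentially between spikes."""
    r, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            r = 1 - (1 - r) * np.exp(-(t - last_t) / tau_rec)  # recovery
        amp = u * r
        r -= amp
        last_t = t
        amps.append(amp)
    return amps

amps = depressing_synapse(np.arange(0, 500, 50.0))   # a 20 Hz spike train
print([round(a, 3) for a in amps])  # successive releases depress to a plateau
```

    Exhaustion and recovery of this pool during population bursts is the kind of presynaptic bookkeeping that lets the full model explain burst termination and the effect of asynchronous release on network synchrony.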

  4. Connexin-Dependent Neuroglial Networking as a New Therapeutic Target.

    PubMed

    Charvériat, Mathieu; Naus, Christian C; Leybaert, Luc; Sáez, Juan C; Giaume, Christian

    2017-01-01

    Astrocytes and neurons dynamically interact during physiological processes, and it is now widely accepted that they are both organized in plastic and tightly regulated networks. Astrocytes are connected through connexin-based gap junction channels, with brain region specificities, and those networks modulate neuronal activities, such as those involved in the sleep-wake cycle, cognitive, or sensory functions. Additionally, astrocyte domains have been involved in neurogenesis and neuronal differentiation during development; they participate in the "tripartite synapse" with both pre-synaptic and post-synaptic neurons by tuning neuronal activities down or up through the control of neuronal synaptic strength. Connexin-based hemichannels are also involved in these regulations of neuronal activities; however, this feature will not be considered in the present review. Furthermore, neuronal processes, transmitting electrical signals to chemical synapses, stringently control astroglial connexin expression and channel functions. Long-range energy trafficking toward neurons through connexin-coupled astrocytes, and the plasticity of those networks, are hence largely dependent on neuronal activity. Such reciprocal interactions between neurons and astrocyte networks involve neurotransmitters, cytokines, endogenous lipids, and peptides released by neurons but also by other brain cell types, including microglial and endothelial cells. Over the past 10 years, knowledge about neuroglial interactions has widened and now includes effects of CNS-targeting drugs such as antidepressants, antipsychotics, psychostimulants, or sedative drugs as potential modulators of connexin function and thus astrocyte networking activity. In physiological situations, neuroglial networking consequently results from a two-way interaction between astrocyte gap junction-mediated networks and those made by neurons. As both cell types are modulated by CNS drugs, we postulate that neuroglial networking may emerge as a new therapeutic target in neurological and psychiatric disorders.

  5. Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?

    PubMed

    Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R

    2014-01-08

    How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.

  6. Constrained synaptic connectivity in functional mammalian neuronal networks grown on patterned surfaces.

    PubMed

    Wyart, Claire; Ybert, Christophe; Bourdieu, Laurent; Herr, Catherine; Prinz, Christelle; Chatenay, Didier

    2002-06-30

    The use of ordered neuronal networks in vitro is a promising approach to study the development and the activity of small neuronal assemblies. However, in previous attempts, sufficient growth control and physiological maturation of neurons could not be achieved. Here we describe an original protocol in which polylysine patterns confine the adhesion of cellular bodies to prescribed spots and the neuritic growth to thin lines. Hippocampal neurons in these networks are maintained healthy in serum-free medium for up to 5 weeks in vitro. Electrophysiology and immunochemistry show that the neurons exhibit mature excitatory and inhibitory synapses, and calcium imaging reveals spontaneous activity of neurons in isolated networks. We demonstrate that neurons in these geometrical networks form functional synapses preferentially with their first neighbors. We have, therefore, established a simple and robust protocol to constrain both the location of neuronal cell bodies and their pattern of connectivity. Moreover, the long-term maintenance of the geometry and the physiology of the networks raises the possibility of new applications for systematic screening of pharmacological agents and for electronic-to-neuron devices.

  7. Uncovering Neuronal Networks Defined by Consistent Between-Neuron Spike Timing from Neuronal Spike Recordings

    PubMed Central

    2018-01-01

    It is widely assumed that distributed neuronal networks are fundamental to the functioning of the brain. Consistent spike timing between neurons is thought to be one of the key principles for the formation of these networks. This can involve synchronous spiking or spiking with time delays, forming spike sequences when the order of spiking is consistent. Finding networks defined by their sequence of time-shifted spikes, denoted here as spike timing networks, is a tremendous challenge. As neurons can participate in multiple spike sequences at multiple between-spike time delays, the possible complexity of networks is prohibitively large. We present a novel approach that is capable of (1) extracting spike timing networks regardless of their sequence complexity, and (2) describing their spiking sequences with high temporal precision. We achieve this by decomposing frequency-transformed neuronal spiking into separate networks, characterizing each network’s spike sequence by a time delay per neuron and thereby forming a spike sequence timeline. These networks provide a detailed template for an investigation of the experimental relevance of their spike sequences. Using simulated spike timing networks, we show that network extraction is robust to spiking noise, spike timing jitter, and partial occurrences of the involved spike sequences. Using rat multineuron recordings, we demonstrate that the approach is capable of revealing real spike timing networks with sub-millisecond temporal precision. By uncovering spike timing networks, the prevalence, structure, and function of complex spike sequences can be investigated in greater detail, allowing us to gain a better understanding of their role in neuronal functioning. PMID:29789811

  8. Evaluation of Motor Neuron Excitability by CMAP Scanning with Electric Modulated Current

    PubMed Central

    Araújo, Tiago; Candeias, Rui; Nunes, Neuza; Gamboa, Hugo

    2015-01-01

    Introduction. Compound Muscle Action Potential (CMAP) scanning is a promising noninvasive technique for the diagnosis of neurodegenerative pathologies. In this work, new CMAP scan protocols were implemented to study the influence of the electrical pulse waveform on peripheral nerve excitability. Methods. A total of 13 healthy subjects were tested. Stimulation was performed with intensities increasing from 4 to 30 mA. The procedure was repeated 4 times per subject, each time using a different single-pulse stimulation waveform: monophasic square, triangular, quadratic, or biphasic square. Results. Different waveforms elicit different intensity-response amplitude curves. The square pulse needs less current than the other waveforms to generate the same response amplitude, and this effect gradually decreases for the triangular, quadratic, and biphasic pulses, respectively. Conclusion. The stimulation waveform has a direct influence on the stimulus-response slope and consequently on motoneuron excitability. This could provide a new prognostic parameter for neurodegenerative disorders. PMID:26413499
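    The four single-pulse shapes compared in the study can be sketched numerically. This is a hypothetical parameterization for illustration (the abstract does not give exact pulse definitions), but it shows one reason equal peak currents can produce different responses: the shapes deliver different amounts of charge.

```python
import numpy as np

def pulse(shape, amplitude_mA, n=100):
    """One normalized-duration stimulation pulse of the given shape
    (hypothetical parameterization; the study's exact definitions may differ)."""
    t = np.linspace(0.0, 1.0, n)              # normalized time within the pulse
    if shape == "monophasic_square":
        w = np.ones(n)
    elif shape == "triangular":
        w = 1.0 - np.abs(2.0 * t - 1.0)       # linear ramp up, then down
    elif shape == "quadratic":
        w = 1.0 - (2.0 * t - 1.0) ** 2        # parabolic bump
    elif shape == "biphasic_square":
        w = np.where(t < 0.5, 1.0, -1.0)      # positive then negative phase
    else:
        raise ValueError(shape)
    return amplitude_mA * w

# At equal peak current the shapes carry different charge; e.g. the square
# pulse delivers more than the triangular, and the biphasic pulse nets zero.
assert pulse("monophasic_square", 10.0).sum() > pulse("triangular", 10.0).sum()
assert abs(pulse("biphasic_square", 10.0).sum()) < 1e-9
```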

  9. Phase-space networks of geometrically frustrated systems.

    PubMed

    Han, Yilong

    2009-11-01

    We illustrate a network approach to phase-space studies using two geometrical frustration models: the antiferromagnet on a triangular lattice and square ice. Their highly degenerate ground states are mapped as discrete networks such that quantitative network analysis can be applied to phase-space studies. The resulting phase spaces share some common features and establish a class of complex networks with unique Gaussian spectral densities. Although phase-space networks are heterogeneously connected, the systems are still ergodic due to the random Poisson processes. This network approach can be generalized to the phase spaces of some other complex systems.

  10. A principle of economy predicts the functional architecture of grid cells.

    PubMed

    Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay

    2015-09-03

    Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.
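    Predictions (ii) and (iii) are easy to check numerically; the sketch below only restates the abstract's own numbers (the 5-module example is an illustrative consequence, not a figure from the paper):

```python
import math

# Prediction (iii): adjacent grid scales should differ by a factor of sqrt(e)
ratio = math.sqrt(math.e)
print(round(ratio, 4))  # 1.6487

# The idealized value lies inside the 1.4-1.7 window quoted for realistic neurons
assert 1.4 < ratio < 1.7

# Prediction (ii): with m modules whose periods form a geometric progression
# at this ratio, the range-to-resolution factor grows as ratio**m
# (e.g. 5 modules span roughly a 12-fold range at fixed resolution).
assert 12 < ratio ** 5 < 13
```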

  11. Simplicity and efficiency of integrate-and-fire neuron models.

    PubMed

    Plesser, Hans E; Diesmann, Markus

    2009-02-01

    Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
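    The "exact integration" technique referred to above exploits the closed-form solution of the subthreshold dynamics: per time step the membrane state is advanced by a precomputed exponential propagator, and each incoming spike simply adds its weight, so no summation over the spike history is ever needed. A minimal sketch with illustrative parameters (not the VSSN or any particular simulator's code):

```python
import math

def lif_exact(input_spikes, dt=0.1, tau=10.0, t_end=50.0, w=1.5, v_th=10.0):
    """Exact subthreshold integration for a leaky integrate-and-fire neuron
    with delta-pulse synapses: the membrane potential is multiplied by the
    precomputed propagator exp(-dt/tau) once per step, and incoming spikes
    add their weight directly.  All parameters are illustrative."""
    decay = math.exp(-dt / tau)              # exact one-step propagator
    spikes = sorted(input_spikes)
    v, t, i, out = 0.0, 0.0, 0, []
    while t < t_end:
        v *= decay                           # exact solution of dV/dt = -V/tau
        while i < len(spikes) and spikes[i] < t + dt:
            v += w                           # delta synapse: instantaneous jump
            i += 1
        if v >= v_th:
            out.append(t + dt)               # record the threshold crossing
            v = 0.0                          # reset
        t += dt
    return out

# Ten closely spaced input spikes drive the neuron over threshold exactly once.
assert len(lif_exact([1.0 + 0.1 * k for k in range(10)])) == 1
```

    The cost per step is constant regardless of how many spikes arrived in the past, which is the efficiency advantage at stake in the comparison above.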

  12. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    PubMed

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing of sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole-cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine how reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) alter the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and the differing membrane properties of functional classes reciprocally modulate this effect.

  13. Population coding in sparsely connected networks of noisy neurons.

    PubMed

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.

  14. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    PubMed

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. 
This approach also uncovers how several components of biological networks may work together to efficiently carry out computation.

  15. Neural networks with local receptive fields and superlinear VC dimension.

    PubMed

    Schmitt, Michael

    2002-04-01

    Local receptive field neurons comprise such well-known and widely used unit types as radial basis function (RBF) neurons and neurons with center-surround receptive field. We study the Vapnik-Chervonenkis (VC) dimension of feedforward neural networks with one hidden layer of these units. For several variants of local receptive field neurons, we show that the VC dimension of these networks is superlinear. In particular, we establish the bound Ω(W log k) for any reasonably sized network with W parameters and k hidden nodes. This bound is shown to hold for discrete center-surround receptive field neurons, which are physiologically relevant models of cells in the mammalian visual system, for neurons computing a difference of Gaussians, which are popular in computational vision, and for standard RBF neurons, a major alternative to sigmoidal neurons in artificial neural networks. The result for RBF neural networks is of particular interest since it answers a question that has been open for several years. The results also give rise to lower bounds for networks with fixed input dimension. Regarding constants, all bounds are larger than those known thus far for similar architectures with sigmoidal neurons. The superlinear lower bounds contrast with linear upper bounds for single local receptive field neurons, also derived here.

  16. Results on a binding neuron model and their implications for modified hourglass model for neuronal network.

    PubMed

    Arunachalam, Viswanathan; Akhavan-Tabatabaei, Raha; Lopez, Cristina

    2013-01-01

    Classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. Binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper explicitly develops several useful results for the binding neuron, such as the firing-time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which interconnected neurons receive excitatory as well as inhibitory inputs. Limited simulation results for the hourglass network are presented.
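    A minimal sketch of the binding-neuron idea as summarized above: each input impulse is stored for a fixed time tau and then forgotten, and the neuron fires, clearing its memory, when enough impulses are held simultaneously. The threshold, tau, and input times are illustrative, and Vidybida's full model is richer than this.

```python
def binding_neuron(input_times, tau=1.0, threshold=3):
    """Toy binding neuron: impulses older than tau are forgotten; the
    neuron fires and clears its memory the moment `threshold` impulses
    are simultaneously stored.  Parameters are illustrative."""
    stored, fires = [], []
    for t in sorted(input_times):
        # forget impulses whose trace has expired
        stored = [s for s in stored if t - s < tau]
        stored.append(t)
        if len(stored) >= threshold:
            fires.append(t)
            stored = []            # memory is cleared on firing
    return fires

# Three impulses arriving within tau trigger a spike; widely spaced
# impulses are forgotten one by one and never do.
assert binding_neuron([0.0, 0.3, 0.6]) == [0.6]
assert binding_neuron([0.0, 2.0, 4.0]) == []
```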

  17. Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity

    PubMed Central

    Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2016-01-01

    Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce a state-dependent stochastic bursting neuron model that allows a change in its firing pattern independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single-neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis-like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points. PMID:27212008

  18. Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity.

    PubMed

    Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2016-05-23

    Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce a state-dependent stochastic bursting neuron model that allows a change in its firing pattern independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single-neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis-like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points.

  19. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method is presented for permuting vector-stored upper-triangular-diagonal (UD) factorized covariance arrays and vector-stored upper-triangular square-root information arrays. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and only a one-dimensional scratch array is required. To make the method efficient for large arrays on a virtual-memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
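    The core operation can be sketched with NumPy: cyclically shift one index of an upper-triangular factor to the end, then re-triangularize. For brevity the sketch uses a QR factorization (itself built from Householder/Givens steps) instead of the paper's hand-rolled fast Givens rotations, and it ignores the vector-storage layout and paging optimizations:

```python
import numpy as np

def cyclic_permute_and_retriangularize(R, k):
    """Move index k of an upper-triangular factor R to the end by a cyclic
    permutation of indices k..n-1, then restore upper-triangular form.
    Sketch only: retriangularization is delegated to np.linalg.qr rather
    than explicit fast Givens rotations."""
    n = R.shape[0]
    perm = [i for i in range(n) if i != k] + [k]   # index k moves to the end
    A = R[:, perm]                # column permutation breaks triangularity
    _, R_new = np.linalg.qr(A)    # re-triangularize
    return perm, R_new

# The retriangularized factor represents the same "covariance" R^T R,
# just with its rows and columns reordered.
R = np.triu(np.arange(1.0, 10.0).reshape(3, 3))
perm, R2 = cyclic_permute_and_retriangularize(R, 0)
M = R.T @ R
assert np.allclose(R2.T @ R2, M[np.ix_(perm, perm)])
```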

  20. Synchronization in a chaotic neural network with time delay depending on the spatial distance between neurons

    NASA Astrophysics Data System (ADS)

    Tang, Guoning; Xu, Kesheng; Jiang, Luoluo

    2011-10-01

    Synchronization is investigated in a two-dimensional Hindmarsh-Rose neuronal network by introducing a global coupling scheme with time delay, where the length of the delay is proportional to the spatial distance between neurons. We find that the time delay always disturbs synchronization of the neuronal network. When both the coupling strength and the delay per unit distance (i.e., the enlargement factor) are large enough, the time delay induces abnormal membrane-potential oscillations in neurons. Specifically, the abnormal oscillations of symmetrically placed neurons are in antiphase, so that a large coupling strength and enlargement factor lead to desynchronization of the network. Complete and intermittently complete synchronization of the neuronal network are observed for appropriate parameter choices. The physical mechanism underlying these phenomena is analyzed.

  1. Transition to subthreshold activity with the use of phase shifting in a model thalamic network

    NASA Astrophysics Data System (ADS)

    Thomas, Elizabeth; Grisar, Thierry

    1997-05-01

    Absence epilepsy involves a state of low-frequency synchronous oscillations in the involved neuronal networks. These oscillations may be either suprathreshold or subthreshold. In this investigation, we studied methods that could be utilized to transform the suprathreshold activity of neurons in the network to a subthreshold state. A model thalamic network was constructed using the Hodgkin-Huxley framework. Subthreshold activity was achieved by applying stimuli to the network that caused phase shifts in the oscillatory activity of selected neurons. In some instances the stimulus was a low-frequency periodic pulse train delivered to the reticular thalamic neurons of the network, while in others it was a constant hyperpolarizing current applied to the thalamocortical neurons.

  2. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.

    PubMed

    Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
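    The self-consistency idea can be illustrated in a much simpler setting: a scalar fixed-point iteration in which a neuron's output statistic must match the statistic implied by the recurrent input that output helps generate. In the paper the iterated object is a whole power spectrum; here, purely for illustration, it is a single firing rate with a logistic rate function and made-up parameters:

```python
import math

def self_consistent_rate(gain, offset, J, r0=0.1, tol=1e-10, max_iter=1000):
    """Scalar caricature of the self-consistency condition: the rate the
    neuron emits must equal the rate implied by the recurrent input that
    rate generates.  The logistic rate function and the parameters gain,
    offset, and recurrent coupling J are illustrative inventions."""
    r = r0
    for _ in range(max_iter):
        r_new = 1.0 / (1.0 + math.exp(-gain * (offset + J * r)))
        if abs(r_new - r) < tol:
            break
        r = r_new
    return r_new

# With J = -1 (inhibition-dominated feedback) and these symmetric
# parameters, the fixed point is exactly 1/2: the rate reproduces itself
# through the network loop.
r = self_consistent_rate(gain=2.0, offset=0.5, J=-1.0)
assert abs(r - 0.5) < 1e-6
```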

  3. Medullary neurons in the core white matter of the olfactory bulb: a new cell type.

    PubMed

    Paredes, Raúl G; Larriva-Sahd, Jorge

    2010-02-01

    The structure of a new cell type, termed the medullary neuron (MN) because of its intimate association with the rostral migratory stream (RMS) in the bulbar core, is described in the adult rat olfactory bulb. The MN is a triangular or polygonal interneuron whose soma lies between the cellular clusters of the RMS or, less frequently, among the neuron progenitors therein. MNs are easily distinguished from adjacent cells by their large size and differentiated structure. Two MN subtypes have been categorized by the Golgi technique: spiny pyramidal neurons and aspiny neurons. Both MN subtypes bear a large dendritic field impinged upon by axons in the core bulbar white matter. A set of collaterals from the adjacent axons appears to terminate on the MN dendrites. The MN axon passes in close apposition to adjacent neuron progenitors in the RMS. MNs are immunoreactive with antisera raised against gamma-aminobutyric acid and glutamate decarboxylase 65/67. Electron-microscopic observations confirm that MNs correspond to fully differentiated, mature neurons. MNs seem to be highly conserved among macrosmatic species as they occur in Nissl-stained brain sections from mouse, guinea pig, and hedgehog. Although the functional role of MNs remains to be determined, we suggest that MNs represent a cellular interface between endogenous olfactory activity and the differentiation of new neurons generated during adulthood.

  4. Multiplex Networks of Cortical and Hippocampal Neurons Revealed at Different Timescales

    PubMed Central

    Timme, Nicholas; Ito, Shinya; Myroshnychenko, Maxym; Yeh, Fang-Chin; Hiolski, Emma; Hottowy, Pawel; Beggs, John M.

    2014-01-01

    Recent studies have emphasized the importance of multiplex networks – interdependent networks with shared nodes and different types of connections – in systems primarily outside of neuroscience. Though the multiplex properties of networks are frequently not considered, most networks are actually multiplex networks and the multiplex specific features of networks can greatly affect network behavior (e.g. fault tolerance). Thus, the study of networks of neurons could potentially be greatly enhanced using a multiplex perspective. Given the wide range of temporally dependent rhythms and phenomena present in neural systems, we chose to examine multiplex networks of individual neurons with time scale dependent connections. To study these networks, we used transfer entropy – an information theoretic quantity that can be used to measure linear and nonlinear interactions – to systematically measure the connectivity between individual neurons at different time scales in cortical and hippocampal slice cultures. We recorded the spiking activity of almost 12,000 neurons across 60 tissue samples using a 512-electrode array with 60 micrometer inter-electrode spacing and 50 microsecond temporal resolution. To the best of our knowledge, this preparation and recording method represents a superior combination of number of recorded neurons and temporal and spatial recording resolutions to any currently available in vivo system. We found that highly connected neurons (“hubs”) were localized to certain time scales, which, we hypothesize, increases the fault tolerance of the network. Conversely, a large proportion of non-hub neurons were not localized to certain time scales. In addition, we found that long and short time scale connectivity was uncorrelated. Finally, we found that long time scale networks were significantly less modular and more disassortative than short time scale networks in both tissue types. 
As far as we are aware, this analysis represents the first systematic study of temporally dependent multiplex networks among individual neurons. PMID:25536059
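
As an illustrative aside: transfer entropy between binned spike trains can be estimated from plug-in probabilities. The sketch below is not the authors' code — it uses single-bin histories and a single lag, whereas the study scanned many time scales — but it computes TE(X → Y) in bits for binary trains:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, lag=1):
    """Plug-in estimate of transfer entropy TE(X -> Y) in bits for
    binary spike trains, with single-bin histories and a single lag."""
    yt = y[lag:]        # future of Y
    yp = y[:-lag]       # past of Y
    xp = x[:-lag]       # past of X
    n = len(yt)
    joint = Counter(zip(yt, yp, xp))
    p_ypxp = Counter(zip(yp, xp))
    p_ytyp = Counter(zip(yt, yp))
    p_yp = Counter(yp)
    te = 0.0
    for (a, b, c), cnt in joint.items():
        p_abc = cnt / n
        # p(yt | yp, xp) / p(yt | yp), in plug-in probabilities
        te += p_abc * np.log2((p_abc / (p_ypxp[(b, c)] / n)) /
                              ((p_ytyp[(a, b)] / n) / (p_yp[b] / n)))
    return te
```

Scanning `lag` over a range of values yields the kind of time-scale-dependent connectivity the study describes; longer histories require correspondingly more data to estimate reliably.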

  5. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    PubMed Central

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  6. Computational models of neuron-astrocyte interactions lead to improved efficacy in the performance of neural networks.

    PubMed

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.

  7. A modeling comparison of projection neuron- and neuromodulator-elicited oscillations in a central pattern generating network.

    PubMed

    Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan

    2008-06-01

    Many central pattern generating networks are influenced by synaptic input from modulatory projection neurons. The network response to a projection neuron is sometimes mimicked by bath applying the neuronally-released modulator, despite the absence of network interactions with the projection neuron. One interesting example occurs in the crab stomatogastric ganglion (STG), where bath applying the neuropeptide pyrokinin (PK) elicits a gastric mill rhythm which is similar to that elicited by the projection neuron modulatory commissural neuron 1 (MCN1), despite the absence of PK in MCN1 and the fact that MCN1 is not active during the PK-elicited rhythm. MCN1 terminals have fast and slow synaptic actions on the gastric mill network and are presynaptically inhibited by this network in the STG. These local connections are inactive in the PK-elicited rhythm, and the mechanism underlying this rhythm is unknown. We use mathematical and biophysically-realistic modeling to propose potential mechanisms by which PK can elicit a gastric mill rhythm that is similar to the MCN1-elicited rhythm. We analyze slow-wave network oscillations using simplified mathematical models and, in parallel, develop biophysically-realistic models that account for fast, action potential-driven oscillations and some spatial structure of the network neurons. Our results illustrate how the actions of bath-applied neuromodulators can mimic those of descending projection neurons through mathematically similar but physiologically distinct mechanisms.

  8. Simulating synchronization in neuronal networks

    NASA Astrophysics Data System (ADS)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
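
A minimal sketch of this setup, with illustrative parameter values (the article's actual values are not reproduced here): a Watts-Strogatz graph wires a population of leaky integrate-and-fire neurons, and the rewiring probability p controls how lattice-like the connectivity is.

```python
import numpy as np

def watts_strogatz(n, k, p, rng):
    """Adjacency matrix of a Watts-Strogatz graph: ring lattice with k
    neighbours per side, each edge rewired with probability p."""
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k + 1):
            A[i, (i + j) % n] = A[(i + j) % n, i] = True
    for i in range(n):
        for j in range(1, k + 1):
            if rng.random() < p:
                old = (i + j) % n
                choices = np.flatnonzero(~A[i])
                choices = choices[choices != i]       # no self-loops
                new = rng.choice(choices)
                A[i, old] = A[old, i] = False
                A[i, new] = A[new, i] = True
    return A

def simulate_lif(A, t_max=2000.0, dt=0.5, tau=20.0, v_th=1.0,
                 i_ext=1.05, g=0.02, rng=None):
    """Leaky integrate-and-fire network with pulse coupling along the
    graph A; returns a boolean spike raster of shape (steps, n)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = A.shape[0]
    conn = A.astype(float)
    v = rng.random(n) * v_th              # random initial voltages
    raster = []
    for _ in range(int(t_max / dt)):
        fired = v >= v_th
        v[fired] = 0.0                    # reset neurons that spiked
        # leak toward i_ext*v_th plus pulses from spiking neighbours
        v += dt / tau * (i_ext * v_th - v) + g * (conn @ fired.astype(float))
        raster.append(fired)
    return np.array(raster)
```

Increasing p from 0 adds the random shortcuts that, per the simulations described above, sharply increase synchrony; a synchrony index can be read off from the variance of the population firing rate over time.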

  9. Impact of Partial Time Delay on Temporal Dynamics of Watts-Strogatz Small-World Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Sun, Xiaojuan

    2017-06-01

    In this paper, we mainly discuss the effects of partial time delay on the temporal dynamics of Watts-Strogatz (WS) small-world neuronal networks by controlling two parameters: the time delay τ and the probability of partial time delay pdelay. The temporal dynamics of the WS small-world neuronal networks are characterized by temporal coherence and mean firing rate. The simulation results reveal that, for small time delay τ, increasing pdelay weakens temporal coherence and increases the mean firing rate of the networks; that is, it promotes neuronal firing while degrading firing regularity. For large time delay τ, temporal coherence and mean firing rate change little with pdelay. The time delay τ itself strongly influences both temporal coherence and mean firing rate regardless of the value of pdelay. Moreover, analysis of spike trains and interspike-interval histograms of neurons inside the networks suggests that these effects of partial time delay arise from locking between the period of neuronal firing activity and the value of the time delay τ. In brief, partial time delay can strongly influence the temporal dynamics of neuronal networks.

  10. Emergent Oscillations in Networks of Stochastic Spiking Neurons

    PubMed Central

    van Drongelen, Wim; Cowan, Jack D.

    2011-01-01

    Networks of neurons produce diverse patterns of oscillations, arising from the network's global properties, the propensity of individual neurons to oscillate, or a mixture of the two. Here we describe noisy limit cycles and quasi-cycles, two related mechanisms underlying emergent oscillations in neuronal networks whose individual components, stochastic spiking neurons, do not themselves oscillate. Both mechanisms are shown to produce gamma band oscillations at the population level while individual neurons fire at a rate much lower than the population frequency. Spike trains in a network undergoing noisy limit cycles display a preferred period which is not found in the case of quasi-cycles, due to the even faster decay of phase information in quasi-cycles. These oscillations persist in sparsely connected networks, and variation of the network's connectivity results in variation of the oscillation frequency. A network of such neurons behaves as a stochastic perturbation of the deterministic Wilson-Cowan equations, and the network undergoes noisy limit cycles or quasi-cycles depending on whether these have limit cycles or a weakly stable focus. These mechanisms provide a new perspective on the emergence of rhythmic firing in neural networks, showing the coexistence of population-level oscillations with very irregular individual spike trains in a simple and general framework. PMID:21573105
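
The quasi-cycle mechanism can be sketched by adding noise to deterministic Wilson-Cowan rate equations. All parameter values below are illustrative assumptions, not taken from the paper; the point is only that a stochastically perturbed excitatory-inhibitory rate model can sustain irregular population oscillations:

```python
import numpy as np

def wilson_cowan_quasicycle(t_max=500.0, dt=0.01, sigma=0.05, seed=1):
    """Euler-Maruyama integration of noise-driven Wilson-Cowan rate
    equations (illustrative parameters). Returns the excitatory trace."""
    rng = np.random.default_rng(seed)
    f = lambda x: 1.0 / (1.0 + np.exp(-x))      # sigmoid gain
    # coupling weights and inputs: assumed values, not from the paper
    wee, wei, wie, wii = 16.0, 12.0, 15.0, 3.0
    he, hi = -4.0, -3.7
    e, i = 0.1, 0.1
    trace = []
    for _ in range(int(t_max / dt)):
        de = -e + f(wee * e - wei * i + he)
        di = -i + f(wie * e - wii * i + hi)
        e += dt * de + sigma * np.sqrt(dt) * rng.normal()
        i += dt * di + sigma * np.sqrt(dt) * rng.normal()
        trace.append(e)
    return np.array(trace)
```

If the deterministic system sits at a weakly stable focus, setting sigma = 0 makes the oscillation decay away, while noise continually re-excites it — the quasi-cycle regime described above, in which population-level rhythm coexists with irregular individual activity.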

  11. Identified Serotonergic Modulatory Neurons Have Heterogeneous Synaptic Connectivity within the Olfactory System of Drosophila.

    PubMed

    Coates, Kaylynn E; Majot, Adam T; Zhang, Xiaonan; Michael, Cole T; Spitzer, Stacy L; Gaudry, Quentin; Dacks, Andrew M

    2017-08-02

    Modulatory neurons project widely throughout the brain, dynamically altering network processing based on an animal's physiological state. The connectivity of individual modulatory neurons can be complex, as they often receive input from a variety of sources and are diverse in their physiology, structure, and gene expression profiles. To establish basic principles about the connectivity of individual modulatory neurons, we examined a pair of identified neurons, the "contralaterally projecting, serotonin-immunoreactive deutocerebral neurons" (CSDns), within the olfactory system of Drosophila. Specifically, we determined the neuronal classes providing synaptic input to the CSDns within the antennal lobe (AL), an olfactory network targeted by the CSDns, and the degree to which CSDn active zones are uniformly distributed across the AL. Using anatomical techniques, we found that the CSDns received glomerulus-specific input from olfactory receptor neurons (ORNs) and projection neurons (PNs), and networkwide input from local interneurons (LNs). Furthermore, we quantified the number of CSDn active zones in each glomerulus and found that CSDn output is not uniform, but rather heterogeneous, across glomeruli and stereotyped from animal to animal. Finally, we demonstrate that the CSDns synapse broadly onto LNs and PNs throughout the AL but do not synapse upon ORNs. Our results demonstrate that modulatory neurons do not necessarily provide purely top-down input but rather receive neuron class-specific input from the networks that they target, and that even a two-cell modulatory network has a highly heterogeneous, yet stereotyped, pattern of connectivity. SIGNIFICANCE STATEMENT Modulatory neurons often project broadly throughout the brain to alter processing based on physiological state. 
However, the connectivity of individual modulatory neurons to their target networks is not well understood, as modulatory neuron populations are heterogeneous in their physiology, morphology, and gene expression. In this study, we use a pair of identified serotonergic neurons within the Drosophila olfactory system as a model to establish a framework for modulatory neuron connectivity. We demonstrate that individual modulatory neurons can integrate neuron class-specific input from their target network, which is often nonreciprocal. Additionally, modulatory neuron output can be stereotyped, yet nonuniform, across network regions. Our results provide new insight into the synaptic relationships that underlie network function of modulatory neurons. Copyright © 2017 the authors.

  12. A real-time hybrid neuron network for highly parallel cognitive systems.

    PubMed

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For a comprehensive understanding of how neurons communicate with each other, new tools are needed that can accurately mimic the behaviour of neurons and neuron networks under 'real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce resource usage without adversely affecting calculation latency, a single exponent instance is shared across multiple neuron calculations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4× compared to the state of the art.

  13. An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons

    PubMed Central

    Li, Jing; Katori, Yuichi; Kohno, Takashi

    2012-01-01

    This paper presents a digital silicon neuronal network which simulates the nervous system and can execute intelligent tasks such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure with shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to handle data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network can retrieve stored patterns if the inputs contain enough information about them. The retrieval probability increases with the similarity between the input and the stored pattern. Synchronization of neurons is observed when a stored pattern is successfully retrieved. PMID:23269911

  14. Triangular Quantum Loop Topography for Machine Learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Kim, Eun-Ah

    Despite rapidly growing interest in harnessing machine learning in the study of quantum many-body systems, there has been little success in training neural networks to identify topological phases. The key challenge is efficiently extracting essential information from the many-body Hamiltonian or wave function and turning it into an image that can be fed into a neural network. When targeting topological phases, this task becomes particularly challenging because topological phases are defined in terms of non-local properties. Here we introduce triangular quantum loop (TQL) topography: a procedure for constructing a multi-dimensional image from the "sample" Hamiltonian or wave function using two-point functions that form triangles. Feeding the TQL topography to a fully connected neural network with a single hidden layer, we demonstrate that the architecture can be effectively trained to distinguish the Chern insulator and the fractional Chern insulator from trivial insulators with high fidelity. Given the versatility of the TQL topography procedure, which can handle different lattice geometries, disorder, interaction and even degeneracy, our work paves the route towards powerful applications of machine learning in the study of topological quantum matter.

  15. Intrinsically active and pacemaker neurons in pluripotent stem cell-derived neuronal populations.

    PubMed

    Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig

    2014-03-11

    Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks.

  16. Intrinsically Active and Pacemaker Neurons in Pluripotent Stem Cell-Derived Neuronal Populations

    PubMed Central

    Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig

    2014-01-01

    Summary Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks. PMID:24672755

  17. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    PubMed

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which synchronous spiking activity starts propagating in the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. One can therefore conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations in the spatial density of neurons, under the random homogeneous distribution typical of in vitro experiments, do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  18. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves

    NASA Astrophysics Data System (ADS)

    Paraskevov, A. V.; Zendrikov, D. K.

    2017-04-01

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which synchronous spiking activity starts propagating in the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. One can therefore conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations in the spatial density of neurons, under the random homogeneous distribution typical of in vitro experiments, do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  19. A new cross-correlation algorithm for the analysis of "in vitro" neuronal network activity aimed at pharmacological studies.

    PubMed

    Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A

    2011-08-15

    Modern drug discovery for Central Nervous System pathologies has recently focused its attention to in vitro neuronal networks as models for the study of neuronal activities. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing an insight into the dynamics of networks. Taking advantage of MEAs features and making the most of the cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra network burst correlation algorithm, we evaluated its sensitivity and we explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
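
The core operation in such analyses — a normalized cross-correlogram between two binned spike trains — can be sketched as follows. This is a generic implementation, not the authors' intra-network burst correlation algorithm:

```python
import numpy as np

def cross_correlogram(a, b, max_lag):
    """Normalized cross-correlogram between two binned spike trains.
    cc[k] correlates a[t] with b[t + lags[k]]; values lie in [-1, 1]."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a = a - a.mean()                       # remove rate offsets
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.empty(len(lags))
    for k, lag in enumerate(lags):
        if lag >= 0:
            cc[k] = np.sum(a[:len(a) - lag] * b[lag:])
        else:
            cc[k] = np.sum(a[-lag:] * b[:len(b) + lag])
    return lags, cc / denom
```

The lag of the correlogram peak estimates the latency between two channels; averaging peak values over electrode pairs gives a network-level correlation readout of the kind used for dose-response comparisons.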

  20. Neural networks with multiple general neuron models: a hybrid computational intelligence approach using Genetic Programming.

    PubMed

    Barton, Alan J; Valdés, Julio J; Orchard, Robert

    2009-01-01

    Classical neural networks are composed of neurons whose nature is determined by a certain function (the neuron model), usually pre-specified. In this paper, a type of neural network (NN-GP) is presented in which: (i) each neuron may have its own neuron model in the form of a general function, (ii) any layout (i.e., network interconnection) is possible, and (iii) no bias nodes or weights are associated with the connections, neurons or layers. The general functions associated with a neuron are learned by searching a function space. They are not provided a priori, but are rather built as part of an Evolutionary Computation process based on Genetic Programming. The resulting network solutions are evaluated based on a fitness measure, which may, for example, be based on classification or regression errors. Two real-world examples are presented to illustrate the promising behaviour on classification problems via construction of a low-dimensional representation of a high-dimensional parameter space associated with the set of all network solutions.

  1. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation

    PubMed Central

    Bayati, Mehdi; Valizadeh, Alireza; Abbassian, Abdolhossein; Cheng, Sen

    2015-01-01

    Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity propagates synchronously through the network. This activity originates from the site of the local excitation and propagates through the network. The synchronous activity propagation persists, even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence. PMID:26089794

  2. Synaptic Impairment and Robustness of Excitatory Neuronal Networks with Different Topologies

    PubMed Central

    Mirzakhalili, Ehsan; Gourgou, Eleni; Booth, Victoria; Epureanu, Bogdan

    2017-01-01

    Synaptic deficiencies are a known hallmark of neurodegenerative diseases, but the diagnosis of impaired synapses on the cellular level is not an easy task. Nonetheless, changes in the system-level dynamics of neuronal networks with damaged synapses can be detected using techniques that do not require high spatial resolution. This paper investigates how the structure/topology of neuronal networks influences their dynamics when they suffer from synaptic loss. We study different neuronal network structures/topologies by specifying their degree distributions. The modes of the degree distribution can be used to construct networks that contain rich clubs and also resemble small-world networks. We define two dynamical metrics to compare the activity of networks with different structures: persistent activity (namely, the self-sustained activity of the network upon removal of the initial stimulus) and quality of activity (namely, the percentage of neurons that participate in the persistent activity of the network). Our results show that synaptic loss affects the persistent activity of networks with bimodal degree distributions less than it affects random networks. The robustness of neuronal networks increases as the distance between the modes of the degree distribution grows, suggesting that the rich clubs of networks with distinct modes keep the whole network active. In addition, a tradeoff is observed between the quality of activity and the persistent activity. For a range of distributions, both of these dynamical metrics are considerably higher for networks with bimodal degree distributions than for random networks. We also propose three different scenarios of synaptic impairment, which may correspond to different pathological or biological conditions. Regardless of the network structure/topology, results demonstrate that synaptic loss has more severe effects on the activity of the network when impairments are correlated with the activity of the neurons. 
PMID:28659765
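
Bimodal degree distributions of the kind studied above can be realized, for instance, by drawing node degrees from two Poisson modes and wiring stubs with a configuration model. The sketch below is a generic construction, not the authors':

```python
import numpy as np

def bimodal_degree_sequence(n, k_low, k_high, frac_high, rng):
    """Degree sequence with two modes: most nodes get ~k_low links,
    while a 'rich club' fraction gets ~k_high."""
    n_high = int(frac_high * n)
    deg = np.r_[rng.poisson(k_high, n_high), rng.poisson(k_low, n - n_high)]
    if deg.sum() % 2:          # configuration model needs an even stub count
        deg[0] += 1
    return deg

def configuration_model(deg, rng):
    """Random multigraph with the given degree sequence: list each node
    once per half-edge ('stub'), shuffle, and pair stubs up."""
    stubs = np.repeat(np.arange(len(deg)), deg)
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))
```

Increasing the separation between `k_low` and `k_high` widens the gap between the two modes, which is the structural knob the paper associates with greater robustness to synaptic loss.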

  3. PhotoMEA: an opto-electronic biosensor for monitoring in vitro neuronal network activity.

    PubMed

    Ghezzi, Diego; Pedrocchi, Alessandra; Menegon, Andrea; Mantero, Sara; Valtorta, Flavia; Ferrigno, Giancarlo

    2007-02-01

    PhotoMEA is a biosensor for the analysis of an in vitro neuronal network, fully based on optical methods. Its function is based on the stimulation of neurons with caged glutamate and the recording of neuronal activity with Voltage-Sensitive fluorescent Dyes (VSD). The main advantage is the possibility of stimulating even at the sub-single-neuron level and of recording the activity of the entire network in the culture at high resolution. A large-scale view of neuronal intercommunication offers a unique opportunity for testing the ability of drugs to affect neuronal properties as well as alterations in the behaviour of the entire network. The concept and a prototype for validation are described here in detail.

  4. Synaptic dynamics regulation in response to high frequency stimulation in neuronal networks

    NASA Astrophysics Data System (ADS)

    Su, Fei; Wang, Jiang; Li, Huiyan; Wei, Xile; Yu, Haitao; Deng, Bin

    2018-02-01

    High-frequency stimulation (HFS) has a confirmed ability to modulate pathological neural activity; however, its detailed mechanism is unclear. This study explores the effects of HFS on neuronal network dynamics. First, two-neuron FitzHugh-Nagumo (FHN) networks with static coupling strength and small-world FHN networks with synaptic coupling strength modulated by spike-timing-dependent plasticity (STDP) are constructed. Then, the multi-scale method is used to transform the network models into equivalent averaged models, where the HFS intensity is modeled as the ratio between stimulation amplitude and frequency. Results show that in static two-neuron networks, synaptic current is still projected to the postsynaptic neuron even if the presynaptic neuron is blocked by the HFS. In the small-world networks, the effects of the STDP adjusting rate parameter on the inactivation ratio and synchrony degree increase as HFS intensity increases. However, only when the HFS intensity becomes very large can the STDP time window parameter affect the inactivation ratio and synchrony index. Both simulation and numerical analysis demonstrate that the effects of HFS on neuronal network dynamics are realized through the adjustment of synaptic variables and conductances.
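
As a toy illustration of the ingredients named above (not the paper's averaged model), two diffusively coupled FitzHugh-Nagumo neurons with optional high-frequency forcing on one of them can be integrated with explicit Euler; all parameter values are illustrative assumptions:

```python
import numpy as np

def fhn_pair(t_max=200.0, dt=0.01, g=0.5, a_hfs=0.0, f_hfs=20.0):
    """Two diffusively coupled FitzHugh-Nagumo neurons; neuron 0 may
    receive sinusoidal high-frequency stimulation of amplitude a_hfs
    and frequency f_hfs. Returns the membrane-variable traces."""
    eps, a, b, i0 = 0.08, 0.7, 0.8, 0.5   # classic oscillatory regime
    v = np.array([0.0, 0.5])
    w = np.zeros(2)
    trace = []
    t = 0.0
    for _ in range(int(t_max / dt)):
        hfs = np.array([a_hfs * np.sin(2 * np.pi * f_hfs * t), 0.0])
        coupling = g * (v[::-1] - v)      # diffusive (gap-junction-like)
        dv = v - v**3 / 3 - w + i0 + coupling + hfs
        dw = eps * (v + a - b * w)
        v = v + dt * dv
        w = w + dt * dw
        t += dt
        trace.append(v.copy())
    return np.array(trace)
```

Varying `a_hfs` and `f_hfs` lets one probe how the forcing — whose intensity in the paper's averaged models is the amplitude-to-frequency ratio — alters the pair's dynamics.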

  5. Training a Network of Electronic Neurons for Control of a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated and results of an experiment with a mobile robot that is completely autonomously driving in an unknown environment with obstacles are presented.

  6. Establishment of a Human Neuronal Network Assessment System by Using a Human Neuron/Astrocyte Co-Culture Derived from Fetal Neural Stem/Progenitor Cells.

    PubMed

    Fukushima, Kazuyuki; Miura, Yuji; Sawada, Kohei; Yamazaki, Kazuto; Ito, Masashi

    2016-01-01

    Using human cell models mimicking the central nervous system (CNS) provides a better understanding of the human CNS, and it is a key strategy to improve success rates in CNS drug development. In the CNS, neurons function as networks in which astrocytes play important roles. Thus, an assessment system of neuronal network functions in a co-culture of human neurons and astrocytes has the potential to accelerate CNS drug development. We previously demonstrated that human hippocampus-derived neural stem/progenitor cells (HIP-009 cells) were a novel tool to obtain human neurons and astrocytes in the same culture. In this study, we applied HIP-009 cells to a multielectrode array (MEA) system to detect neuronal signals as neuronal network functions. We observed spontaneous firing of HIP-009 neurons, and validated functional formation of neuronal networks pharmacologically. By using this assay system, we investigated the effects of several reference compounds, including agonists and antagonists of glutamate and γ-aminobutyric acid receptors, and of sodium, potassium, and calcium channels, on neuronal network functions using firing and burst numbers, and synchrony, as readouts. These results indicate that the HIP-009/MEA assay system is applicable to the pharmacological assessment of drug candidates affecting synaptic functions for CNS drug development. © 2015 Society for Laboratory Automation and Screening.

  7. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Michiels van Kessenich, L.; de Arcangelis, L.; Herrmann, H. J.

    2016-08-01

    Neuronal avalanches measured in vitro and in vivo in different cortical networks consistently exhibit power law behaviour for the size and duration distributions, with exponents typical of a mean field self-organized branching process. These exponents are also recovered in neuronal network simulations implementing various neuronal dynamics on different network topologies. They can therefore be considered a very robust feature of spontaneous neuronal activity. Interestingly, this scaling behaviour is also observed on regular lattices in finite dimensions, which raises the question of the origin of the mean field behaviour observed experimentally. In this study we provide an answer to this open question by investigating the effect of activity-dependent plasticity in combination with the neuronal refractory time in a neuronal network. Results show that the refractory time hinders backward avalanches, forcing directed propagation. Hebbian plastic adaptation sculpts these directed avalanche patterns into the topology of the network, slowly changing it into a branched structure in which loops are marginal.
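    The mechanism by which a refractory period blocks backward avalanches can be illustrated with a deterministic toy chain (an illustration of the principle only, not the authors' network model): with a one-step refractory period, activity started in the middle sweeps outward once and dies out; without it, activity reverberates indefinitely.

```python
def propagate(n, start, steps, refractory=True):
    """Spread activity to nearest neighbours on a chain of n sites; refractory
    sites cannot refire on the next step, which blocks backward avalanches."""
    active = {start}
    refrac = set()
    total = 1                      # avalanche size: total number of firings
    for _ in range(steps):
        nxt = set()
        for i in active:
            for j in (i - 1, i + 1):
                if 0 <= j < n and (not refractory or j not in refrac):
                    nxt.add(j)
        refrac = active if refractory else set()
        active = nxt
        total += len(active)
        if not active:             # avalanche has terminated
            break
    return total, active

size_ref, active_ref = propagate(n=11, start=5, steps=20, refractory=True)
size_free, active_free = propagate(n=11, start=5, steps=20, refractory=False)
```

    With the refractory rule each of the 11 sites fires exactly once and the avalanche terminates; without it, backward reactivation keeps the chain firing.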

  8. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches.

    PubMed

    Michiels van Kessenich, L; de Arcangelis, L; Herrmann, H J

    2016-08-18

    Neuronal avalanches measured in vitro and in vivo in different cortical networks consistently exhibit power law behaviour for the size and duration distributions, with exponents typical of a mean field self-organized branching process. These exponents are also recovered in neuronal network simulations implementing various neuronal dynamics on different network topologies. They can therefore be considered a very robust feature of spontaneous neuronal activity. Interestingly, this scaling behaviour is also observed on regular lattices in finite dimensions, which raises the question of the origin of the mean field behaviour observed experimentally. In this study we provide an answer to this open question by investigating the effect of activity-dependent plasticity in combination with the neuronal refractory time in a neuronal network. Results show that the refractory time hinders backward avalanches, forcing directed propagation. Hebbian plastic adaptation sculpts these directed avalanche patterns into the topology of the network, slowly changing it into a branched structure in which loops are marginal.

  9. Dynamic range in small-world networks of Hodgkin-Huxley neurons with chemical synapses

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Viana, R. L.; Lopes, S. R.; Batista, A. M.

    2014-09-01

    According to Stevens' law the relationship between stimulus and response is a power-law within an interval called the dynamic range. The dynamic range of sensory organs is found to be larger than that of a single neuron, suggesting that the network structure plays a key role in the behavior of both the scaling exponent and the dynamic range of neuron assemblies. In order to verify computationally the relationships between stimulus and response for spiking neurons, we investigate small-world networks of neurons described by the Hodgkin-Huxley equations connected by chemical synapses. We found that the dynamic range increases with the network size, suggesting that the enhancement of the dynamic range observed in sensory organs, with respect to single neurons, is an emergent property of complex network dynamics.
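    The dynamic range itself is conventionally quantified as Δ = 10·log10(S90/S10), where S10 and S90 are the stimulus intensities producing 10% and 90% of the response range. A minimal sketch using a saturating (Hill-type) response curve, chosen here purely for illustration:

```python
import numpy as np

def dynamic_range(stimuli, responses):
    """Delta = 10*log10(S_90 / S_10), with S_x the stimulus producing x% of the
    response range (linear interpolation along the response curve)."""
    r0, rmax = responses.min(), responses.max()
    targets = [r0 + 0.1 * (rmax - r0), r0 + 0.9 * (rmax - r0)]
    s10, s90 = np.interp(targets, responses, stimuli)   # responses must be increasing
    return 10.0 * np.log10(s90 / s10)

S = np.logspace(-3, 2, 501)    # stimulus intensities over five decades
F = S / (S + 1.0)              # saturating Hill response (exponent m = 1)
delta = dynamic_range(S, F)    # close to 10*log10(81) ~ 19 dB for m = 1
```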

  10. The relevance of network micro-structure for neural dynamics.

    PubMed

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.

  11. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
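    The core idea that stochastic neurons can implement sampling can be caricatured with a two-unit Boltzmann machine updated by Gibbs sampling, where each 'neuron' fires with a sigmoidal probability of its input (a toy stand-in for the paper's spiking-network construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_boltzmann(w, b, sweeps):
    """Gibbs sampling in a two-unit Boltzmann machine: unit i turns on with
    probability sigmoid(b_i + w * z_other), a caricature of stochastic firing."""
    z = np.zeros(2, dtype=int)
    samples = np.empty((sweeps, 2), dtype=int)
    for t in range(sweeps):
        for i in (0, 1):
            u = b[i] + w * z[1 - i]
            z[i] = int(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        samples[t] = z
    return samples

samples = gibbs_boltzmann(w=1.0, b=np.zeros(2), sweeps=20000)
p1_est = samples[:, 0].mean()             # empirical P(z_1 = 1) from the samples
p1_exact = (1.0 + np.e) / (3.0 + np.e)    # exact marginal for this tiny model
```

    The empirical firing frequency converges to the exact marginal, which is the sense in which network activity "is" inference by sampling.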

  12. Synaptic Plasticity and Spike Synchronisation in Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.

    2017-12-01

    Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or to brain injury. In this review, we present our results on the effects of synaptic plasticity in neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. When excitation is of the same order as inhibition, the evolved network exhibits the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology in which neurons are sparsely connected to other neurons, also a topology typical of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both natures with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.
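    The plasticity rule driving such topology evolution is commonly a spike-timing-dependent (asymmetric Hebbian) window; a generic sketch with illustrative parameter values (not necessarily those used by the authors):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Asymmetric Hebbian (STDP) window: potentiate when the presynaptic spike
    precedes the postsynaptic one (delta_t = t_post - t_pre > 0), else depress.
    Times in ms; a_minus > a_plus gives a net-depressing window."""
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

dw_causal = stdp_dw(np.array([5.0]))[0]      # pre -> post pairing: potentiation
dw_acausal = stdp_dw(np.array([-5.0]))[0]    # post -> pre pairing: depression
```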

  13. Homeostatic Scaling of Excitability in Recurrent Neural Networks

    PubMed Central

    Remme, Michiel W. H.; Wadman, Wytse J.

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604
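    The essence of HSE can be sketched as a slow integral controller on each neuron's gain: a toy rate model (assuming a simple linear rate r = g·drive, which is not the paper's network model) shows the rate returning to its target after a persistent change in drive.

```python
def homeostatic_rate(drive, target=5.0, g0=1.0, eta=0.01, steps=5000):
    """Slowly scale intrinsic excitability g so that the rate r = g * drive
    relaxes to the target despite a persistent change in synaptic drive."""
    g = g0
    for _ in range(steps):
        r = g * drive
        g += eta * (target - r)   # integral controller on the rate error
        g = max(g, 0.0)           # excitability cannot go negative
    return g * drive

r_low = homeostatic_rate(drive=2.0)    # weak drive: excitability scales up
r_high = homeostatic_rate(drive=20.0)  # strong drive: excitability scales down
```

    Both conditions relax to the same target rate; the network-level instabilities discussed in the abstract arise when many such controllers interact through recurrent connections.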

  14. Detection of 5-hydroxytryptamine (5-HT) in vitro using a hippocampal neuronal network-based biosensor with extracellular potential analysis of neurons.

    PubMed

    Hu, Liang; Wang, Qin; Qin, Zhen; Su, Kaiqi; Huang, Liquan; Hu, Ning; Wang, Ping

    2015-04-15

    5-hydroxytryptamine (5-HT) is an important neurotransmitter that regulates emotions and related behaviors in mammals. Effective and convenient methods to detect and monitor 5-HT are in demand for the investigation of neuronal networks. In this study, hippocampal neuronal networks (HNNs), which endogenously express 5-HT receptors, were employed as sensing elements to build an in vitro neuronal network-based biosensor. The electrophysiological characteristics were analyzed at both the single-neuron and network levels. Firing rates and amplitudes were derived from the recorded signals to determine the biosensor response characteristics. The experimental results demonstrate a dose-dependent inhibitory effect of 5-HT on hippocampal neuron activities, indicating the effectiveness of this hybrid biosensor in detecting 5-HT over a response range from 0.01 μmol/L to 10 μmol/L. In addition, cross-correlation analysis of HNN activities suggests that 5-HT reversibly weakens HNN connectivity, providing additional specificity of this biosensor in detecting 5-HT. Moreover, 5-HT-induced spatiotemporal firing pattern alterations could be monitored at the neuron and network levels simultaneously in a convenient and direct way. With these merits, this neuronal network-based biosensor promises to be a valuable platform for the in vitro study of neurotransmitters. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Extensive excitatory network interactions shape temporal processing of communication signals in a model sensory system.

    PubMed

    Ma, Xiaofeng; Kohashi, Tsunehiko; Carlson, Bruce A

    2013-07-01

    Many sensory brain regions are characterized by extensive local network interactions. However, we know relatively little about the contribution of this microcircuitry to sensory coding. Detailed analyses of neuronal microcircuitry are usually performed in vitro, whereas sensory processing is typically studied by recording from individual neurons in vivo. The electrosensory pathway of mormyrid fish provides a unique opportunity to link in vitro studies of synaptic physiology with in vivo studies of sensory processing. These fish communicate by actively varying the intervals between pulses of electricity. Within the midbrain posterior exterolateral nucleus (ELp), the temporal filtering of afferent spike trains establishes interval tuning by single neurons. We characterized pairwise neuronal connectivity among ELp neurons with dual whole cell recording in an in vitro whole brain preparation. We found a densely connected network in which single neurons influenced the responses of other neurons throughout the network. Similarly tuned neurons were more likely to share an excitatory synaptic connection than differently tuned neurons, and synaptic connections between similarly tuned neurons were stronger than connections between differently tuned neurons. We propose a general model for excitatory network interactions in which strong excitatory connections both reinforce and adjust tuning and weak excitatory connections make smaller modifications to tuning. The diversity of interval tuning observed among this population of neurons can be explained, in part, by each individual neuron receiving a different complement of local excitatory inputs.

  16. Activity of cardiorespiratory networks revealed by transsynaptic virus expressing GFP.

    PubMed

    Irnaten, M; Neff, R A; Wang, J; Loewy, A D; Mettenleiter, T C; Mendelowitz, D

    2001-01-01

    A fluorescent transneuronal marker capable of labeling individual neurons in a central network while maintaining their normal physiology would permit functional studies of neurons within entire networks responsible for complex behaviors such as cardiorespiratory reflexes. The Bartha strain of pseudorabies virus (PRV), an attenuated swine alpha herpesvirus, can be used as a transsynaptic marker of neural circuits. Bartha PRV invades neuronal networks in the CNS through peripherally projecting axons, replicates in these parent neurons, and then travels transsynaptically to label second- and higher-order neurons in a time-dependent manner. A Bartha PRV mutant that expresses green fluorescent protein (GFP) was used to visualize and record from neurons that determine the vagal motor outflow to the heart. Here we show that Bartha PRV-GFP-labeled neurons retain their normal electrophysiological properties and that the labeled baroreflex pathways that control heart rate are unaltered by the virus. This novel transsynaptic virus permits in vitro studies of identified neurons within functionally defined neuronal systems, including networks that mediate cardiovascular and respiratory function and their interactions. We also demonstrate that superior laryngeal motoneurons fire spontaneously and synapse on cardiac vagal neurons in the nucleus ambiguus. This cardiorespiratory pathway provides a neural basis for respiratory sinus arrhythmia.

  17. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks

    PubMed Central

    Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks. PMID:29551968
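    Stripped of the spectral machinery, the self-consistency idea reduces to a fixed-point iteration: the output statistics a neuron produces must match the input statistics assumed for its presynaptic partners. A scalar caricature using a toy sigmoidal transfer function (the actual scheme iterates full power spectra, not rates):

```python
import numpy as np

def transfer(mu):
    """Toy sigmoidal rate transfer function (a stand-in for the LIF rate formula)."""
    return 30.0 / (1.0 + np.exp(-(mu - 5.0)))

def self_consistent_rate(j, k, iters=200):
    """Iterate r -> f(j*k*r): the output rate must equal the rate assumed for
    the k presynaptic partners, each coupled with strength j."""
    r = 1.0
    for _ in range(iters):
        r = transfer(j * k * r)
    return r

r_star = self_consistent_rate(j=0.01, k=100)
residual = abs(r_star - transfer(0.01 * 100 * r_star))   # self-consistency error
```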

  18. Nanostructured superhydrophobic substrates trigger the development of 3D neuronal networks.

    PubMed

    Limongi, Tania; Cesca, Fabrizia; Gentile, Francesco; Marotta, Roberto; Ruffilli, Roberta; Barberis, Andrea; Dal Maschio, Marco; Petrini, Enrica Maria; Santoriello, Stefania; Benfenati, Fabio; Di Fabrizio, Enzo

    2013-02-11

    The generation of 3D networks of primary neurons is a major challenge in neuroscience. Here, a novel method is presented for 3D neuronal culture on superhydrophobic (SH) substrates, and how nano-patterned SH devices stimulate neurons to build 3D networks is investigated. Scanning electron microscopy and confocal imaging show that, soon after plating, neurites adhere to the nanopatterned pillar sidewalls and are subsequently pulled between pillars into a suspended position. These neurons display an enhanced survival rate compared to standard cultures and develop mature networks with physiological excitability. These findings underline the importance of nanostructured SH surfaces for directing 3D neuronal growth, as well as for the design of biomaterials for neuronal regeneration. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons

    NASA Astrophysics Data System (ADS)

    Costa, Ariadne; Brochini, Ludmila; Kinouchi, Osame

    2017-08-01

    Networks of stochastic spiking neurons are interesting models in theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here we study fully connected networks analytically, numerically, and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary, slightly supercritical state (self-organized supercriticality, or SOSC) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss possible applications of the idea of SOSC to biological phenomena like epilepsy and dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches, with frequencies compatible with characteristic brain rhythms.
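    A mean-field caricature of the gain mechanism (a toy map assembled for illustration, not the authors' model equations): activity follows a branching-type map while the gain is depressed by firing and slowly recovers, self-organizing to a slightly supercritical value.

```python
import numpy as np

def sosc_meanfield(tau=1000.0, u=0.1, steps=20000):
    """Firing fraction rho follows a branching-type map; the gain gamma is
    depressed by activity (u * rho) and recovers at rate 1/tau, so the
    stationary point sits at rho* = 1/(u*tau), i.e. a slightly supercritical gain."""
    gamma, rho = 2.0, 0.5
    for _ in range(steps):
        rho = max(1.0 - np.exp(-gamma * rho), 1e-6)   # tiny drive avoids silence
        gamma *= 1.0 + 1.0 / tau - u * rho
    return gamma, rho

gamma_star, rho_star = sosc_meanfield()   # gain hovers just above criticality
```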

  20. [Morphological and laminar distribution of cholecystokinin-immunoreactive neurons in cortex of human inferior parietal lobe and their clinical significance].

    PubMed

    Puskas, Laslo; Draganić-Gajić, Saveta; Malobabić, Slobodan; Puskas, Nela; Krivokuća, Dragan; Stanković, Gordana

    2008-01-01

    Cholecystokinin is a neuropeptide whose function in the cortex has not yet been clarified, although its relation to certain psychiatric disorders has been noted. Previous studies have not provided detailed data about the types or arrangement of neurons containing this neuropeptide in the cortex of the human inferior parietal lobule. The aim of this study was to examine precisely the morphology and topography of cholecystokinin-containing neurons in the human cortex of the inferior parietal lobule. In five human brains, we performed an immunocytochemical study of the shape and laminar distribution of cholecystokinin-immunoreactive neurons on serial sections of the supramarginal and angular gyri. The morphological analysis of cholecystokinin-immunoreactive neurons was done on frozen sections using the avidin-biotin technique, with an antibody to cholecystokinin diluted 1:6000 and visualized with diaminobenzidine. Cholecystokinin-immunoreactive neurons were found in the first three layers of the cortex of the inferior parietal lobule, with their densest concentration in the 2nd and 3rd layers. The following types of neurons were found: bipolar neurons, including their fusiform subtype; Cajal-Retzius neurons (in the 1st layer); inverted pyramidal (triangular) neurons; and unipolar neurons. The diameters of some types of neurons ranged from 15 to 35 µm, and the diameters of their dendritic arborizations from 85 to 207 µm. Special emphasis is placed on the finding of Cajal-Retzius neurons immunoreactive to cholecystokinin, which demands further research. Bearing in mind numerous clinical studies pointing out the role of cholecystokinin in the pathogenesis of schizophrenia, the presence of a great number of cholecystokinin-immunoreactive neurons in the cortex of the inferior parietal lobule suggests their role in the pathogenesis of schizophrenia.

  1. Estimating network parameters from combined dynamics of firing rate and irregularity of single neurons.

    PubMed

    Hamaguchi, Kosuke; Riehle, Alexa; Brunel, Nicolas

    2011-01-01

    High firing irregularity is a hallmark of cortical neurons in vivo, and modeling studies suggest a balance of excitation and inhibition is necessary to explain this high irregularity. Such a balance must be generated, at least partly, from local interconnected networks of excitatory and inhibitory neurons, but the details of the local network structure are largely unknown. The dynamics of the neural activity depends on the local network structure; this in turn suggests the possibility of estimating network structure from the dynamics of the firing statistics. Here we report a new method to estimate properties of the local cortical network from the instantaneous firing rate and irregularity (CV(2)) under the assumption that recorded neurons are a part of a randomly connected sparse network. The firing irregularity, measured in monkey motor cortex, exhibits two features; many neurons show relatively stable firing irregularity in time and across different task conditions; the time-averaged CV(2) is widely distributed from quasi-regular to irregular (CV(2) = 0.3-1.0). For each recorded neuron, we estimate the three parameters of a local network [balance of local excitation-inhibition, number of recurrent connections per neuron, and excitatory postsynaptic potential (EPSP) size] that best describe the dynamics of the measured firing rates and irregularities. Our analysis shows that optimal parameter sets form a two-dimensional manifold in the three-dimensional parameter space that is confined for most of the neurons to the inhibition-dominated region. High irregularity neurons tend to be more strongly connected to the local network, either in terms of larger EPSP and inhibitory PSP size or larger number of recurrent connections, compared with the low irregularity neurons, for a given excitatory/inhibitory balance. 
Incorporating either synaptic short-term depression or conductance-based synapses leads many low CV(2) neurons to move to the excitation-dominated region as well as to an increase of EPSP size.
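    The local irregularity measure CV2 used here is computed from consecutive interspike intervals; a minimal sketch:

```python
import numpy as np

def cv2(spike_times):
    """Local irregularity CV2 = <2|I_{k+1} - I_k| / (I_{k+1} + I_k)> over
    consecutive interspike intervals; ~0 for regular, ~1 for Poisson-like firing."""
    isi = np.diff(np.asarray(spike_times))
    return float(np.mean(2.0 * np.abs(isi[1:] - isi[:-1]) / (isi[1:] + isi[:-1])))

regular = np.arange(0.0, 10.0, 0.1)              # perfectly regular train
rng = np.random.default_rng(1)
poisson = np.cumsum(rng.exponential(0.1, 1000))  # Poisson-like train

cv2_reg = cv2(regular)    # near 0 for a regular train
cv2_poi = cv2(poisson)    # near 1 for a Poisson process
```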

  2. High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels†

    PubMed Central

    Müller, Jan; Ballini, Marco; Livi, Paolo; Chen, Yihui; Radivojevic, Milos; Shadmani, Amir; Viswam, Vijay; Jones, Ian L.; Fiscella, Michele; Diggelmann, Roland; Stettler, Alexander; Frey, Urs; Bakkum, Douglas J.; Hierlemann, Andreas

    2017-01-01

    Studies on information processing and learning properties of neuronal networks would benefit from simultaneous and parallel access to the activity of a large fraction of all neurons in such networks. Here, we present a CMOS-based device, capable of simultaneously recording the electrical activity of over a thousand cells in in vitro neuronal networks. The device provides sufficiently high spatiotemporal resolution to enable, at the same time, access to neuronal preparations on subcellular, cellular, and network level. The key feature is a rapidly reconfigurable array of 26 400 microelectrodes arranged at low pitch (17.5 μm) within a large overall sensing area (3.85 × 2.10 mm2). An arbitrary subset of the electrodes can be simultaneously connected to 1024 low-noise readout channels as well as 32 stimulation units. Each electrode or electrode subset can be used to electrically stimulate or record the signals of virtually any neuron on the array. We demonstrate the applicability and potential of this device for various different experimental paradigms: large-scale recordings from whole networks of neurons as well as investigations of axonal properties of individual neurons. PMID:25973786

  3. High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels.

    PubMed

    Müller, Jan; Ballini, Marco; Livi, Paolo; Chen, Yihui; Radivojevic, Milos; Shadmani, Amir; Viswam, Vijay; Jones, Ian L; Fiscella, Michele; Diggelmann, Roland; Stettler, Alexander; Frey, Urs; Bakkum, Douglas J; Hierlemann, Andreas

    2015-07-07

    Studies on information processing and learning properties of neuronal networks would benefit from simultaneous and parallel access to the activity of a large fraction of all neurons in such networks. Here, we present a CMOS-based device, capable of simultaneously recording the electrical activity of over a thousand cells in in vitro neuronal networks. The device provides sufficiently high spatiotemporal resolution to enable, at the same time, access to neuronal preparations on subcellular, cellular, and network level. The key feature is a rapidly reconfigurable array of 26 400 microelectrodes arranged at low pitch (17.5 μm) within a large overall sensing area (3.85 × 2.10 mm²). An arbitrary subset of the electrodes can be simultaneously connected to 1024 low-noise readout channels as well as 32 stimulation units. Each electrode or electrode subset can be used to electrically stimulate or record the signals of virtually any neuron on the array. We demonstrate the applicability and potential of this device for various different experimental paradigms: large-scale recordings from whole networks of neurons as well as investigations of axonal properties of individual neurons.

  4. Phase synchronization of bursting neurons in clustered small-world networks

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Lameu, E. L.; Batista, A. M.; Lopes, S. R.; Pereira, T.; Zamora-López, G.; Kurths, J.; Viana, R. L.

    2012-07-01

    We investigate the collective dynamics of bursting neurons on clustered networks. The clustered network model is composed of subnetworks, each of them presenting the so-called small-world property. This model can also be regarded as a network of networks. In each subnetwork a neuron is connected to other ones with regular as well as random connections, the latter with a given intracluster probability. Moreover, in a given subnetwork each neuron has an intercluster probability to be connected to the other subnetworks. The local neuron dynamics has two time scales (fast and slow) and is modeled by a two-dimensional map. In such small-world network the neuron parameters are chosen to be slightly different such that, if the coupling strength is large enough, there may be synchronization of the bursting (slow) activity. We give bounds for the critical coupling strength to obtain global burst synchronization in terms of the network structure, that is, the probabilities of intracluster and intercluster connections. We find that, as the heterogeneity in the network is reduced, the network global synchronizability is improved. We show that the transitions to global synchrony may be abrupt or smooth depending on the intercluster probability.
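    A widely used two-dimensional map with fast and slow variables of this kind is the Rulkov map; a single-neuron sketch (parameter values are the commonly used ones and may differ from those in the paper):

```python
import numpy as np

def rulkov(alpha=4.1, sigma=0.001, beta=0.001, steps=50000):
    """Rulkov's two-dimensional map with a fast variable x and a slow variable y,
    a standard model of chaotic bursting:
        x' = alpha / (1 + x^2) + y,   y' = y - sigma * x - beta."""
    x, y = -1.0, -3.0
    xs = np.empty(steps)
    for n in range(steps):
        x, y = alpha / (1.0 + x * x) + y, y - sigma * x - beta
        xs[n] = x
    return xs

xs = rulkov()
burst_amp = xs.max() - xs.min()   # spikes rise well above the quiescent level
```

    The fast variable alternates between a quiescent phase and bursts of spikes; the slow variable, changing by order sigma per step, switches the fast subsystem between these regimes.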

  5. Effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks

    NASA Astrophysics Data System (ADS)

    Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen

    2017-05-01

    In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay pdelay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.
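    The notion of a partial delay probability can be sketched as a per-connection coin flip on top of a Watts-Strogatz graph (a structural illustration only; the dynamics and parameter values of the paper are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

def watts_strogatz_edges(n, k, p):
    """Ring lattice with k nearest neighbours per side, each edge rewired with
    probability p (the small-world construction)."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:                  # rewire the far endpoint
                b = int(rng.integers(n))
                while b == a or (min(a, b), max(a, b)) in edges:
                    b = int(rng.integers(n))
            edges.add((min(a, b), max(a, b)))
    return edges

def partial_delays(edges, p_delay):
    """Mark each connection as delayed with probability p_delay."""
    return {e: bool(rng.random() < p_delay) for e in edges}

edges = watts_strogatz_edges(n=200, k=2, p=0.1)
delayed = partial_delays(edges, p_delay=0.3)
frac = sum(delayed.values()) / len(delayed)   # fraction of delayed connections
```

    In a simulation, a delayed edge would feed the neighbour's state from time t - τ instead of time t; p_delay = 1 recovers the fully delayed network used as the comparison case in the abstract.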

  6. Effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks.

    PubMed

    Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen

    2017-05-01

    In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay pdelay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.

  7. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity

    PubMed Central

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over time windows on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852

  8. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity.

    PubMed

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over time windows on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows.

  9. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable, in that they could be trained to perform more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
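    The synchronous update rule sketched in this description, where each neuron's next state is determined by the sign of the inner product of the current binary pattern with its weight row, is the classic autoassociative (Hopfield-style) dynamic. The sketch below uses a conventional Hebbian outer-product rule with real-valued weights for brevity; the nexus bit-weight scheme itself is not reproduced:

```python
import numpy as np

def hebbian_weights(patterns):
    # Outer-product (Hebbian) storage rule; zero diagonal removes self-input.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def step(W, state):
    # Each neuron fires (+1) iff the inner product of the current
    # binary vector with its weight row is non-negative.
    return np.where(W @ state >= 0, 1, -1)

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = hebbian_weights(patterns.astype(float))
noisy = patterns[0].copy()
noisy[0] = -noisy[0]                 # corrupt one bit of a stored pattern
recalled = step(W, step(W, noisy))   # two synchronous time steps
```

    Starting from a stored pattern with one flipped bit, the synchronous dynamics returns to the stored pattern, illustrating how such networks "move vectors over hyperspace landscapes" toward stored attractors.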

  10. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks

    PubMed Central

    Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370
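    A toy version of an energy-gated spiking population can illustrate the central idea of the SNEP model, where each spike drains a shared local energy pool that regenerates over time. This is not the authors' implementation; the neuron model, pool size, spike cost, and regeneration rate are all made-up illustrative values:

```python
import numpy as np

def run_snep(steps=2000, n=50, seed=1):
    """Toy energy-gated population: a neuron may spike only if the
    shared pool can fund it; each spike drains the pool, which
    regenerates at a fixed rate (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    v = rng.random(n)                      # membrane potentials
    pool = 1.0                             # shared energy pool (max 1)
    cost, regen, thresh = 0.02, 0.005, 1.0
    rates = []
    for _ in range(steps):
        v += 0.05 + 0.02 * rng.random(n)   # noisy constant drive
        ready = v >= thresh
        budget = int(pool // cost)         # spikes the pool can fund now
        idx = np.flatnonzero(ready)[:budget]
        v[idx] = 0.0                       # fire and reset the funded neurons
        pool = min(1.0, pool - len(idx) * cost + regen)
        rates.append(len(idx) / n)
    return np.array(rates)

rates = run_snep()
```

    Because regeneration refills the pool at a fixed rate, long-run firing is capped near regen/cost spikes per step regardless of how strongly the neurons are driven, which is the kind of resource limitation the paper argues reshapes network dynamics.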

  11. Developing a tissue-engineered neural-electrical relay using encapsulated neuronal constructs on conducting polymer fibers.

    PubMed

    Cullen, D Kacy; R Patel, Ankur; Doorish, John F; Smith, Douglas H; Pfister, Bryan J

    2008-12-01

    Neural-electrical interface platforms are being developed to extracellularly monitor neuronal population activity. Polyaniline-based electrically conducting polymer fibers are attractive substrates for sustained functional interfaces with neurons due to their flexibility, tailored geometry and controlled electro-conductive properties. In this study, we addressed the neurobiological considerations of utilizing small diameter (<400 microm) fibers consisting of a blend of electrically conductive polyaniline and polypropylene (PA-PP) as the backbone of encapsulated tissue-engineered neural-electrical relays. We devised new approaches to promote survival, adhesion and neurite outgrowth of primary dorsal root ganglion neurons on PA-PP fibers. We attained a greater than ten-fold increase in the density of viable neurons on fiber surfaces to approximately 700 neurons mm(-2) by manipulating surrounding surface charges to bias settling neuronal suspensions toward fibers coated with cell-adhesive ligands. This stark increase in neuronal density resulted in robust neuritic extension and network formation directly along the fibers. Additionally, we encapsulated these neuronal networks on PA-PP fibers using agarose to form a protective barrier while potentially facilitating network stability. Following encapsulation, the neuronal networks maintained integrity, high viability (>85%) and intimate adhesion to PA-PP fibers. These efforts accomplished key prerequisites for the establishment of functional electrical interfaces with neuronal populations using small diameter PA-PP fibers-specifically, improved neurocompatibility, high-density neuronal adhesion and neuritic network development directly on fiber surfaces.

  12. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    PubMed

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.
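    The rate-network side of this comparison can be illustrated with the standard random rate model dx/dt = -x + g J tanh(x), which is quiescent below the critical coupling g = 1 and fluctuates above it. Network size, integration step, and the two coupling values below are illustrative choices, not the matched networks from the paper:

```python
import numpy as np

def simulate_rate_net(g, N=200, T=2000, dt=0.1, seed=3):
    """Random rate network dx/dt = -x + g J tanh(x), Euler-integrated.
    J has iid Gaussian entries of variance 1/N, so the transition to
    fluctuating activity sits near g = 1."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    x = rng.normal(0.0, 1.0, N)
    traj = np.empty((T, N))
    for t in range(T):
        x = x + dt * (-x + g * J @ np.tanh(x))
        traj[t] = x
    return traj

quiet = simulate_rate_net(g=0.5)      # subcritical: activity decays
chaotic = simulate_rate_net(g=2.0)    # supercritical: fluctuations persist
```

    Below the transition the activity decays to zero; above it the network settles into self-sustained fluctuating activity, the regime whose autocorrelation timescale the reanalysis contrasts with the spiking LIF network.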

  13. Cultured neuronal networks as environmental biosensors.

    PubMed

    O'Shaughnessy, Thomas J; Gray, Samuel A; Pancrazio, Joseph J

    2004-01-01

    Contamination of water by toxins, either intentionally or unintentionally, is a growing concern for both military and civilian agencies and thus there is a need for systems capable of monitoring a wide range of natural and industrial toxicants. The EILATox-Oregon Workshop held in September 2002 provided an opportunity to test the capabilities of a prototype neuronal network-based biosensor with unknown contaminants in water samples. The biosensor is a portable device capable of recording the action potential activity from a network of mammalian neurons grown on glass microelectrode arrays. Changes in the action potential firing rate across the network are monitored to determine exposure to toxicants. A series of three neuronal networks derived from mice was used to test seven unknown samples. Two of these unknowns later were revealed to be blanks, to which the neuronal networks did not respond. Of the five remaining unknowns, a significant change in network activity was detected for four of the compounds at concentrations below a lethal level for humans: mercuric chloride, sodium arsenite, phosdrin and chlordimeform. These compounds--two heavy metals, an organophosphate and an insecticide--demonstrate the breadth of detection possible with neuronal networks. The results generated at the workshop show the promise of the neuronal network biosensor as an environmental detector but there is still considerable effort needed to produce a device suitable for routine environmental threat monitoring.
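    The core detection logic, flagging departures of the network-wide firing rate from its baseline, might be sketched as a simple z-score threshold. The rates, the threshold, and the simulated toxicant effect below are invented numbers for illustration, not data or methods from the workshop:

```python
import numpy as np

def detect_change(rates, baseline_len=50, z_thresh=5.0):
    """Flag samples where the network firing rate departs from the
    baseline mean by more than z_thresh baseline standard deviations."""
    base = rates[:baseline_len]
    mu, sd = base.mean(), base.std()
    z = (rates - mu) / sd
    return np.abs(z) > z_thresh

rng = np.random.default_rng(4)
rates = rng.normal(10.0, 0.5, 200)   # spikes/s; made-up baseline statistics
rates[120:] *= 0.4                   # simulated toxicant suppresses firing
hits = detect_change(rates)
```

    During the clean baseline nothing is flagged, while the simulated exposure, which suppresses firing well beyond baseline variability, is flagged on every subsequent sample.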

  14. Neurons from the adult human dentate nucleus: neural networks in the neuron classification.

    PubMed

    Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T

    2015-04-07

    This study performs topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% and 14.1% of central and border neurons, respectively, are classified correctly, with a total of 32.8% of neurons misclassified. Most importantly, 62.2% of the neurons in the border group are misclassified, which exceeds the number of correctly classified neurons (37.8%) in that group, showing a clear failure of the network to classify these neurons based on the computational parameters used in our study. On the virtual sample, 97.3% of the neurons in the border group are misclassified, far exceeding the number correctly classified (2.7%) in that group, again confirming this failure. Statistical analysis shows no statistically significant difference between central and border neurons for any measured parameter (p>0.05). In total, 96.74% of neurons are morphologically classified correctly by neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, and (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05). Human dentate nucleus neurons can therefore be classified into four types according to their quantitative histomorphological properties. These types comprise two sets, small and large with respect to their perikarya, with subtypes differing in dendrite length, i.e., neurons with short vs. long dendrites. Besides confirming the classification into small and large neurons already reported in the literature, we found two new subtypes, i.e., neurons with small soma and long dendrites and neurons with large soma and short dendrites. These neurons are most probably equally distributed throughout the dentate nucleus, as no significant difference in their topological distribution is observed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Connectomic constraints on computation in feedforward networks of spiking neurons.

    PubMed

    Ramaswamy, Venkatakrishnan; Banerjee, Arunava

    2014-10-01

    Several efforts are currently underway to decipher the connectome or parts thereof in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is out of the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints-that arise by virtue of the connectome-connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model which satisfies a small number of properties, we ask if just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form, for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network, before we can discern such results for other classes of networks.

  16. Clique of Functional Hubs Orchestrates Population Bursts in Developmentally Regulated Neural Networks

    PubMed Central

    Luccioli, Stefano; Ben-Jacob, Eshel; Barzilai, Ari; Bonifazi, Paolo; Torcini, Alessandro

    2014-01-01

    It has recently been discovered that single neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain in neuronal circuits, at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate population activity. PMID:25255443

  17. Revealing degree distribution of bursting neuron networks.

    PubMed

    Shen, Yu; Hou, Zhonghuai; Xin, Houwen

    2010-03-01

    We present a method to infer the degree distribution of a bursting neuron network from its dynamics. Burst synchronization (BS) of coupled Morris-Lecar neurons has been studied under the weak coupling condition. In the BS state, all the neurons start and end bursting almost simultaneously, while the spikes inside the burst are incoherent among the neurons. Interestingly, we find that the spike amplitude of a given neuron shows an excellent linear relationship with its degree, which makes it possible to estimate the degree distribution of the network by simple statistics of the spike amplitudes. We demonstrate the validity of this scheme on scale-free as well as small-world networks. The underlying mechanism of such a method is also briefly discussed.
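    The inference step, reading node degrees off spike amplitudes through the reported linear relationship, can be sketched with synthetic data. The degree distribution, the slope and intercept of the amplitude-degree line, and the noise level are all invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical degrees for 300 neurons (the paper studies scale-free and
# small-world networks; a uniform draw is used here purely for brevity).
degrees = rng.integers(1, 40, size=300)

# The paper reports spike amplitude varying linearly with degree; this
# slope and intercept are made-up illustration values, with small noise.
alpha, beta = 12.0, -0.15
amplitudes = alpha + beta * degrees + rng.normal(0, 0.01, size=degrees.size)

# Calibrate the linear map on a few neurons of known degree,
# then invert it to estimate every neuron's degree from its amplitude.
known = slice(0, 20)
b_fit, a_fit = np.polyfit(degrees[known], amplitudes[known], 1)
estimated = np.rint((amplitudes - a_fit) / b_fit).astype(int)
```

    Once the linear map is calibrated, simple statistics of the spike amplitudes recover the degree of nearly every neuron, and hence the degree distribution, which is the essence of the proposed scheme.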

  18. Recording axonal conduction to evaluate the integration of pluripotent cell-derived neurons into a neuronal network.

    PubMed

    Shimba, Kenta; Sakai, Koji; Takayama, Yuzo; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-10-01

    Stem cell transplantation is a promising therapy to treat neurodegenerative disorders, and a number of in vitro models have been developed for studying interactions between grafted neurons and the host neuronal network to promote drug discovery. However, methods capable of evaluating the process by which stem cells integrate into the host neuronal network are lacking. In this study, we applied an axonal conduction-based analysis to a co-culture study of primary and differentiated neurons. Mouse cortical neurons and neuronal cells differentiated from P19 embryonal carcinoma cells, a model for early neural differentiation of pluripotent stem cells, were co-cultured in a microfabricated device. The somata of these cells were separated by the co-culture device, but their axons were able to elongate through microtunnels and then form synaptic contacts. Propagating action potentials were recorded from these axons by microelectrodes embedded at the bottom of the microtunnels and sorted into clusters representing individual axons. While the number of axons of cortical neurons increased until 14 days in vitro and then decreased, those of P19 neurons increased throughout the culture period. Network burst analysis showed that P19 neurons participated in approximately 80% of the bursting activity after 14 days in vitro. Interestingly, the axonal conduction delay of P19 neurons was significantly greater than that of cortical neurons, suggesting that there are some physiological differences in their axons. These results suggest that our method is feasible to evaluate the process by which stem cell-derived neurons integrate into a host neuronal network.
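    The conduction measurements described here rest on estimating a propagation delay between electrodes along a microtunnel; a common way to do that, sketched below with a synthetic waveform and a made-up 0.5 ms delay, is to take the lag that maximizes the cross-correlation of the two signals:

```python
import numpy as np

def conduction_delay(sig_a, sig_b, dt):
    """Estimate how much sig_b lags sig_a as the lag (in time units)
    that maximizes their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(corr.argmax()) - (len(sig_a) - 1)
    return lag * dt

# Synthetic spike waveform seen at two tunnel electrodes, the second a
# delayed copy; the Gaussian shape and 0.5 ms delay are made-up values.
dt = 0.05                                    # ms per sample
t = np.arange(0, 20, dt)
spike = np.exp(-((t - 5.0) / 0.3) ** 2)
delayed = np.exp(-((t - 5.5) / 0.3) ** 2)
delay = conduction_delay(spike, delayed, dt)
```

    Comparing such delay estimates between axon populations is how one could quantify the reported difference between P19-derived and cortical axons.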

  19. Speed and segmentation control mechanisms characterized in rhythmically-active circuits created from spinal neurons produced from genetically-tagged embryonic stem cells

    PubMed Central

    Sternfeld, Matthew J; Hinckley, Christopher A; Moore, Niall J; Pankratz, Matthew T; Hilde, Kathryn L; Driscoll, Shawn P; Hayashi, Marito; Amin, Neal D; Bonanomi, Dario; Gifford, Wesley D; Sharma, Kamal; Goulding, Martyn; Pfaff, Samuel L

    2017-01-01

    Flexible neural networks, such as the interconnected spinal neurons that control distinct motor actions, can switch their activity to produce different behaviors. Both excitatory (E) and inhibitory (I) spinal neurons are necessary for motor behavior, but the influence of recruiting different ratios of E-to-I cells remains unclear. We constructed synthetic microphysical neural networks, called circuitoids, using precise combinations of spinal neuron subtypes derived from mouse stem cells. Circuitoids of purified excitatory interneurons were sufficient to generate oscillatory bursts with properties similar to in vivo central pattern generators. Inhibitory V1 neurons provided dual layers of regulation within excitatory rhythmogenic networks - they increased the rhythmic burst frequency of excitatory V3 neurons, and segmented excitatory motor neuron activity into sub-networks. Accordingly, the speed and pattern of spinal circuits that underlie complex motor behaviors may be regulated by quantitatively gating the intra-network cellular activity ratio of E-to-I neurons. DOI: http://dx.doi.org/10.7554/eLife.21540.001 PMID:28195039

  20. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    PubMed

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. 
These results are robust and do not depend on the state of network dynamics, and hold equally well for mean-driven and fluctuation-driven regimes of activity.
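    The suppression of the common mode and amplification of modulation can be illustrated in the simplest linear setting: uniform, untuned inhibition leaves the tuned (cosine) component of the input untouched while dividing the untuned mean by 1 + w, so the output modulation depth grows by exactly that factor. Rectification and the detailed topology treated in the paper are omitted here, and the numbers are illustrative:

```python
import numpy as np

N, w = 200, 5.0                # neurons and total inhibitory weight (made up)
theta = np.linspace(0, np.pi, N, endpoint=False)
h = 1.0 + 0.2 * np.cos(2 * (theta - 0.3))    # weakly tuned feedforward input

# Uniform untuned inhibition: every neuron receives -(w/N) * sum(r).
W = -(w / N) * np.ones((N, N))
r = np.linalg.solve(np.eye(N) - W, h)        # steady state of r = h + W r

# Modulation depth = (peak-to-trough amplitude) / (2 * mean).
in_depth = (h.max() - h.min()) / (2 * h.mean())
out_depth = (r.max() - r.min()) / (2 * r.mean())
```

    The cosine mode passes through unchanged while the mean is divided by 1 + w, so the output tuning depth is amplified by a factor 1 + w; the paper's full theory extends this picture to random connectivity and the rectifying nonlinearity.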

  1. Orientation Selectivity in Inhibition-Dominated Networks of Spiking Neurons: Effect of Single Neuron Properties and Network Dynamics

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. 
These results are robust and do not depend on the state of network dynamics, and hold equally well for mean-driven and fluctuation-driven regimes of activity. PMID:25569445

  2. Widespread receptivity to neuropeptide PDF throughout the neuronal circadian clock network of Drosophila revealed by real-time cyclic AMP imaging.

    PubMed

    Shafer, Orie T; Kim, Dong Jo; Dunbar-Yaffe, Richard; Nikolaev, Viacheslav O; Lohse, Martin J; Taghert, Paul H

    2008-04-24

    The neuropeptide PDF is released by sixteen clock neurons in Drosophila and helps maintain circadian activity rhythms by coordinating a network of approximately 150 neuronal clocks. Whether PDF acts directly on elements of this neural network remains unknown. We address this question by adapting Epac1-camps, a genetically encoded cAMP FRET sensor, for use in the living brain. We find that a subset of the PDF-expressing neurons respond to PDF with long-lasting cAMP increases and confirm that such responses require the PDF receptor. In contrast, an unrelated Drosophila neuropeptide, DH31, stimulates large cAMP increases in all PDF-expressing clock neurons. Thus, the network of approximately 150 clock neurons displays widespread, though not uniform, PDF receptivity. This work introduces a sensitive means of measuring cAMP changes in a living brain with subcellular resolution. Specifically, it experimentally confirms the longstanding hypothesis that PDF is a direct modulator of most neurons in the Drosophila clock network.

  3. Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris

    2015-11-01

    Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. Particularly, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations to exhibit transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and the coupling connections between neurons in determining the firing patterns and the dynamical activities of the basal ganglia neuronal network, we propose a hypothesis that pathological behaviors under the Parkinsonian state may originate from combined effects of intrinsic properties of globus pallidus neurons and synaptic conductances in the whole neuronal network. In order to establish a computationally efficient network model, the hybrid Izhikevich neuron model is used due to its capacity to capture the dynamical characteristics of biological neuronal activity. Detailed analysis of the individual Izhikevich neuron model can assist in understanding the roles of model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model, and contributes to a further exploration of the underlying mechanisms of the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations of the firing rates and emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuronal model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore normal and abnormal basal ganglia function. 
    In particular, it provides an efficient way of emulating large-scale neuronal networks and may contribute to the development of improved therapies for neurological disorders such as Parkinson's disease.
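    The two-dimensional Izhikevich model at the center of this network study is compact enough to state directly. The regular-spiking parameters (a, b, c, d) below are Izhikevich's published values; the input current, duration, and Euler step are arbitrary choices for this sketch:

```python
def izhikevich(a, b, c, d, I, T=1000.0, dt=0.25):
    """Izhikevich (2003) two-variable neuron, Euler-integrated:
    v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
    with reset v <- c, u <- u + d whenever v reaches 30 mV."""
    v, u = -65.0, b * (-65.0)
    spikes = []
    for t in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: record time, reset membrane,
            spikes.append(t * dt)  # and bump the recovery variable
            v, u = c, u + d
    return spikes

# Regular-spiking parameters from Izhikevich's paper; I is arbitrary.
spikes = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0)
```

    With this drive the model has no fixed point, so it fires tonically with spike-frequency adaptation, the regular-spiking behavior typically assumed for cortical cells; networks like the basal ganglia-thalamic model above couple many such units through synaptic currents added to I.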

  4. Selective MBE growth of hexagonal networks of trapezoidal and triangular GaAs nanowires on patterned (1 1 1)B substrates

    NASA Astrophysics Data System (ADS)

    Tamai, Isao; Hasegawa, Hideki

    2007-04-01

    As a combination of novel hardware architecture and novel system architecture for future ultrahigh-density III-V nanodevice LSIs, the authors' group has recently proposed a hexagonal binary decision diagram (BDD) quantum circuit approach where gate-controlled path switching BDD node devices for a single or few electrons are laid out on a hexagonal nanowire network to realize a logic function. In this paper, attempts are made to establish a method to grow highly dense hexagonal nanowire networks for future BDD circuits by selective molecular beam epitaxy (MBE) on (1 1 1)B substrates. The (1 1 1)B orientation is suitable for BDD architecture because of the basic three-fold symmetry of the BDD node device. The growth experiments showed complex evolution of the cross-sectional structures, and it was explained in terms of kinetics determining facet boundaries. Straight arrays of triangular nanowires with 60 nm base width as well as hexagonal arrays of trapezoidal nanowires with a node density of 7.5×10^6 cm^-2 were successfully grown with the aid of computer simulation. The result shows feasibility of growing high-density hexagonal networks of GaAs nanowires with precise control of the shape and size.

  5. Qualitative validation of the reduction from two reciprocally coupled neurons to one self-coupled neuron in a respiratory network model.

    PubMed

    Dunmyre, Justin R

    2011-06-01

    The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.

  6. Structure-function analysis of genetically defined neuronal populations.

    PubMed

    Groh, Alexander; Krieger, Patrik

    2013-10-01

    Morphological and functional classification of individual neurons is a crucial aspect of the characterization of neuronal networks. Systematic structural and functional analysis of individual neurons is now possible using transgenic mice with genetically defined neurons that can be visualized in vivo or in brain slice preparations. Genetically defined neurons are useful for studying a particular class of neurons and also for more comprehensive studies of the neuronal content of a network. Specific subsets of neurons can be identified by fluorescence imaging of enhanced green fluorescent protein (eGFP) or another fluorophore expressed under the control of a cell-type-specific promoter. The advantages of such genetically defined neurons are not only their homogeneity and suitability for systematic descriptions of networks, but also their tremendous potential for cell-type-specific manipulation of neuronal networks in vivo. This article describes a selection of procedures for visualizing and studying the anatomy and physiology of genetically defined neurons in transgenic mice. We provide information about basic equipment, reagents, procedures, and analytical approaches for obtaining three-dimensional (3D) cell morphologies and determining the axonal input and output of genetically defined neurons. We exemplify with genetically labeled cortical neurons, but the procedures are applicable to other brain regions with little or no alterations.

  7. Developmental time windows for axon growth influence neuronal network topology.

    PubMed

    Lim, Sol; Kaiser, Marcus

    2015-04-01

    Early brain connectivity development consists of multiple stages: birth of neurons, their migration, and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on the type of neuron and cortical layer. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately, one neuron after another (serial, i.e., no overlap in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: First, neurons that started axon growth early on in serial growth achieved higher out-degrees, higher local efficiency, and longer axon lengths, while neurons showed more homogeneous connectivity patterns for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks, opening up the possibility of estimating, a posteriori, developmental mechanisms from the network properties of the developed network.

  8. Spiking Neurons for Analysis of Patterns

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which, among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers.
The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological neurons). These features enable the neurons to adapt their responses to high-rate inputs from sensors, and to adapt their firing thresholds to mitigate noise or effects of potential sensor failure. The mathematical derivation of the SVM starts from a prior model, known in the art as the point soma model, which captures all of the salient properties of neuronal response while keeping the computational cost low. The point-soma latency time is modified to be an exponentially decaying function of the strength of the applied potential. In a choice of computational efficiency over biological fidelity, the dendrites surrounding a neuron are represented by simplified compartmental submodels, and there are no dendritic spines. Updates to the dendritic potential, calcium-ion concentrations and conductances, and potassium-ion conductances are done by use of equations similar to those of the point soma. Diffusion processes in dendrites are modeled by averaging among nearest-neighbor compartments. Inputs to each of the dendritic compartments come from sensors. Alternatively or in addition, when an affected neuron is part of a pool, inputs can come from other spiking neurons. At present, SVM neural networks are implemented by computational simulation, using algorithms that encode the SVM and its submodels. However, it should be possible to implement these neural networks in hardware: The differential equations for the dendritic and cellular processes in the SVM model of spiking neurons map to equivalent circuits that can be implemented directly in analog very-large-scale integrated (VLSI) circuits.
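    The nearest-neighbor averaging used for dendritic diffusion can be illustrated with a minimal sketch. The rate parameter and boundary handling below are hypothetical; this illustrates only the averaging idea, not the SVM implementation itself:

```python
import numpy as np

def diffuse(potentials, rate=0.5, steps=1):
    """Relax compartment potentials toward nearest-neighbor averages.

    Hypothetical stand-in for dendritic diffusion: each step, every
    compartment moves a fraction `rate` of the way toward the mean of
    its two neighbors. Edge padding gives zero-flux boundaries.
    """
    p = np.asarray(potentials, dtype=float)
    for _ in range(steps):
        padded = np.pad(p, 1, mode="edge")
        neighbor_mean = 0.5 * (padded[:-2] + padded[2:])
        p = (1.0 - rate) * p + rate * neighbor_mean
    return p

# A localized depolarization spreads out along the compartment chain
p = diffuse([0.0, 0.0, 1.0, 0.0, 0.0], rate=0.5, steps=3)
```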

  9. Reliability and synchronization in a delay-coupled neuronal network with synaptic plasticity

    NASA Astrophysics Data System (ADS)

    Pérez, Toni; Uchida, Atsushi

    2011-06-01

    We investigate the characteristics of reliability and synchronization of a neuronal network of delay-coupled integrate-and-fire neurons. Reliability and synchronization appear in separate regions of the phase space of the parameters considered. The effects of including synaptic plasticity and of different delay values between the connections are also considered. We found that plasticity strongly changes the characteristics of reliability and synchronization in the parameter space of the coupling strength and the drive amplitude for the neuronal network. We also found that delay does not affect the reliability of the network but has a decisive influence on the synchronization of the neurons.

  10. Reducing Neuronal Networks to Discrete Dynamics

    PubMed Central

    Terman, David; Ahn, Sungwoo; Wang, Xueying; Just, Winfried

    2008-01-01

    We consider a general class of purely inhibitory and excitatory-inhibitory neuronal networks, with a general class of network architectures, and characterize the complex firing patterns that emerge. Our strategy for studying these networks is to first reduce them to a discrete model. In the discrete model, each neuron is represented by a finite set of states, and there are rules for how a neuron transitions from one state to another. In this paper, we rigorously demonstrate that the continuous neuronal model can be reduced to the discrete model if the intrinsic and synaptic properties of the cells are chosen appropriately. In a companion paper [1], we analyze the discrete model. PMID:18443649
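    A toy version of such a discrete reduction can be written down directly. The transition rules below are hypothetical (the paper's actual rules depend on the intrinsic and synaptic properties chosen), but they show the flavor: each neuron cycles through a fixed number of states, state 0 means "firing", and a non-firing neuron that receives inhibition from a firing neighbor is held in place:

```python
def step(states, adj, period=4):
    """One update of a toy discrete inhibitory network.

    states[i] in {0, ..., period-1}; 0 means neuron i is firing.
    adj[i] lists the neurons that inhibit neuron i. A non-firing
    neuron with a firing inhibitor is held; everyone else advances.
    (Hypothetical rules, for illustration only.)
    """
    firing = [s == 0 for s in states]
    new_states = []
    for i, s in enumerate(states):
        held = (not firing[i]) and any(firing[j] for j in adj[i])
        new_states.append(s if held else (s + 1) % period)
    return new_states

# Two mutually inhibitory neurons settle into anti-phase firing
adj = [[1], [0]]
states = [0, 2]
history = [states]
for _ in range(20):
    states = step(states, adj)
    history.append(states)
```

Iterating the map on this two-neuron motif yields a periodic orbit in which the neurons never fire simultaneously, the discrete analogue of half-center antiphase bursting.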

  11. A principle of economy predicts the functional architecture of grid cells

    PubMed Central

    Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay

    2015-01-01

    Grid cells in the brain respond when an animal occupies a periodic lattice of ‘grid fields’ during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths. DOI: http://dx.doi.org/10.7554/eLife.08362.001 PMID:26335200
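    The √e prediction can be checked numerically. Under the economy argument, if adjacent modules keep a fixed scale ratio r, the neuron count per factor of resolution scales like r^d / ln r in d dimensions (our reading of the abstract's hierarchical-code argument, stated here as an assumption); minimizing this recovers r = e^(1/d), i.e. e in 1D and √e ≈ 1.65 in 2D:

```python
import math

def optimal_ratio(d, lo=1.01, hi=4.0, n=40001):
    """Grid-search minimizer of f(r) = r**d / ln(r), the assumed
    neuron-cost factor per unit of log resolution in d dimensions."""
    best_r, best_f = None, float("inf")
    for k in range(n):
        r = lo + (hi - lo) * k / (n - 1)
        f = r ** d / math.log(r)
        if f < best_f:
            best_r, best_f = r, f
    return best_r

r1 = optimal_ratio(1)   # close to e ~ 2.718
r2 = optimal_ratio(2)   # close to sqrt(e) ~ 1.649
```

Setting df/dr = 0 gives ln r = 1/d analytically, matching prediction (iii); the measured rodent ratios of 1.4 to 1.7 bracket the 2D optimum.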

  12. Effects of channel noise on firing coherence of small-world Hodgkin-Huxley neuronal networks

    NASA Astrophysics Data System (ADS)

    Sun, X. J.; Lei, J. Z.; Perc, M.; Lu, Q. S.; Lv, S. J.

    2011-01-01

    We investigate the effects of channel noise on the firing coherence of Watts-Strogatz small-world networks consisting of biophysically realistic Hodgkin-Huxley neurons that have a fraction of blocked voltage-gated sodium and potassium ion channels embedded in their neuronal membranes. The intensity of channel noise is determined by the number of non-blocked ion channels, which depends on the fraction of working ion channels and on the membrane patch size under the assumption of homogeneous ion channel density. We find that the firing coherence of the neuronal network can be either enhanced or reduced depending on the source of channel noise. As shown in this paper, sodium channel noise reduces the firing coherence of neuronal networks; in contrast, potassium channel noise enhances it. Furthermore, compared with potassium channel noise, sodium channel noise plays a dominant role in affecting the firing coherence of the neuronal network. Moreover, we find that the observed phenomena are independent of the rewiring probability.

  13. Study on algorithm of process neural network for soft sensing in sewage disposal system

    NASA Astrophysics Data System (ADS)

    Liu, Zaiwen; Xue, Hong; Wang, Xiaoyi; Yang, Bin; Lu, Siying

    2006-11-01

    This paper presents a new soft-sensing method for sewage treatment systems based on a process neural network (PNN). The PNN is an extension of the traditional neural network in which the inputs and outputs are time-varying. An aggregation operator is introduced into the process neuron, giving the network the ability to handle spatial and temporal information simultaneously, so that the data-processing machinery of the biological neuron is imitated better than by a traditional neuron. A three-layer process neural network for soft sensing is discussed, in which the hidden layer consists of process neurons and the input and output layers consist of common neurons. The intelligent soft sensing based on the PNN may be used to measure the effluent BOD (Biochemical Oxygen Demand) of a sewage treatment system, and a good training result was obtained with this method.
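    The aggregation-over-time idea can be sketched as a neuron that weights a time-varying input by a time-varying weight function, integrates over the time window, and then applies its activation. This is a hypothetical minimal formulation; the paper's exact operator and network structure are not specified in the abstract:

```python
import numpy as np

def process_neuron(x_t, w_t, dt, bias=0.0):
    """Hypothetical process neuron: y = sigmoid( integral of w(t)*x(t) dt + bias ).

    x_t and w_t are samples of the input and weight functions on a
    uniform time grid with spacing dt.
    """
    s = float(np.sum(np.asarray(w_t) * np.asarray(x_t)) * dt) + bias
    return 1.0 / (1.0 + np.exp(-s))

t = np.linspace(0.0, 1.0, 101)
# A zero-mean input over the window integrates to ~0, giving y ~ 0.5
y = process_neuron(np.sin(2 * np.pi * t), np.ones_like(t), dt=0.01)
```

Training such a network amounts to fitting the sampled weight functions w(t) of each process neuron, rather than scalar weights as in a conventional multilayer perceptron.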

  14. A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.

    PubMed

    Hightower, M; Gross, G W

    1985-11-01

    Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.

  15. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons

    PubMed Central

    Setareh, Hesam; Deger, Moritz; Petersen, Carl C. H.; Gerstner, Wulfram

    2017-01-01

    Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both wiring and synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons which we call weight-hub neurons, that are characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contain assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as it likely is present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly. PMID:28690508

  16. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network

    PubMed Central

    Malvaut, Sarah; Saghatelyan, Armen

    2016-01-01

    The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or by synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB), where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB, and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour. PMID:26839709

  17. The frequency preference of neurons and synapses in a recurrent oscillatory network.

    PubMed

    Tseng, Hua-an; Martinez, Diana; Nadim, Farzan

    2014-09-17

    A variety of neurons and synapses shows a maximal response at a preferred frequency, generally considered to be important in shaping network activity. We are interested in whether all neurons and synapses in a recurrent oscillatory network can have preferred frequencies and, if so, whether these frequencies are the same or correlated, and whether they influence the network activity. We address this question using identified neurons in the pyloric network of the crab Cancer borealis. Previous work has shown that the pyloric pacemaker neurons exhibit membrane potential resonance whose resonance frequency is correlated with the network frequency. The follower lateral pyloric (LP) neuron makes reciprocally inhibitory synapses with the pacemakers. We find that LP shows resonance at a higher frequency than the pacemakers and the network frequency falls between the two. We also find that the reciprocal synapses between the pacemakers and LP have preferred frequencies but at significantly lower values. The preferred frequency of the LP to pacemaker synapse is correlated with the presynaptic preferred frequency, which is most pronounced when the peak voltage of the LP waveform is within the dynamic range of the synaptic activation curve and a shift in the activation curve by the modulatory neuropeptide proctolin shifts the frequency preference. Proctolin also changes the power of the LP neuron resonance without significantly changing the resonance frequency. These results indicate that different neuron types and synapses in a network may have distinct preferred frequencies, which are subject to neuromodulation and may interact to shape network oscillations.

  18. On the Dynamics of the Spontaneous Activity in Neuronal Networks

    PubMed Central

    Bonifazi, Paolo; Ruaro, Maria Elisabetta; Torre, Vincent

    2007-01-01

    Most neuronal networks, even in the absence of external stimuli, produce spontaneous bursts of spikes separated by periods of reduced activity. The origin and functional role of these neuronal events are still unclear. The present work shows that the spontaneous activity of two very different networks, intact leech ganglia and dissociated cultures of rat hippocampal neurons, share several features. Indeed, in both networks: i) the inter-spike intervals distribution of the spontaneous firing of single neurons is either regular or periodic or bursting, with the fraction of bursting neurons depending on the network activity; ii) bursts of spontaneous spikes have the same broad distributions of size and duration; iii) the degree of correlated activity increases with the bin width, and the power spectrum of the network firing rate has a 1/f behavior at low frequencies, indicating the existence of long-range temporal correlations; iv) the activity of excitatory synaptic pathways mediated by NMDA receptors is necessary for the onset of the long-range correlations and for the presence of large bursts; v) blockage of inhibitory synaptic pathways mediated by GABAA receptors causes instead an increase in the correlation among neurons and leads to a burst distribution composed only of very small and very large bursts. These results suggest that the spontaneous electrical activity in neuronal networks with different architectures and functions can have very similar properties and common dynamics. PMID:17502919

  19. Numbers And Gains Of Neurons In Winner-Take-All Networks

    NASA Technical Reports Server (NTRS)

    Brown, Timothy X.

    1993-01-01

    Report presents a theoretical study of the gains required of neurons to implement a winner-take-all electronic neural network of a given size, and of the related question of the maximum size of a winner-take-all network in which the neurons have a specified sigmoid transfer (response) function with a specified gain.

  20. Simultaneous stability and sensitivity in model cortical networks is achieved through anti-correlations between the in- and out-degree of connectivity

    PubMed Central

    Vasquez, Juan C.; Houweling, Arthur R.; Tiesinga, Paul

    2013-01-01

    Neuronal networks in rodent barrel cortex are characterized by stable low baseline firing rates. However, they are sensitive to the action potentials of single neurons as suggested by recent single-cell stimulation experiments that reported quantifiable behavioral responses in response to short spike trains elicited in single neurons. Hence, these networks are stable against internally generated fluctuations in firing rate but at the same time remain sensitive to similarly-sized externally induced perturbations. We investigated stability and sensitivity in a simple recurrent network of stochastic binary neurons and determined numerically the effects of correlation between the number of afferent (“in-degree”) and efferent (“out-degree”) connections in neurons. The key advance reported in this work is that anti-correlation between in-/out-degree distributions increased the stability of the network in comparison to networks with no correlation or positive correlations, while being able to achieve the same level of sensitivity. The experimental characterization of degree distributions is difficult because all pre-synaptic and post-synaptic neurons have to be identified and counted. We explored whether the statistics of network motifs, which requires the characterization of connections between small subsets of neurons, could be used to detect evidence for degree anti-correlations. We find that the sample frequency of the 3-neuron “ring” motif (1→2→3→1) can be used to detect degree anti-correlation for sub-networks of size 30 using about 50 samples, which is of significance because the necessary measurements are achievable experimentally in the near future. Taken together, we hypothesize that barrel cortex networks exhibit degree anti-correlations and specific network motif statistics. PMID:24223550
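    Counting the 3-neuron ring motif in a measured connectivity matrix is straightforward: with a binary adjacency matrix A and no self-connections, each directed ring 1→2→3→1 contributes three closed walks of length 3 (one per starting neuron), so the count is trace(A³)/3. A sketch:

```python
import numpy as np

def ring_motif_count(adj):
    """Number of directed 3-neuron rings (i -> j -> k -> i) in a binary
    adjacency matrix with zero diagonal. Each ring appears as a closed
    length-3 walk from each of its 3 neurons, hence the division by 3."""
    A = np.asarray(adj)
    return int(np.trace(A @ A @ A)) // 3

ring = [[0, 1, 0],
        [0, 0, 1],
        [1, 0, 0]]          # one ring: 0 -> 1 -> 2 -> 0
feedforward = [[0, 1, 1],
               [0, 0, 1],
               [0, 0, 0]]   # a triangle, but no directed ring
```

Comparing the observed ring count against the expectation under a degree-matched null model is then the test the authors propose for detecting in-/out-degree anti-correlation.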

  1. Numerical design and optimization of hydraulic resistance and wall shear stress inside pressure-driven microfluidic networks.

    PubMed

    Damiri, Hazem Salim; Bardaweel, Hamzeh Khalid

    2015-11-07

    Microfluidic networks are a cornerstone of microfluidic devices. Recent advancements in microfluidic technologies mandate complex designs where both hydraulic resistance and pressure drop across the microfluidic network are minimized, while wall shear stress is precisely mapped throughout the network. In this work, a combination of theoretical and modeling techniques is used to construct a microfluidic network that operates under minimum hydraulic resistance and minimum pressure drop while constraining wall shear stress throughout the network. The results show that in order to minimize the hydraulic resistance and pressure drop throughout the network while maintaining constant wall shear stress throughout the network, geometric and shape conditions related to the compactness and aspect ratio of the parent and daughter branches must be followed. Also, results suggest that while a "local" minimum hydraulic resistance can be achieved for a geometry with an arbitrary aspect ratio, a "global" minimum hydraulic resistance occurs only when the aspect ratio of that geometry is set to unity. Thus, it is concluded that square and equilateral triangular cross-sectional area microfluidic networks have the least resistance compared to all rectangular and isosceles triangular cross-sectional microfluidic networks, respectively. Precise control over wall shear stress through the bifurcations of the microfluidic network is demonstrated in this work. Three multi-generation microfluidic network designs are considered. In these three designs, wall shear stress in the microfluidic network is successfully kept constant, increased in the daughter-branch direction, or decreased in the daughter-branch direction, respectively.
For the multi-generation microfluidic network with constant wall shear stress, the design guidelines presented in this work result in identical profiles of wall shear stresses not only within a single generation but also through all the generations of the microfluidic network under investigation. The results obtained in this work are consistent with previously reported data and suitable for a wide range of lab-on-chip applications.
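    The "global minimum at aspect ratio unity" claim for rectangular channels can be reproduced with the standard Fourier-series solution for pressure-driven flow in a rectangular duct (as found in microfluidics texts; the fluid and channel values below are illustrative, not the paper's):

```python
import math

def rect_resistance(width, height, mu=1e-3, length=1.0, n_terms=50):
    """Hydraulic resistance (Pa*s/m^3) of a rectangular channel from the
    exact series solution; mu is the dynamic viscosity (water here).
    The common 12*mu*L/(w*h^3*(1 - 0.63*h/w)) formula is the n=1
    truncation of this series."""
    h, w = min(width, height), max(width, height)   # convention: h <= w
    series = sum(
        192.0 * h / (math.pi ** 5 * n ** 5 * w) *
        math.tanh(n * math.pi * w / (2.0 * h))
        for n in range(1, 2 * n_terms, 2)           # odd n only
    )
    return 12.0 * mu * length / (w * h ** 3 * (1.0 - series))

# Fix the cross-sectional area and sweep the aspect ratio h/w:
area = 1e-8                                         # e.g. 100 um x 100 um
ratios = [0.2, 0.4, 0.6, 0.8, 1.0]
R = [rect_resistance(math.sqrt(area / a), math.sqrt(area * a))
     for a in ratios]
# Resistance decreases monotonically toward the square channel (ratio 1)
```

Note that the exact series is needed here: the popular one-term approximation is inaccurate near a square cross-section and would misplace the minimum.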

  2. GaAs Optoelectronic Integrated-Circuit Neurons

    NASA Technical Reports Server (NTRS)

    Lin, Steven H.; Kim, Jae H.; Psaltis, Demetri

    1992-01-01

    Monolithic GaAs optoelectronic integrated circuits developed for use as artificial neurons. Neural-network computer contains planar arrays of optoelectronic neurons, and variable synaptic connections between neurons effected by diffraction of light from volume hologram in photorefractive material. Basic principles of neural-network computers explained more fully in "Optoelectronic Integrated Circuits For Neural Networks" (NPO-17652). In present circuits, devices replaced by metal/semiconductor field effect transistors (MESFET's), which consume less power.

  3. Sustained synchronized neuronal network activity in a human astrocyte co-culture system

    PubMed Central

    Kuijlaars, Jacobine; Oyelami, Tutu; Diels, Annick; Rohrbacher, Jutta; Versweyveld, Sofie; Meneghello, Giulia; Tuefferd, Marianne; Verstraelen, Peter; Detrez, Jan R.; Verschuuren, Marlies; De Vos, Winnok H.; Meert, Theo; Peeters, Pieter J.; Cik, Miroslav; Nuydens, Rony; Brône, Bert; Verheyen, An

    2016-01-01

    Impaired neuronal network function is a hallmark of neurodevelopmental and neurodegenerative disorders such as autism, schizophrenia, and Alzheimer’s disease and is typically studied using genetically modified cellular and animal models. Weak predictive capacity and poor translational value of these models urge the development of better human-derived in vitro models. The implementation of human induced pluripotent stem cells (hiPSCs) allows studying pathologies in differentiated disease-relevant and patient-derived neuronal cells. However, the differentiation process and growth conditions of hiPSC-derived neurons are non-trivial. In order to study neuronal network formation and (mal)function in a fully humanized system, we have established an in vitro co-culture model of hiPSC-derived cortical neurons and human primary astrocytes that recapitulates neuronal network synchronization and connectivity within three to four weeks after final plating. Live cell calcium imaging, electrophysiology and high content image analyses revealed an increased maturation of network functionality and synchronicity over time for co-cultures compared to neuronal monocultures. The cells express GABAergic and glutamatergic markers and respond to inhibitors of both neurotransmitter pathways in a functional assay. The combination of this co-culture model with quantitative imaging of network morphofunction is amenable to high throughput screening for lead discovery and drug optimization for neurological diseases. PMID:27819315

  4. Leader neurons in leaky integrate and fire neural network simulations.

    PubMed

    Zbinden, Cyrille

    2011-10-01

    In this paper, we highlight the topological properties of leader neurons, whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively, network spike) more often than expected by chance given its mean firing rate. This means that leader neurons have some burst-triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate 2D neural networks. To build our simulations, we choose the leaky integrate and fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model produces stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Beyond these first two properties, the conditions required for a neuron to be a leader are difficult to identify and seem to depend on several parameters of the simulations themselves. However, a detailed linear analysis reveals a trend in the properties required for a neuron to be a leader. Our main finding is that a leader neuron sends signals to many excitatory neurons and to few inhibitory neurons, while receiving signals from only a few other excitatory neurons. Our linear analysis identifies five essential properties of leader neurons, each with a different relative importance. This means that, for a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of predicting which neurons are good leader neurons and which are not. Our prediction formula correctly assesses leadership for at least ninety percent of the neurons.
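    The finding that leaders have a low membrane-potential firing threshold has a simple single-cell correlate: under the same common drive, the lower-threshold leaky integrate-and-fire membrane reaches threshold first. A minimal sketch with illustrative parameter values (not the paper's simulation):

```python
def first_spike_time(threshold, tau=20.0, v_rest=-65.0, drive=20.0,
                     dt=0.1, t_max=500.0):
    """Leaky integrate-and-fire membrane under constant suprathreshold
    drive; returns the first spike time in ms, or None if no spike.
    Dynamics: dv/dt = (-(v - v_rest) + drive) / tau, drive in mV."""
    v = v_rest
    for step in range(int(t_max / dt)):
        v += dt / tau * (-(v - v_rest) + drive)
        if v >= threshold:
            return step * dt
    return None

# Under identical drive, the lower-threshold cell fires earlier,
# i.e. it is the better burst-onset (leader) candidate.
early = first_spike_time(threshold=-55.0)
late = first_spike_time(threshold=-50.0)
```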

  5. Hierarchical self-assembly of actin in micro-confinements using microfluidics

    PubMed Central

    Deshpande, Siddharth; Pfohl, Thomas

    2012-01-01

    We present a straightforward microfluidics system to achieve step-by-step reaction sequences in a diffusion-controlled manner in quasi two-dimensional micro-confinements. We demonstrate the hierarchical self-organization of actin (actin monomers—entangled networks of filaments—networks of bundles) in a reversible fashion by tuning the Mg2+ ion concentration in the system. We show that actin can form networks of bundles in the presence of Mg2+ without any cross-linking proteins. The properties of these networks are influenced by the confinement geometry. In square microchambers we predominantly find rectangular networks, whereas triangular meshes are predominantly found in circular chambers. PMID:24032070

  6. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.
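
    The notion of a neuron "temperature" can be sketched as a gain term in a sigmoid activation. This is a hedged illustration of the general idea; the exact formalism in the report may differ. Lowering the temperature sharpens the neuron's response toward a step function, while raising it flattens the response.

```python
import math

def activation(net_input, temperature):
    # Boltzmann-style sigmoid whose steepness is set by a per-neuron
    # temperature: high temperature -> shallow, nearly linear response;
    # low temperature -> steep, nearly binary response.
    return 1.0 / (1.0 + math.exp(-net_input / temperature))

# During supervised learning, the temperature would be updated alongside
# the synaptic weights, e.g. by gradient descent on the output error.
```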

  7. Intrinsic protective mechanisms of the neuron-glia network against glioma invasion.

    PubMed

    Iwadate, Yasuo; Fukuda, Kazumasa; Matsutani, Tomoo; Saeki, Naokatsu

    2016-04-01

    Gliomas arising in the brain parenchyma infiltrate into the surrounding brain and break down established complex neuron-glia networks. However, mounting evidence suggests that initially the network microenvironment of the adult central nervous system (CNS) is innately non-permissive to glioma cell invasion. The main players are inhibitory molecules in CNS myelin, as well as proteoglycans associated with astrocytes. Neural stem cells, and neurons themselves, possess inhibitory functions against neighboring tumor cells. These mechanisms have evolved to protect the established neuron-glia network, which is necessary for brain function. Greater insight into the interaction between glioma cells and the surrounding neuron-glia network is crucial for developing new therapies for treating these devastating tumors while preserving the important and complex neural functions of patients. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Network synchronization in hippocampal neurons.

    PubMed

    Penn, Yaron; Segal, Menahem; Moses, Elisha

    2016-03-22

    Oscillatory activity is widespread in dynamic neuronal networks. The main paradigm for the origin of periodicity consists of specialized pacemaking elements that synchronize and drive the rest of the network; however, other models exist. Here, we studied the spontaneous emergence of synchronized periodic bursting in a network of cultured dissociated neurons from rat hippocampus and cortex. Surprisingly, about 60% of all active neurons were self-sustained oscillators when disconnected, each with its own natural frequency. The individual neuron's tendency to oscillate and the corresponding oscillation frequency are controlled by its excitability. The single-neuron intrinsic oscillations were blocked by riluzole and are thus dependent on persistent sodium leak currents. As connectivity is gradually restored, synchrony evolves: loose synchrony appears even at weak connectivity, with the oscillators converging to one common oscillation frequency, yet shifted in phase across the population. Further strengthening of the connectivity causes a reduction in the mean phase shifts until zero-lag is achieved, manifested by synchronous periodic network bursts. Interestingly, the frequency of network bursting matches the average of the intrinsic frequencies. Overall, the network behaves like other universal systems, where order emerges spontaneously by entrainment of independent rhythmic units. Although simplified with respect to circuitry in the brain, our results attribute a basic functional role for intrinsic single neuron excitability mechanisms in driving the network's activity and dynamics, contributing to our understanding of developing neural circuits.
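
    The entrainment picture above (independent oscillators converging to a common rhythm as coupling strengthens) can be illustrated with a Kuramoto phase model. This is a standard abstraction, not the authors' biophysical model, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def phase_coherence(omegas, k, t_max=50.0, dt=0.01, seed=1):
    """Simulate Kuramoto oscillators and return the order parameter R
    (R ~ 0: independent phases; R ~ 1: fully synchronized)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, len(omegas))
    for _ in range(int(t_max / dt)):
        # each oscillator is pulled toward the phases of all the others
        pull = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += dt * (omegas + k * pull)
    return abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(0)
omegas = rng.normal(1.0, 0.1, 20)   # heterogeneous intrinsic frequencies
weak = phase_coherence(omegas, 0.05)    # below critical coupling: incoherent
strong = phase_coherence(omegas, 2.0)   # above critical coupling: entrained
```

    The synchronized population rotates at roughly the mean of the intrinsic frequencies, mirroring the abstract's observation that the network burst frequency matches the average of the single-neuron frequencies.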

  9. Temporal coding in a silicon network of integrate-and-fire neurons.

    PubMed

    Liu, Shih-Chii; Douglas, Rodney

    2004-09-01

    Spatio-temporal processing of spike trains by neuronal networks depends on a variety of mechanisms distributed across synapses, dendrites, and somata. In natural systems, the spike trains and the processing mechanisms cohere through their common physical instantiation. This coherence is lost when the natural system is encoded for simulation on a general purpose computer. By contrast, analog VLSI circuits are, like neurons, inherently related by their real-time physics, and so, could provide a useful substrate for exploring neuronlike event-based processing. Here, we describe a hybrid analog-digital VLSI chip comprising a set of integrate-and-fire neurons and short-term dynamical synapses that can be configured into simple network architectures with some properties of neocortical neuronal circuits. We show that, despite considerable fabrication variance in the properties of individual neurons, the chip offers a viable substrate for exploring real-time spike-based processing in networks of neurons.

  10. Causal Interrogation of Neuronal Networks and Behavior through Virally Transduced Ivermectin Receptors.

    PubMed

    Obenhaus, Horst A; Rozov, Andrei; Bertocchi, Ilaria; Tang, Wannan; Kirsch, Joachim; Betz, Heinrich; Sprengel, Rolf

    2016-01-01

    The causal interrogation of neuronal networks involved in specific behaviors requires the spatially and temporally controlled modulation of neuronal activity. For long-term manipulation of neuronal activity, chemogenetic tools provide a reasonable alternative to short-term optogenetic approaches. Here we show that virus-mediated gene transfer of the ivermectin (IVM)-activated glycine receptor mutant GlyRα1(AG) can be used for the selective and reversible silencing of specific neuronal networks in mice. In the striatum, dorsal hippocampus, and olfactory bulb, GlyRα1(AG) promoted IVM-dependent effects in representative behavioral assays. Moreover, GlyRα1(AG) mediated silencing had a strong and reversible impact on neuronal ensemble activity and c-Fos activation in the olfactory bulb. Together, our results demonstrate that long-term, reversible and re-inducible neuronal silencing via GlyRα1(AG) is a promising tool for the interrogation of network mechanisms underlying the control of behavior and memory formation.

  11. The emergence of spontaneous activity in neuronal cultures

    NASA Astrophysics Data System (ADS)

    Orlandi, J. G.; Alvarez-Lacalle, E.; Teller, S.; Soriano, J.; Casademunt, J.

    2013-01-01

    In vitro neuronal networks of dissociated hippocampal or cortical tissues are one of the most attractive model systems for the physics and neuroscience communities. Cultured neurons grow and mature, develop axons and dendrites, and quickly connect to their neighbors to establish a spontaneously active network within a week. The resulting neuronal network is characterized by a combination of excitatory and inhibitory neurons coupled through synaptic connections that interact in a highly nonlinear manner. The nonlinear behavior emerges from the dynamics of both the neurons' spiking activity and synaptic transmission, together with biological noise. These ingredients give rise to a rich repertoire of phenomena that are still poorly understood, including the emergence and maintenance of periodic spontaneous activity, avalanches, propagation of fronts, and synchronization. In this work we present an overview of the rich activity of cultured neuronal networks, and detail the minimal theoretical considerations needed to describe experimental observations.

  12. Synchronization in neural nets

    NASA Technical Reports Server (NTRS)

    Vidal, Jacques J.; Haggerty, John

    1988-01-01

    The paper presents an artificial neural network concept (the Synchronizable Oscillator Networks) where the instants of individual firings in the form of point processes constitute the only form of information transmitted between adjoining neurons. In the model, neurons fire spontaneously and regularly in the absence of perturbation. When interaction is present, the scheduled firings are advanced or delayed by the firing of neighboring neurons. Networks of such neurons become global oscillators which exhibit multiple synchronizing attractors. From arbitrary initial states, energy minimization learning procedures can make the network converge to oscillatory modes that satisfy multi-dimensional constraints. Such networks can directly represent routing and scheduling problems that consist of ordering sequences of events.
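
    The firing-advance mechanism can be caricatured with a pulse-coupled toy model: identical oscillators with linear unit-period phases and a fixed phase advance eps. These are illustrative assumptions, not the paper's model. In this linear toy an oscillator is absorbed into synchrony only when a neighbor's firing pushes it past threshold; with concave charging curves (as in the later Mirollo-Strogatz analysis) synchronization becomes generic.

```python
def events_to_sync(phases, eps, max_events=100):
    """Event-driven pulse-coupled oscillators with unit period.

    When an oscillator reaches phase 1 it fires and resets to 0; every
    other oscillator's phase jumps by eps, firing too (absorption) if the
    jump carries it past threshold. Returns the number of firing events
    until all oscillators fire together, or None if they never do.
    """
    phases = list(phases)
    for event in range(1, max_events + 1):
        gap = 1.0 - max(phases)              # time until the next firing
        phases = [p + gap for p in phases]
        # reset the firing oscillator; advance the rest, absorbing any
        # oscillator that the pulse pushes past threshold
        phases = [0.0 if p >= 1.0 or p + eps >= 1.0 else p + eps
                  for p in phases]
        if all(p == 0.0 for p in phases):
            return event
    return None
```

    Two oscillators starting within a pulse of each other merge at the first firing, whereas widely separated linear-phase oscillators can cycle forever, which is precisely why attractor structure matters in such networks.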

  13. Intrinsic Neuronal Properties Switch the Mode of Information Transmission in Networks

    PubMed Central

    Gjorgjieva, Julijana; Mease, Rebecca A.; Moody, William J.; Fairhall, Adrienne L.

    2014-01-01

    Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over a few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of the network's function from propagating network-wide information to performing computations scaled to local activity.
This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission. PMID:25474701

  14. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    PubMed

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed into another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  15. Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators.

    PubMed

    Xu, Kesheng; Maidana, Jean Paul; Castro, Samy; Orio, Patricio

    2018-05-30

    Chaotic dynamics has been observed in neurons and neural networks, both in experimental data and in numerical simulations, and theoretical studies have proposed an underlying role of chaos in neural systems. Nevertheless, whether chaotic neural oscillators make a significant contribution to network behaviour, and whether the dynamical richness of neural networks is sensitive to the dynamics of isolated neurons, still remain open questions. We investigated synchronization transitions in heterogeneous neural networks of neurons connected by electrical coupling in a small-world topology. The nodes in our model are oscillatory neurons that, when isolated, can exhibit either chaotic or non-chaotic behaviour, depending on conductance parameters. We found that the heterogeneity of firing rates and firing patterns makes a greater contribution than chaos to the steepness of the synchronization transition curve. We also show that chaotic dynamics of the isolated neurons does not always make a visible difference in the transition to full synchrony. Moreover, macroscopic chaos is observed regardless of the dynamical nature of the individual neurons. However, performing a Functional Connectivity Dynamics analysis, we show that chaotic nodes can promote what is known as multi-stable behaviour, where the network dynamically switches between a number of different semi-synchronized, metastable states.
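
    The small-world substrate used in such studies can be generated with the standard Watts-Strogatz construction. The sketch below is a generic stdlib-only version with illustrative parameters, not the authors' code: a regular ring lattice is built and each edge is rewired with probability p, interpolating between a lattice (p = 0) and a random graph (p = 1).

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring of n nodes, each wired to its k nearest neighbours on one side
    (degree 2k); every edge is then rewired with probability p."""
    rng = random.Random(seed)
    ring = [(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)]
    edges = set(ring)
    result = set()
    for u, v in ring:
        if rng.random() < p:
            w = rng.randrange(n)
            # avoid self-loops and duplicate links
            while (w == u or (u, w) in edges or (w, u) in edges
                    or (u, w) in result or (w, u) in result):
                w = rng.randrange(n)
            result.add((u, w))
        else:
            result.add((u, v))
    return result
```

    The edge count is preserved by construction, so network effects can be attributed to topology rather than to connection density.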

  16. Biological conservation law as an emerging functionality in dynamical neuronal networks.

    PubMed

    Podobnik, Boris; Jusup, Marko; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M; Stanley, H Eugene

    2017-11-07

    Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law: the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective.

  17. Biological conservation law as an emerging functionality in dynamical neuronal networks

    PubMed Central

    Podobnik, Boris; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M.

    2017-01-01

    Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law—the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective. PMID:29078286

  18. Molecular changes in brain aging and Alzheimer’s disease are mirrored in experimentally silenced cortical neuron networks

    PubMed Central

    Gleichmann, Marc; Zhang, Yongqing; Wood, William H.; Becker, Kevin G.; Mughal, Mohamed R.; Pazin, Michael J.; van Praag, Henriette; Kobilo, Tali; Zonderman, Alan B.; Troncoso, Juan C.; Markesbery, William R.; Mattson, Mark P.

    2010-01-01

    Activity-dependent modulation of neuronal gene expression promotes neuronal survival and plasticity, and neuronal network activity is perturbed in aging and Alzheimer’s disease (AD). Here we show that cerebral cortical neurons respond to chronic suppression of excitability by downregulating the expression of genes and their encoded proteins involved in inhibitory transmission (GABAergic and somatostatin) and Ca2+ signaling; alterations in pathways involved in lipid metabolism and energy management are also features of silenced neuronal networks. A molecular fingerprint strikingly similar to that of diminished network activity occurs in the human brain during aging and in AD, and opposite changes occur in response to activation of N-methyl-D-aspartate (NMDA) and brain-derived neurotrophic factor (BDNF) receptors in cultured cortical neurons and in mice in response to an enriched environment or electroconvulsive shock. Our findings suggest that reduced inhibitory neurotransmission during aging and in AD may be the result of compensatory responses that, paradoxically, render the neurons vulnerable to Ca2+-mediated degeneration. PMID:20947216

  19. Stochastic multiresonance in coupled excitable FHN neurons

    NASA Astrophysics Data System (ADS)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2018-04-01

    In this paper, effects of noise on Watts-Strogatz small-world neuronal networks, which are stimulated by a subthreshold signal, have been investigated. With the numerical simulations, it is surprisingly found that there exist several optimal noise intensities at which the subthreshold signal can be detected efficiently. This indicates the occurrence of stochastic multiresonance in the studied neuronal networks. Moreover, it is revealed that the occurrence of stochastic multiresonance has a close relationship with the period of the subthreshold signal, Te, and the noise-induced mean period of the neuronal networks, T0. In detail, we find that noise can induce the neuronal networks to generate stochastic resonance M times if Te is not very large and falls into the interval (M×T0, (M+1)×T0), with M being a positive integer. In real neuronal systems, subthreshold signal detection is very meaningful. Thus, the obtained results may have important implications for detecting subthreshold signals and propagating neuronal information in neuronal systems.
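
    The stated criterion makes the predicted number of resonance peaks a simple function of the two periods. The helper below is a direct restatement of the abstract's condition, with illustrative numbers, not code from the paper.

```python
def resonance_count(t_signal, t_noise):
    """Number M of stochastic-resonance peaks predicted when the signal
    period Te falls in the interval (M*T0, (M+1)*T0)."""
    return int(t_signal // t_noise)

# e.g. a signal period 2.5 times the noise-induced mean period
# predicts M = 2 optimal noise intensities
m = resonance_count(2.5, 1.0)
```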

  20. The circadian rhythm induced by the heterogeneous network structure of the suprachiasmatic nucleus

    NASA Astrophysics Data System (ADS)

    Gu, Changgui; Yang, Huijie

    2016-05-01

    In mammals, the master clock is located in the suprachiasmatic nucleus (SCN), which is composed of about 20 000 nonidentical neuronal oscillators expressing different intrinsic periods. These neurons are coupled through neurotransmitters to form a network consisting of two subgroups, i.e., a ventrolateral (VL) subgroup and a dorsomedial (DM) subgroup. The VL contains about 25% of SCN neurons, which receive photic input from the retina, and the DM comprises the remaining 75%, which are coupled to the VL. The synapses from the VL to the DM are evidently denser than those from the DM to the VL, so that the VL dominates the DM. Therefore, the SCN is a heterogeneous network where the neurons of the VL are linked with a large number of SCN neurons. In the present study, we mimicked the SCN network based on the Goodwin model, considering four types of networks: an all-to-all network, a Newman-Watts (NW) small-world network, an Erdős-Rényi (ER) random network, and a Barabási-Albert (BA) scale-free network. We found that the circadian rhythm was induced in the BA, ER, and NW networks, but was absent in the all-to-all network with weak cellular coupling; the amplitude of the circadian rhythm was largest in the BA network, which is the most heterogeneous in network structure. Our finding provides an alternative explanation for the induction or enhancement of circadian rhythm by the heterogeneity of the network structure.
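
    Of the four topologies, the most heterogeneous, the Barabási-Albert scale-free graph, is built by preferential attachment. The sketch below is the generic construction with illustrative parameters (stdlib-only), not the authors' implementation: each arriving node links to existing nodes with probability proportional to their current degree, so a few hubs accumulate many connections, mirroring the dominance of the VL subgroup.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Preferential attachment: each new node links to m existing nodes
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = []
    degree_pool = []          # each node appears once per unit of degree
    targets = list(range(m))  # seed nodes for the first arrival
    for new in range(m, n):
        for t in targets:
            edges.append((new, t))
        degree_pool.extend(targets)
        degree_pool.extend([new] * m)
        # choose m distinct, degree-weighted targets for the next arrival
        targets = []
        while len(targets) < m:
            cand = rng.choice(degree_pool)
            if cand not in targets:
                targets.append(cand)
    return edges
```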

  1. Experiments in clustered neuronal networks: A paradigm for complex modular dynamics

    NASA Astrophysics Data System (ADS)

    Teller, Sara; Soriano, Jordi

    2016-06-01

    Uncovering the interplay between activity and connectivity is one of the major challenges in neuroscience. To deepen the understanding of how a neuronal circuit shapes network dynamics, neuronal cultures have emerged as remarkable systems given their accessibility and easy manipulation. An attractive configuration of these in vitro systems consists of an ensemble of interconnected clusters of neurons. Using calcium fluorescence imaging to monitor spontaneous activity in these clustered neuronal networks, we were able to draw functional maps and reveal their topological features. We also observed that these networks exhibit a hierarchical modular dynamics, in which clusters fire in small groups that shape characteristic communities in the network. The structure and stability of these communities is sensitive to chemical or physical action, and therefore their analysis may serve as a proxy for network health. Indeed, the combination of all these approaches is helping to develop models to quantify damage upon network degradation, with promising applications for the study of neurological disorders in vitro.

  2. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, a novel approach in Artificial Intelligence with implications for computational science as well as for the understanding of brain function.

  3. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, a novel approach in Artificial Intelligence with implications for computational science as well as for the understanding of brain function. PMID:21526157

  4. A Hox regulatory network establishes motor neuron pool identity and target-muscle connectivity.

    PubMed

    Dasen, Jeremy S; Tice, Bonnie C; Brenner-Morton, Susan; Jessell, Thomas M

    2005-11-04

    Spinal motor neurons acquire specialized "pool" identities that determine their ability to form selective connections with target muscles in the limb, but the molecular basis of this striking example of neuronal specificity has remained unclear. We show here that a Hox transcriptional regulatory network specifies motor neuron pool identity and connectivity. Two interdependent sets of Hox regulatory interactions operate within motor neurons, one assigning rostrocaudal motor pool position and a second directing motor pool diversity at a single segmental level. This Hox regulatory network directs the downstream transcriptional identity of motor neuron pools and defines the pattern of target-muscle connectivity.

  5. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    NASA Astrophysics Data System (ADS)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy mouse spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network's main features for a comparative investigation of neurodegenerative diseases and therapies.

  6. Synchronous firing patterns of induced pluripotent stem cell-derived cortical neurons depend on the network structure consisting of excitatory and inhibitory neurons.

    PubMed

    Iida, Shoko; Shimba, Kenta; Sakai, Koji; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2018-06-18

    The balance between glutamate-mediated excitation and GABA-mediated inhibition is critical to cortical functioning. However, the contribution of the network structure, consisting of both neuron types, to cortical functioning has not been elucidated. We aimed to evaluate the relationship between the network structure and functional activity patterns in vitro. We used mouse induced pluripotent stem cells (iPSCs) to construct three types of neuronal populations: excitatory-rich (Exc), inhibitory-rich (Inh), and control (Cont). We then analyzed the activity patterns of these neuronal populations using microelectrode arrays (MEAs). Inhibitory synaptic densities differed between the three types of iPSC-derived neuronal populations, and the neurons showed spontaneously synchronized bursting activity with functional maturation over one month. Moreover, different firing patterns were observed between the three populations: Exc demonstrated the highest firing rates, including frequent, long, and dominant bursts, whereas Inh demonstrated the lowest firing rates and the least dominant bursts. Synchronized bursts were enhanced by disinhibition via GABAA receptor blockade. The present study, using iPSC-derived neurons and MEAs, shows for the first time that synchronized bursting of cortical networks in vitro depends on the network structure consisting of excitatory and inhibitory neurons. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Inferring Single Neuron Properties in Conductance Based Balanced Networks

    PubMed Central

    Pool, Román Rossi; Mato, Germán

    2011-01-01

    Balanced states in large networks are a usual hypothesis for explaining the variability of neural activity in cortical systems. In this regime the statistics of the inputs is characterized by static and dynamic fluctuations, with the dynamic fluctuations following a Gaussian distribution. Such statistics allow the use of reverse correlation methods, by recording synaptic inputs and the spike trains of ongoing spontaneous activity without any additional input. With this method, properties of the single neuron dynamics that are masked by the balanced state can be quantified. To show the feasibility of this approach we apply it to large networks of conductance based neurons. The networks are classified as Type I or Type II according to the bifurcations which neurons of the different populations undergo near the firing onset. We also analyze mixed networks, in which each population has a mixture of different neuronal types. We determine under which conditions the intrinsic noise generated by the network can be used to apply reverse correlation methods. We find that under realistic conditions we can ascertain with low error the types of neurons present in the network. We also find that data from neurons with similar firing rates can be combined to perform covariance analysis. We compare the results of these methods (which do not require any external input) to the standard procedure (which requires the injection of Gaussian noise into a single neuron), and find a good agreement between the two procedures. PMID:22016730
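
    Reverse correlation in its simplest form is the spike-triggered average: the mean stimulus segment preceding each spike. The sketch below is this generic first step, not the paper's covariance-based pipeline, and the variable names are illustrative.

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, window):
    """Mean stimulus segment of length `window` preceding each spike."""
    segments = [stimulus[t - window:t]
                for t in spike_times if t >= window]
    return np.mean(segments, axis=0)

# In the balanced-network setting, `stimulus` would be the recorded
# synaptic input and `spike_times` the spontaneous spikes, so no external
# noise injection is required.
```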

  8. Impact of delays on the synchronization transitions of modular neuronal networks with hybrid synapses

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok

    2013-09-01

    The combined effects of the information transmission delay and of the ratio of electrical to chemical synapses on synchronization transitions in a hybrid modular neuronal network are investigated in this paper. Numerical results show that the synchronization of neuronal activities can be either promoted or destroyed as the information transmission delay increases, irrespective of the probability of electrical synapses in the hybrid-synaptic network. Interestingly, when the number of electrical synapses exceeds a certain level, further increasing their proportion can obviously enhance the spatiotemporal synchronization transitions. Moreover, the coupling strength has a significant effect on the synchronization transition. The dominant synapse type always has the more profound effect on the emergence of synchronous behaviors. Furthermore, the results for modular neuronal network structures demonstrate that excessive partitioning of the modular network may dramatically impair neuronal synchronization. Considering that information transmission delays are inevitable in intra- and inter-network neuronal communication, the obtained results may have important implications for the exploration of the synchronization mechanism underlying several neural system diseases such as Parkinson's disease.

  9. Ergodic properties of spiking neuronal networks with delayed interactions

    NASA Astrophysics Data System (ADS)

    Palmigiano, Agostina; Wolf, Fred

    The dynamical stability of neuronal networks, and the possibility of chaotic dynamics in the brain, pose profound questions about the mechanisms underlying perception. Here we advance the tractability of large neuronal networks of exactly solvable neuronal models with delayed pulse-coupled interactions. Pulse-coupled delayed systems with an infinite-dimensional phase space can be studied in equivalent systems of fixed and finite degrees of freedom by introducing a delayer variable for each neuron. A Jacobian of the equivalent system can be analytically obtained and numerically evaluated. We find that, depending on the action potential onset rapidness and the level of heterogeneities, the asynchronous irregular regime characteristic of balanced-state networks loses stability with increasing delays to either a slow synchronous irregular or a fast synchronous irregular state. In networks of neurons with slow action potential onset, the transition to collective oscillations leads to an increase of the exponential rate of divergence of nearby trajectories and of the entropy production rate of the chaotic dynamics. The attractor dimension, instead of increasing linearly with increasing delay as reported in many other studies, decreases until eventually the network reaches full synchrony.

  10. The application of the multi-alternative approach in active neural network models

    NASA Astrophysics Data System (ADS)

    Podvalny, S.; Vasiljev, E.

    2017-02-01

    The article addresses the construction of intelligent systems based on artificial neural networks. We discuss the basic discrepancies between artificial neural networks and their biological prototypes. It is shown that the main reason for these discrepancies is the structural immutability of neural network models during learning, that is, their passivity. Based on the modern understanding of the biological nervous system as a structured ensemble of nerve cells, we propose to abandon attempts to simulate its operation at the level of elementary neuron processes and instead to reproduce the information structure of data storage and processing on the basis of sufficiently general evolutionary principles of multi-alternativity: multi-level structural modeling, diversity, and modularity. A method implementing these principles is offered, using a faceted memory organization in a neural network with a reconfigurable active structure. An example is given of an active facet-type neural network applied in an intelligent decision-making system under critical-event conditions in an electrical distribution system.

  11. Autapse-Induced Spiral Wave in Network of Neurons under Noise

    PubMed Central

    Qin, Huixin; Ma, Jun; Wang, Chunni; Wu, Ying

    2014-01-01

    An autapse plays an important role in regulating the electrical activity of a neuron by feeding back a time-delayed current onto its membrane. Autapses are introduced in a local area of a regular network of neurons to investigate the development of spatiotemporal patterns; the emergence of a spiral wave is observed, although the wave initially fails to grow and occupy the network completely. It is found that the spiral wave can be induced to occupy a larger area of the network under optimized noise, with either periodical or no-flux boundary conditions. The developed spiral wave, with its self-sustained property, can regulate the collective behavior of neurons as a pacemaker. To detect this collective behavior, a statistical factor of synchronization is calculated to investigate the emergence of an ordered state in the network. The network maintains the ordered state when a self-sustained spiral wave is formed under noise and a local autapse, independent of whether periodical or no-flux boundary conditions are selected. The developed stable spiral wave could be helpful for memory due to its distinct self-sustained property. PMID:24967577
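    The "statistical factor of synchronization" used in studies of this kind is typically the variance of the population mean field divided by the average single-unit variance. Below is a minimal, self-contained sketch of such a measure on synthetic time series, not the paper's code; the test signals and constants are illustrative.

```python
import numpy as np

def sync_factor(x):
    """Statistical factor of synchronization for time series x[t, i]:
    variance of the mean field over the mean single-unit variance.
    R -> 1 for perfect synchrony, R -> 0 for a large desynchronized network."""
    mean_field = x.mean(axis=1)
    return mean_field.var() / x.var(axis=0).mean()

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 1000)
common = np.sin(2 * np.pi * t)

# Synchronized population: identical oscillation plus small private noise.
x_sync = common[:, None] + 0.05 * rng.normal(size=(1000, 50))
# Desynchronized population: each unit oscillates with a random phase.
phases = rng.uniform(0, 2 * np.pi, 50)
x_async = np.sin(2 * np.pi * t[:, None] + phases[None, :])

print(sync_factor(x_sync))    # close to 1
print(sync_factor(x_async))   # close to 0
```

In the desynchronized case the individual oscillations cancel in the mean field, so its variance, and hence the factor, collapses toward zero.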

  12. An integrate-and-fire model for synchronized bursting in a network of cultured cortical neurons.

    PubMed

    French, D A; Gruenstein, E I

    2006-12-01

    It has been suggested that spontaneous synchronous neuronal activity is an essential step in the formation of functional networks in the central nervous system. The key features of this type of activity consist of bursts of action potentials with associated spikes of elevated cytoplasmic calcium. These features are also observed in networks of rat cortical neurons that have been formed in culture. Experimental studies of these cultured networks have led to several hypotheses for the mechanisms underlying the observed synchronized oscillations. In this paper, bursting integrate-and-fire type mathematical models for regular spiking (RS) and intrinsic bursting (IB) neurons are introduced and incorporated, through a small-world connection scheme, into a two-dimensional excitatory network similar to the cultured networks. This computer model exhibits spontaneous synchronous activity through mechanisms similar to those hypothesized for the cultured experimental networks. Traces of the membrane potential and cytoplasmic calcium from the model closely match those obtained from experiments. We also consider the impact on network behavior of the IB neurons, the geometry, and the small-world connection scheme.
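    A heavily stripped-down version of such a model can be sketched as leaky integrate-and-fire units with excitatory pulse coupling on a ring, with a fraction of connections randomly rewired in small-world style. This is a generic illustration, not the authors' RS/IB model; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small-world wiring: a ring of N neurons, each coupled to its k nearest
# neighbours, with a fraction p of edges rewired to random targets.
N, k, p = 100, 4, 0.1
targets = [set() for _ in range(N)]
for i in range(N):
    for j in range(1, k // 2 + 1):
        for t in ((i + j) % N, (i - j) % N):
            if rng.random() < p:
                t = int(rng.integers(N))
            if t != i:
                targets[i].add(t)

# Leaky integrate-and-fire dynamics with excitatory pulse coupling.
dt, tau, v_th, v_reset = 0.1, 10.0, 1.0, 0.0
I_ext, w = 1.05, 0.02                   # suprathreshold drive + weak coupling
v = rng.random(N)                       # random initial membrane potentials
spike_counts = np.zeros(N, dtype=int)

for _ in range(5000):                   # 500 ms of simulated time
    fired = v >= v_th
    spike_counts += fired
    v[fired] = v_reset
    pulse = np.zeros(N)
    for i in np.flatnonzero(fired):     # deliver pulses to postsynaptic targets
        for t in targets[i]:
            pulse[t] += w
    v += dt / tau * (I_ext - v) + pulse

print(spike_counts.mean())              # every neuron fires repeatedly
```

With a suprathreshold drive every unit fires periodically, and the excitatory pulses pull the firing times of coupled units together; adding calcium dynamics and the RS/IB distinction is what the modeled study does on top of such a skeleton.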

  13. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV: structuring synaptic pathways among recurrent connections.

    PubMed

    Gilson, Matthieu; Burkitt, Anthony N; Grayden, David B; Thomas, Doreen A; van Hemmen, J Leo

    2009-12-01

    In neuronal networks, the changes of synaptic strength (or weight) performed by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this phenomenon occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyze the interplay between the neuronal activity (firing rates and the spike-time correlations) and the learning dynamics, when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead to both a stabilization of all the neuron firing rates (homeostatic equilibrium) and a robust weight specialization. The pattern of specialization for the recurrent weights is determined by a relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters and the synaptic response properties. We find conditions for feed-forward pathways or areas with strengthened self-feedback to emerge in an initially homogeneous recurrent network.
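    The pairwise STDP rule underlying such analyses is commonly written as an exponential window in the pre/post spike-time difference. The sketch below shows a generic version of that window, not the specific parameterization used in the article; the amplitudes and time constants are illustrative.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)
    return -a_minus * np.exp(dt_ms / tau_minus)

# Causal pairing strengthens the synapse, anti-causal pairing weakens it,
# and the effect decays with the time difference.
print(stdp_dw(10.0))    # positive
print(stdp_dw(-10.0))   # negative
```

Averaging this window against the spike-time correlations of the inputs is what produces the rate stabilization and weight specialization analyzed in the article.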

  14. Autapse-induced spiral wave in network of neurons under noise.

    PubMed

    Qin, Huixin; Ma, Jun; Wang, Chunni; Wu, Ying

    2014-01-01

    An autapse plays an important role in regulating the electrical activity of a neuron by feeding back a time-delayed current onto its membrane. Autapses are introduced in a local area of a regular network of neurons to investigate the development of spatiotemporal patterns; the emergence of a spiral wave is observed, although the wave initially fails to grow and occupy the network completely. It is found that the spiral wave can be induced to occupy a larger area of the network under optimized noise, with either periodical or no-flux boundary conditions. The developed spiral wave, with its self-sustained property, can regulate the collective behavior of neurons as a pacemaker. To detect this collective behavior, a statistical factor of synchronization is calculated to investigate the emergence of an ordered state in the network. The network maintains the ordered state when a self-sustained spiral wave is formed under noise and a local autapse, independent of whether periodical or no-flux boundary conditions are selected. The developed stable spiral wave could be helpful for memory due to its distinct self-sustained property.

  15. The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model.

    PubMed

    Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena

    2015-01-01

    We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, the node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena ranging from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible, and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, the property that might affect network capacity to exhibit different dynamical regimes.
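    Two of the graph measures named above, the clustering coefficient and the minimal (shortest) path length, can be computed directly from an adjacency structure. The following is a generic, self-contained sketch, not the authors' implementation; the 4-clique test graph is illustrative.

```python
from collections import deque
from itertools import combinations

def clustering_coefficient(adj):
    """Mean local clustering: fraction of each node's neighbour pairs
    that are themselves connected (undirected adjacency dict of sets)."""
    coeffs = []
    for node, nbrs in adj.items():
        if len(nbrs) < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        coeffs.append(2.0 * links / (len(nbrs) * (len(nbrs) - 1)))
    return sum(coeffs) / len(coeffs)

def mean_path_length(adj):
    """Average shortest-path length over all connected node pairs (BFS)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# A 4-clique: fully clustered, every pair at distance 1.
clique = {i: {j for j in range(4) if j != i} for i in range(4)}
print(clustering_coefficient(clique))  # 1.0
print(mean_path_length(clique))        # 1.0
```

A small-world coefficient is then obtained by comparing these two quantities against their values in a density-matched random graph.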

  16. Effect of the heterogeneous neuron and information transmission delay on stochastic resonance of neuronal networks

    NASA Astrophysics Data System (ADS)

    Wang, Qingyun; Zhang, Honghui; Chen, Guanrong

    2012-12-01

    We study the effects of a heterogeneous neuron and of information transmission delay on stochastic resonance in scale-free neuronal networks. For this purpose, we introduce heterogeneity in the neuron with the highest degree. It is shown that in the absence of delay, an intermediate noise level can optimally assist spike firing of the collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate values of αh, the parameter controlling the heterogeneity. Maxima of the stochastic resonance measure are enhanced as αh increases, which implies that heterogeneity can improve stochastic resonance. However, when αh is beyond a certain large value, no obvious stochastic resonance can be observed. If information transmission delay is introduced to the neuronal networks, stochastic resonance is dramatically affected. In particular, a tuned information transmission delay can induce multiple stochastic resonance, manifested as well-expressed maxima in the stochastic resonance measure appearing at every multiple of one half of the subthreshold stimulus period. Furthermore, we observe that stochastic resonance at odd multiples of one half of the subthreshold stimulus period is subharmonic, as opposed to the case of even multiples. More interestingly, multiple stochastic resonance can also be improved by a suitably heterogeneous neuron. The presented results provide insights into the effects of neuronal heterogeneity and information transmission delay on realistic neuronal networks.

  17. A distance constrained synaptic plasticity model of C. elegans neuronal network

    NASA Astrophysics Data System (ADS)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry into the principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework that specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs, and the observed number of driver neurons. The distance-constrained model suggests that optimal long-distance synaptic connections are a key feature specifying control of the network.
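    A distance constraint of the kind used as a guiding principle here can be sketched by keeping candidate edges on a ring with a probability that decays with wiring distance. This is a toy illustration, not the authors' model; the decay form exp(-beta*d) and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def distance_constrained_graph(n, beta, rng):
    """Ring of n nodes; each candidate edge (i, j) is kept with probability
    exp(-beta * d), where d is the circular distance between the nodes.
    Larger beta penalizes long-range (metabolically costly) wiring."""
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = min(j - i, n - (j - i))
            if rng.random() < np.exp(-beta * d):
                edges.append((i, j, d))
    return edges

short_range = distance_constrained_graph(100, 1.0, rng)   # strong constraint
long_range = distance_constrained_graph(100, 0.05, rng)   # weak constraint

mean_d_short = np.mean([d for _, _, d in short_range])
mean_d_long = np.mean([d for _, _, d in long_range])
print(mean_d_short, mean_d_long)  # stronger constraint -> shorter edges
```

Tuning the constraint trades off a few long-range shortcuts (which shorten path lengths and aid controllability) against mostly local, clustered wiring.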

  18. Lateral Information Processing by Spiking Neurons: A Theoretical Model of the Neural Correlate of Consciousness

    PubMed Central

    Ebner, Marc; Hameroff, Stuart

    2011-01-01

    Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone, moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions forms a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities.
The system demonstrates a moving sub-network zone of synchrony, within which the contents of perception are represented and contained. This mobile zone can be viewed as a model of the neural correlate of consciousness in the brain. PMID:22046178

  19. Small Modifications to Network Topology Can Induce Stochastic Bistable Spiking Dynamics in a Balanced Cortical Model

    PubMed Central

    McDonnell, Mark D.; Ward, Lawrence M.

    2014-01-01

    Directed random graph models are frequently used successfully in modeling the population dynamics of networks of cortical neurons connected by chemical synapses. Experimental results consistently reveal that neuronal network topology is complex, however, in the sense that it differs statistically from a random network, and differs for classes of neurons that are physiologically different. This suggests that complex network models whose subnetworks have distinct topological structure may be a useful, and more biologically realistic, alternative to random networks. Here we demonstrate that the balanced excitation and inhibition frequently observed in small cortical regions can transiently disappear in otherwise standard neuronal-scale models of fluctuation-driven dynamics, solely because the random network topology was replaced by a complex clustered one, whilst not changing the in-degree of any neurons. In this network, a small subset of cells whose inhibition comes only from outside their local cluster are the cause of bistable population dynamics, where different clusters of these cells irregularly switch back and forth from a sparsely firing state to a highly active state. Transitions to the highly active state occur when a cluster of these cells spikes sufficiently often to cause strong unbalanced positive feedback to each other. Transitions back to the sparsely firing state rely on occasional large fluctuations in the amount of non-local inhibition received. Neurons in the model are homogeneous in their intrinsic dynamics and in-degrees, but differ in the abundance of various directed feedback motifs in which they participate.
Our findings suggest that (i) models and simulations should take into account complex structure that varies for neuron and synapse classes; (ii) differences in the dynamics of neurons with similar intrinsic properties may be caused by their membership in distinctive local networks; (iii) it is important to identify neurons that share physiological properties and location, but differ in their connectivity. PMID:24743633

  20. Lateral information processing by spiking neurons: a theoretical model of the neural correlate of consciousness.

    PubMed

    Ebner, Marc; Hameroff, Stuart

    2011-01-01

    Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on "autopilot"). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the "conscious pilot") suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone, moves through the brain as an executive agent, converting nonconscious "auto-pilot" cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways "gap junctions" in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions forms a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities.
The system demonstrates a moving sub-network zone of synchrony, within which the contents of perception are represented and contained. This mobile zone can be viewed as a model of the neural correlate of consciousness in the brain.

  1. A synaptic organizing principle for cortical neuronal groups

    PubMed Central

    Perin, Rodrigo; Berger, Thomas K.; Markram, Henry

    2011-01-01

    Neuronal circuitry is often considered a clean slate that can be dynamically and arbitrarily molded by experience. However, when we investigated synaptic connectivity in groups of pyramidal neurons in the neocortex, we found that both connectivity and synaptic weights were surprisingly predictable. Synaptic weights follow very closely the number of connections in a group of neurons, saturating after only 20% of possible connections are formed between neurons in a group. When we examined the network topology of connectivity between neurons, we found that the neurons cluster into small world networks that are not scale-free, with less than 2 degrees of separation. We found a simple clustering rule where connectivity is directly proportional to the number of common neighbors, which accounts for these small world networks and accurately predicts the connection probability between any two neurons. This pyramidal neuron network clusters into multiple groups of a few dozen neurons each. The neurons composing each group are surprisingly distributed, typically more than 100 μm apart, allowing for multiple groups to be interlaced in the same space. In summary, we discovered a synaptic organizing principle that groups neurons in a manner that is common across animals and hence, independent of individual experiences. We speculate that these elementary neuronal groups are prescribed Lego-like building blocks of perception and that acquired memory relies more on combining these elementary assemblies into higher-order constructs. PMID:21383177
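    The clustering rule reported here, connection probability increasing with the number of common neighbors, can be sketched directly. The affine form and the values of `p0` and `slope` below are illustrative stand-ins, not the fitted relationship from the paper.

```python
def common_neighbors(adj, i, j):
    """Number of nodes adjacent to both i and j (adjacency dict of sets)."""
    return len(adj[i] & adj[j])

def connection_probability(adj, i, j, p0=0.05, slope=0.1):
    """Toy version of the common-neighbor rule: the probability that i and j
    are connected grows with their number of shared neighbours. p0 and slope
    are illustrative values, not fitted parameters."""
    return min(1.0, p0 + slope * common_neighbors(adj, i, j))

# Small fixed graph: nodes 0 and 1 share two neighbours (2 and 3),
# while node 4 is isolated.
adj = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}, 4: set()}
print(connection_probability(adj, 0, 1))  # boosted by shared neighbours
print(connection_probability(adj, 0, 4))  # baseline only
```

Iterating such a rule during network growth produces the clustered, non-scale-free small-world groups described in the abstract, because shared neighbors beget further shared connections.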

  2. Synchronization in a non-uniform network of excitatory spiking neurons

    NASA Astrophysics Data System (ADS)

    Echeveste, Rodrigo; Gros, Claudius

    Spontaneous synchronization of pulse-coupled elements is ubiquitous in nature and seems to be of vital importance for life. Networks of pacemaker cells in the heart, extended populations of Southeast Asian fireflies, and neuronal oscillations in cortical networks are examples of this. In the present work, a rich repertoire of dynamical states with different degrees of synchronization is found in a network of excitatory-only spiking neurons connected in a non-uniform fashion. In particular, uncorrelated and partially correlated states are found without the need for inhibitory neurons or external currents. The phase transitions between these states, as well as the robustness, stability, and response of the network to external stimuli, are studied.

  3. Sensitivity of feedforward neural networks to weight errors

    NASA Technical Reports Server (NTRS)

    Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney

    1990-01-01

    An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
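    The kind of sensitivity analysis described can be approximated numerically by perturbing the weights of a single threshold unit and counting output sign flips. This Monte Carlo sketch is an illustration of the setup, not the paper's analytical approximation; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def flip_probability(n_inputs, pct_error, trials=2000, rng=rng):
    """Estimate the probability that multiplicative weight errors of a given
    percentage flip the sign of a threshold unit's output, averaged over
    random bipolar inputs and random Gaussian weights."""
    flips = 0
    for _ in range(trials):
        w = rng.normal(size=n_inputs)
        x = rng.choice([-1.0, 1.0], size=n_inputs)
        signs = rng.choice([-1.0, 1.0], size=n_inputs)
        perturbed = w * (1.0 + pct_error * signs)
        if np.sign(w @ x) != np.sign(perturbed @ x):
            flips += 1
    return flips / trials

low = flip_probability(100, 0.05)
high = flip_probability(100, 0.50)
print(low, high)  # error probability grows with the weight perturbation
```

Consistent with the abstract, the estimate depends on the percentage error but is largely insensitive to the exact number of weights once that number is large.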

  4. On the continuous differentiability of inter-spike intervals of synaptically connected cortical spiking neurons in a neuronal network.

    PubMed

    Kumar, Gautam; Kothare, Mayuresh V

    2013-12-01

    We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.

  5. A Markovian event-based framework for stochastic spiking neural networks.

    PubMed

    Touboul, Jonathan D; Faugeras, Olivier D

    2011-11-01

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived for such classical cases of neural networks as linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.

  6. Network activity of mirror neurons depends on experience.

    PubMed

    Ushakov, Vadim L; Kartashov, Sergey I; Zavyalova, Victoria V; Bezverhiy, Denis D; Posichanyuk, Vladimir I; Terentev, Vasliliy N; Anokhin, Konstantin V

    2013-03-01

    In this work, we investigated how the network activity of mirror neuron systems in animal brains depends on experience (the presence or absence of prior performance of the observed actions). Mirror neuron networks were studied in C57/BL6 mice during observation of demonstrator mice swimming in a Morris water maze. Mirror neuron systems were found in motor cortex areas M1 and M2, the cingulate cortex, and the hippocampus in mouse groups both with and without swimming experience. We conclude that new functional network systems can be formed by means of mirror neuron systems, and that animals can acquire new knowledge through observation in non-specific tasks.

  7. Optimization Methods for Spiking Neurons and Networks

    PubMed Central

    Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph

    2011-01-01

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265

  9. Constructing Neuronal Network Models in Massively Parallel Environments

    PubMed Central

    Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808
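    The benchmark's phase split can be mimicked in miniature. The sketch below is a toy illustration of timing network construction separately from state propagation (not the authors' benchmark code; the connection rule and update rule here are stand-ins):

```python
import random
import time

def build_network(n, k, seed=0):
    """Construction phase: draw k random incoming connections per neuron."""
    rng = random.Random(seed)
    return [[rng.randrange(n) for _ in range(k)] for _ in range(n)]

def propagate(net, steps=10):
    """State-propagation phase: a trivial stand-in dynamical update."""
    state = [0.0] * len(net)
    for _ in range(steps):
        state = [sum(state[j] for j in row) / len(row) + 1.0 for row in net]
    return state

def timed(fn, *args):
    """Return (result, wall-clock seconds) for one phase."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0
```

    Timing `build_network` and `propagate` separately is the essential measurement design: it lets construction-time scaling be assessed independently of the dynamics, as in the benchmark described above.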

  10. Multiple fMRI system-level baseline connectivity is disrupted in patients with consciousness alterations.

    PubMed

    Demertzi, Athena; Gómez, Francisco; Crone, Julia Sophia; Vanhaudenhuyse, Audrey; Tshibanda, Luaba; Noirhomme, Quentin; Thonnard, Marie; Charland-Verville, Vanessa; Kirsch, Murielle; Laureys, Steven; Soddu, Andrea

    2014-03-01

    In healthy conditions, group-level fMRI resting state analyses identify ten resting state networks (RSNs) of cognitive relevance. Here, we aim to assess the ten-network model in severely brain-injured patients suffering from disorders of consciousness and to identify those networks which will be most relevant to discriminate between patients and healthy subjects. 300 fMRI volumes were obtained in 27 healthy controls and 53 patients in minimally conscious state (MCS), vegetative state/unresponsive wakefulness syndrome (VS/UWS) and coma. Independent component analysis (ICA) reduced data dimensionality. The ten networks were identified by means of a multiple template-matching procedure and were tested on neuronality properties (neuronal vs non-neuronal) in a data-driven way. Univariate analyses detected between-group differences in networks' neuronal properties and estimated voxel-wise functional connectivity in the networks, which were significantly less identifiable in patients. A nearest-neighbor "clinical" classifier was used to determine the networks with high between-group discriminative accuracy. Healthy controls were characterized by more neuronal components compared to patients in VS/UWS and in coma. Compared to healthy controls, fewer patients in MCS and VS/UWS showed components of neuronal origin for the left executive control network, default mode network (DMN), auditory, and right executive control network. The "clinical" classifier indicated the DMN and auditory network with the highest accuracy (85.3%) in discriminating patients from healthy subjects. fMRI multiple-network resting state connectivity is disrupted in severely brain-injured patients suffering from disorders of consciousness. When performing ICA, multiple-network testing and control for neuronal properties of the identified RSNs can advance fMRI system-level characterization. Automatic data-driven patient classification is the first step towards future single-subject objective diagnostics based on fMRI resting state acquisitions. Copyright © 2013 Elsevier Ltd. All rights reserved.
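    The "clinical" classifier step can be sketched as a plain nearest-neighbor rule. The two-feature vectors below are hypothetical stand-ins for per-network identifiability scores, not the study's actual feature set:

```python
def nearest_neighbor_label(sample, references):
    """Return the label of the reference vector closest to `sample`
    (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label, _ = min(references, key=lambda r: dist2(sample, r[1]))
    return label

# Hypothetical feature vectors: (DMN identifiability, auditory identifiability)
REFERENCES = [
    ("healthy", (0.9, 0.8)),
    ("patient", (0.3, 0.2)),
]
```

    A new subject's feature vector is simply assigned the label of its closest reference, which is why the choice of networks entering the feature vector (here DMN and auditory) drives the reported accuracy.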

  11. Synchronous behaviour in network model based on human cortico-cortical connections.

    PubMed

    Protachevicz, Paulo Ricardo; Borges, Rafael Ribaski; Reis, Adriane da Silva; Borges, Fernando da Silva; Iarosz, Kelly Cristina; Caldas, Ibere Luiz; Lameu, Ewandson Luiz; Macau, Elbert Einstein Nehrer; Viana, Ricardo Luiz; Sokolov, Igor M; Ferrari, Fabiano A S; Kurths, Jürgen; Batista, Antonio Marcos

    2018-06-22

    We consider a network topology according to the cortico-cortical connection network of the human brain, where each cortical area is composed of a random network of adaptive exponential integrate-and-fire neurons. Depending on the parameters, this neuron model can exhibit spike or burst patterns. As a diagnostic tool to identify spike and burst patterns we utilise the coefficient of variation of the neuronal inter-spike interval. In our neuronal network, we verify the existence of spike and burst synchronisation in different cortical areas. Our simulations show that the network arrangement, i.e., its rich-club organisation, plays an important role in the transition of the areas from desynchronous to synchronous behaviours. © 2018 Institute of Physics and Engineering in Medicine.
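    The diagnostic used here is easy to state concretely. A minimal version (assuming spike times in consistent units, e.g. ms):

```python
from statistics import mean, stdev

def isi_cv(spike_times):
    """Coefficient of variation of the inter-spike intervals.
    CV near 0 indicates regular (tonic) spiking; a large CV suggests bursting."""
    isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    return stdev(isis) / mean(isis)
```

    A perfectly periodic train gives CV = 0, while a bursting train (short intra-burst intervals interleaved with long silences) pushes the CV well above 1.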

  12. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity

    PubMed Central

    Srinivasa, Narayan; Cho, Youngkwan

    2014-01-01

    A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be readout by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns—both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity. PMID:25566045
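    The plasticity rule applied to every synapse here is pair-based STDP; a minimal excitatory-type window looks like the following (generic parameter values, not the paper's, and the paper's inhibitory rule may use a different window shape):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

    With a_minus slightly larger than a_plus, depression dominates on average, which is one common way to keep runaway potentiation in check while excitatory and inhibitory currents equilibrate.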

  14. Hopf bifurcation of an (n + 1) -neuron bidirectional associative memory neural network model with delays.

    PubMed

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of the delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and is also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.
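    The analysis pattern behind such results can be summarized as a generic template (not the paper's exact system): linearize the delayed network about the trivial steady state and locate purely imaginary roots of the characteristic equation.

```latex
% Linearization of x'(t) = A x(t) + B x(t - \tau) about the origin
% gives the characteristic equation
\det\!\left(\lambda I - A - B e^{-\lambda \tau}\right) = 0 .
% A Hopf bifurcation occurs when a root \lambda = i\omega (\omega > 0)
% crosses the imaginary axis at a critical delay \tau_k, with the
% transversality condition
\left.\frac{d\,\mathrm{Re}\,\lambda}{d\tau}\right|_{\tau = \tau_k} \neq 0 .
```

    Substituting lambda = i omega and separating real and imaginary parts yields the critical delays tau_k at which the bifurcating periodic solutions appear.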

  15. Interplay between population firing stability and single neuron dynamics in hippocampal networks

    PubMed Central

    Slomowitz, Edden; Styr, Boaz; Vertkin, Irena; Milshtein-Parush, Hila; Nelken, Israel; Slutsky, Michael; Slutsky, Inna

    2015-01-01

    Neuronal circuits' ability to maintain the delicate balance between stability and flexibility in changing environments is critical for normal neuronal functioning. However, to what extent individual neurons and neuronal populations maintain internal firing properties remains largely unknown. In this study, we show that distributions of spontaneous population firing rates and synchrony are subject to accurate homeostatic control following increase of synaptic inhibition in cultured hippocampal networks. Reduction in firing rate triggered synaptic and intrinsic adaptive responses operating as global homeostatic mechanisms to maintain firing macro-stability, without achieving local homeostasis at the single-neuron level. Adaptive mechanisms, while stabilizing population firing properties, reduced short-term facilitation essential for synaptic discrimination of input patterns. Thus, invariant ongoing population dynamics emerge from intrinsically unstable activity patterns of individual neurons and synapses. The observed differences in the precision of homeostatic control at different spatial scales challenge cell-autonomous theory of network homeostasis and suggest the existence of network-wide regulation rules. DOI: http://dx.doi.org/10.7554/eLife.04378.001 PMID:25556699
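    A common way to model global rate homeostasis of this kind is multiplicative synaptic scaling toward a target population rate. The function below is a generic sketch of that idea, not the synaptic and intrinsic adaptations the study measured experimentally:

```python
def scale_weights(weights, rate, target_rate, eta=0.1):
    """Multiplicative homeostatic scaling: firing below target scales all
    weights up; firing above target scales them down."""
    factor = 1.0 + eta * (target_rate - rate) / target_rate
    return [w * factor for w in weights]
```

    Because the rule acts on all weights through a single global factor, it can stabilize the population rate (macro-stability) without pinning any individual synapse to a fixed value, echoing the paper's distinction between global and single-neuron homeostasis.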

  16. Synchronization properties of networks of electrically coupled neurons in the presence of noise and heterogeneities.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas; Hakim, Vincent

    2009-06-01

    We investigate how synchrony can be generated or induced in networks of electrically coupled integrate-and-fire neurons subject to noisy and heterogeneous inputs. Using analytical tools, we find that in a network under constant external inputs, synchrony can appear via a Hopf bifurcation from the asynchronous state to an oscillatory state. In a homogeneous network, in the oscillatory state all neurons fire in synchrony, while in a heterogeneous network synchrony is looser, many neurons skipping cycles of the oscillation. If the transmission of action potentials via the electrical synapses is effectively excitatory, the Hopf bifurcation is supercritical, while effectively inhibitory transmission due to pronounced hyperpolarization leads to a subcritical bifurcation. In the latter case, the network exhibits bistability between an asynchronous state and an oscillatory state where all the neurons fire in synchrony. Finally we show that for time-varying external inputs, electrical coupling enhances the synchronization in an asynchronous network via a resonance at the firing-rate frequency.
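    A minimal two-cell version of the setup shows how an electrical (gap-junction) coupling term pulls firing together. This is a hedged toy sketch with identical constant drive, forward Euler integration, and generic parameters, not the paper's analytical network model:

```python
def coupled_lif(g_gap, i_ext=1.3, v_th=1.0, tau=10.0, dt=0.1, steps=5000):
    """Two identical LIF neurons, started out of phase, coupled electrically:
    dv_i/dt = (i_ext - v_i)/tau + g_gap * (v_j - v_i). Returns spike times."""
    v1, v2 = 0.0, 0.5                      # different initial conditions
    s1, s2 = [], []
    for k in range(steps):
        dv1 = dt * ((i_ext - v1) / tau + g_gap * (v2 - v1))
        dv2 = dt * ((i_ext - v2) / tau + g_gap * (v1 - v2))
        v1, v2 = v1 + dv1, v2 + dv2
        if v1 >= v_th:
            v1 = 0.0
            s1.append(k * dt)
        if v2 >= v_th:
            v2 = 0.0
            s2.append(k * dt)
    return s1, s2
```

    With g_gap = 0 the two cells keep their initial phase offset indefinitely; with sizable coupling the voltage difference decays between spikes and the cells end up firing together.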

  17. Neuronal networks: flip-flops in the brain.

    PubMed

    McCormick, David A

    2005-04-26

    Neuronal activity can rapidly flip-flop between stable states. Although these semi-stable states can be generated through interactions of neuronal networks, it is now known that they can also occur in vivo through intrinsic ionic currents.

  18. Matrix stiffness modulates formation and activity of neuronal networks of controlled architectures.

    PubMed

    Lantoine, Joséphine; Grevesse, Thomas; Villers, Agnès; Delhaye, Geoffrey; Mestdagh, Camille; Versaevel, Marie; Mohammed, Danahe; Bruyère, Céline; Alaimo, Laura; Lacour, Stéphanie P; Ris, Laurence; Gabriele, Sylvain

    2016-05-01

    The ability to construct easily in vitro networks of primary neurons organized with imposed topologies is required for neural tissue engineering as well as for the development of neuronal interfaces with desirable characteristics. However, accumulating evidence suggests that the mechanical properties of the culture matrix can modulate important neuronal functions such as growth, extension, branching and activity. Here we designed robust and reproducible laminin-polylysine grid micropatterns on cell culture substrates that have similar biochemical properties but a 100-fold difference in Young's modulus to investigate the role of the matrix rigidity on the formation and activity of cortical neuronal networks. We found that cell bodies of primary cortical neurons gradually accumulate in circular islands, whereas axonal extensions spread on linear tracks to connect circular islands. Our findings indicate that migration of cortical neurons is enhanced on soft substrates, leading to a faster formation of neuronal networks. Furthermore, the pre-synaptic density was two times higher on stiff substrates and consistently the number of action potentials and miniature synaptic currents was enhanced on stiff substrates. Taken together, our results provide compelling evidence to indicate that matrix stiffness is a key parameter to modulate the growth dynamics, synaptic density and electrophysiological activity of cortical neuronal networks, thus providing useful information on scaffold design for neural tissue engineering. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks

    PubMed Central

    Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto

    2014-01-01

    Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have been so far mainly studied at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The so defined comparable networks displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band and carried spectral information about the input that was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model. PMID:24634645
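    The core difference between the two synapse models is one line of the current equation. The schematic below assumes a generic excitatory reversal potential of 0 mV; parameter values are illustrative:

```python
def i_syn_cuba(g, s):
    """Current-based synapse: input current independent of membrane potential."""
    return g * s

def i_syn_coba(g, s, v, e_rev=0.0):
    """Conductance-based synapse: current scales with the driving force."""
    return g * s * (e_rev - v)
```

    In the conductance-based model the excitatory drive shrinks as the membrane depolarizes toward e_rev, so the effective input depends on network state; this voltage dependence is one source of the stronger stimulus modulation of correlations reported above.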

  1. Bifurcations of large networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-08-01

    Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
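    The flavor of such a mean-field reduction can be conveyed with a toy rate model with adaptation. This is a generic sketch using a threshold-linear gain, where the max(0, .) term makes the system switch between a silent and an active branch; it is not the paper's population-density-derived system:

```python
def mean_field(i_ext=2.0, a=1.0, tau_r=5.0, tau_w=100.0, dt=0.1, steps=20000):
    """Toy mean-field system: firing rate r with adaptation variable w.
    The threshold-linear gain max(0, i_ext - w) switches the dynamics
    between a silent branch (drive = 0) and an active branch."""
    r = w = 0.0
    for _ in range(steps):
        drive = max(0.0, i_ext - w)        # switching (piecewise) term
        r += dt * (drive - r) / tau_r      # fast rate dynamics
        w += dt * (a * r - w) / tau_w      # slow adaptation
    return r, w
```

    With these parameters the system settles on the active branch at the fixed point r* = i_ext / (1 + a), w* = a r*; stronger or slower adaptation can instead produce the relaxation-type bursting transitions the full networks display.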

  2. Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity

    PubMed Central

    Abbott, L. F.; Sompolinsky, Haim

    2017-01-01

    Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly. PMID:29042519
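    The balance condition is easy to illustrate numerically. In the toy sketch below the weights are fixed by hand so that mean excitation and inhibition cancel (n_e * j_e + n_i * j_i = 0); it does not implement the paper's plasticity rule or capacity analysis:

```python
import random

def balanced_inputs(n_e=800, n_i=200, j_e=0.05, j_i=-0.2, seed=0):
    """Total excitatory and inhibitory drive onto one neuron, with
    presynaptic rates drawn at random in [0, 1). Weights satisfy
    n_e * j_e + n_i * j_i = 0, so the mean drives cancel."""
    rng = random.Random(seed)
    rates = [rng.random() for _ in range(n_e + n_i)]
    e = sum(j_e * r for r in rates[:n_e])
    i = sum(j_i * r for r in rates[n_e:])
    return e, i
```

    Each component of the drive is large, but the net input is small and dominated by fluctuations, which is exactly the regime in which the paper finds robustness to output noise.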

  3. An artificial network model for estimating the network structure underlying partially observed neuronal signals.

    PubMed

    Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun

    2014-01-01

    Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
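    The modelling idea, recorded channels as observable units embedded in a larger recurrent network that also contains unobservable units, can be sketched in a few lines. This uses generic tanh units and says nothing about the authors' estimation procedure for the connective weights:

```python
import math

def rnn_step(state, w, n_obs):
    """One update of a recurrent network whose first n_obs units are
    'observable' (recorded channels) and the rest 'unobservable'.
    All units interact through the full weight matrix w."""
    new = [math.tanh(sum(w[i][j] * s for j, s in enumerate(state)))
           for i in range(len(state))]
    return new, new[:n_obs]    # full hidden state, observed readout
```

    Only the first n_obs entries are compared against data during fitting, while the remaining units are free to absorb the influence of unrecorded structures, which is the conceptual advantage the abstract argues for.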

  4. Single-Neuron NMDA Receptor Phenotype Influences Neuronal Rewiring and Reintegration following Traumatic Injury

    PubMed Central

    Patel, Tapan P.; Ventre, Scott C.; Geddes-Klein, Donna; Singh, Pallab K.

    2014-01-01

    Alterations in the activity of neural circuits are a common consequence of traumatic brain injury (TBI), but the relationship between single-neuron properties and the aggregate network behavior is not well understood. We recently reported that the GluN2B-containing NMDA receptors (NMDARs) are key in mediating mechanical forces during TBI, and that TBI produces a complex change in the functional connectivity of neuronal networks. Here, we evaluated whether cell-to-cell heterogeneity in the connectivity and aggregate contribution of GluN2B receptors to [Ca2+]i before injury influenced the functional rewiring, spontaneous activity, and network plasticity following injury using primary rat cortical dissociated neurons. We found that the functional connectivity of a neuron to its neighbors, combined with the relative influx of calcium through distinct NMDAR subtypes, together contributed to the individual neuronal response to trauma. Specifically, individual neurons whose [Ca2+]i oscillations were largely due to GluN2B NMDAR activation lost many of their functional targets 1 h following injury. In comparison, neurons with large GluN2A contribution or neurons with high functional connectivity both independently protected against injury-induced loss in connectivity. Mechanistically, we found that traumatic injury resulted in increased uncorrelated network activity, an effect linked to reduction of the voltage-sensitive Mg2+ block of GluN2B-containing NMDARs. This uncorrelated activation of GluN2B subtypes after injury significantly limited the potential for network remodeling in response to a plasticity stimulus. Together, our data suggest that two single-cell characteristics, the aggregate contribution of NMDAR subtypes and the number of functional connections, influence network structure following traumatic injury. PMID:24647941
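    The voltage-sensitive Mg2+ block mentioned here is commonly modelled with the Jahr and Stevens (1990) expression; a minimal version (v in mV, Mg2+ concentration in mM) is:

```python
import math

def mg_unblock(v, mg=1.0):
    """Fraction of NMDAR conductance not blocked by Mg2+ at membrane
    potential v (mV), using the widely used Jahr-Stevens (1990) form."""
    return 1.0 / (1.0 + mg / 3.57 * math.exp(-0.062 * v))
```

    Near rest the receptor is mostly blocked, and depolarization or a reduced effective block (as reported after injury) raises the open fraction, letting GluN2B-mediated calcium entry occur even during uncorrelated, weakly depolarizing activity.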

  5. Modelling Feedback Excitation, Pacemaker Properties and Sensory Switching of Electrically Coupled Brainstem Neurons Controlling Rhythmic Activity

    PubMed Central

    Hull, Michael J.; Soffe, Stephen R.; Willshaw, David J.; Roberts, Alan

    2016-01-01

    What cellular and network properties allow reliable neuronal rhythm generation or firing that can be started and stopped by brief synaptic inputs? We investigate rhythmic activity in an electrically-coupled population of brainstem neurons driving swimming locomotion in young frog tadpoles, and how activity is switched on and off by brief sensory stimulation. We build a computational model of 30 electrically-coupled conditional pacemaker neurons on one side of the tadpole hindbrain and spinal cord. Based on experimental estimates for neuron properties, population sizes, synapse strengths and connections, we show that: long-lasting, mutual, glutamatergic excitation between the neurons allows the network to sustain rhythmic pacemaker firing at swimming frequencies following brief synaptic excitation; activity persists but rhythm breaks down without electrical coupling; NMDA voltage-dependency doubles the range of synaptic feedback strengths generating sustained rhythm. The network can be switched on and off at short latency by brief synaptic excitation and inhibition. We demonstrate that a population of generic Hodgkin-Huxley type neurons coupled by glutamatergic excitatory feedback can generate sustained asynchronous firing switched on and off synaptically. We conclude that networks of neurons with NMDAR mediated feedback excitation can generate self-sustained activity following brief synaptic excitation. The frequency of activity is limited by the kinetics of the neuron membrane channels and can be stopped by brief inhibitory input. Network activity can be rhythmic at lower frequencies if the neurons are electrically coupled. Our key finding is that excitatory synaptic feedback within a population of neurons can produce switchable, stable, sustained firing without synaptic inhibition. PMID:26824331

  6. A Scalable Approach to Probabilistic Latent Space Inference of Large-Scale Networks

    PubMed Central

    Yin, Junming; Ho, Qirong; Xing, Eric P.

    2014-01-01

    We propose a scalable approach for making inference about latent spaces of large networks. With a succinct representation of networks as a bag of triangular motifs, a parsimonious statistical model, and an efficient stochastic variational inference algorithm, we are able to analyze real networks with over a million vertices and hundreds of latent roles on a single machine in a matter of hours, a setting that is out of reach for many existing methods. When compared to the state-of-the-art probabilistic approaches, our method is several orders of magnitude faster, with competitive or improved accuracy for latent space recovery and link prediction. PMID:25400487
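    The "bag of triangular motifs" representation rests on counting triangles. For an undirected simple graph this is trace(A^3)/6; the brute-force version below is a small illustrative implementation, not the paper's scalable pipeline:

```python
def count_triangles(adj):
    """Number of triangles in an undirected simple graph given as a 0/1
    adjacency matrix: each triangle contributes 6 closed 3-walks, so the
    count is trace(A^3) / 6, computed here by direct summation."""
    n = len(adj)
    walks = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                walks += adj[i][j] * adj[j][k] * adj[k][i]
    return walks // 6
```

    Production-scale motif counting replaces this O(n^3) loop with sparse matrix products or wedge sampling, but the quantity being summarized per vertex-triple is the same.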

  7. Simultaneous multi-patch-clamp and extracellular-array recordings: Single neuron reflects network activity.

    PubMed

    Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido

    2016-11-08

    The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present a flexible alternative to the approach of monitoring network activity through massive tiling of networks by large-scale electrode arrays.

  8. Channel Noise-Enhanced Synchronization Transitions Induced by Time Delay in Adaptive Neuronal Networks with Spike-Timing-Dependent Plasticity

    NASA Astrophysics Data System (ADS)

    Xie, Huijuan; Gong, Yubing; Wang, Baoying

    In this paper, we numerically study the effect of channel noise on synchronization transitions induced by time delay in adaptive scale-free Hodgkin-Huxley neuronal networks with spike-timing-dependent plasticity (STDP). It is found that synchronization transitions by time delay vary as channel noise intensity is changed and become most pronounced when channel noise intensity is optimal. This phenomenon depends on STDP and network average degree, and it can be either enhanced or suppressed as network average degree increases depending on channel noise intensity. These results show that there are optimal channel noise and network average degree that can enhance the synchronization transitions by time delay in the adaptive neuronal networks. These findings could help to better understand the regulatory effect of channel noise on the synchronization of neuronal networks and may have implications for information transmission in neural systems.

  9. Cooperation-Controlled Learning for Explicit Class Structure in Self-Organizing Maps

    PubMed Central

    Kamimura, Ryotaro

    2014-01-01

    We attempt to demonstrate the effectiveness of multiple points of view toward neural networks. By restricting ourselves to two points of view of a neuron, we propose a new type of information-theoretic method called “cooperation-controlled learning.” In this method, individual and collective neurons are distinguished from one another, and we suppose that the characteristics of individual and collective neurons are different. To implement individual and collective neurons, we prepare two networks, namely, cooperative and uncooperative networks. The roles of these networks and the roles of individual and collective neurons are controlled by the cooperation parameter. As the parameter is increased, the role of cooperative networks becomes more important in learning, and the characteristics of collective neurons become more dominant. On the other hand, when the parameter is small, individual neurons play a more important role. We applied the method to the automobile and housing data from the machine learning database and examined whether explicit class boundaries could be obtained. Experimental results showed that cooperation-controlled learning, in particular taking into account information on input units, could be used to produce clearer class structure than conventional self-organizing maps. PMID:25309950
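
    For reference, a single update step of a conventional one-dimensional SOM (the baseline this cooperation-controlled method is compared against; function name and parameters are illustrative): the best-matching unit and its neighbors move toward the input, weighted by a Gaussian neighborhood.

```python
import math

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One SOM update: weights is a list of per-unit weight vectors, x an input."""
    dists = [sum((w_i - x_i) ** 2 for w_i, x_i in zip(w, x)) for w in weights]
    bmu = min(range(len(weights)), key=dists.__getitem__)   # best-matching unit
    for j, w in enumerate(weights):
        # Gaussian neighborhood on the 1-D map index
        h = math.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
        weights[j] = [w_i + lr * h * (x_i - w_i) for w_i, x_i in zip(w, x)]
    return bmu
```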

  10. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
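
    The particle-filtering scheme the network is said to approximate can be sketched for a two-state hidden Markov model; the transition and emission matrices below are illustrative, not taken from the paper, and this is the generic bootstrap filter rather than the neural implementation:

```python
import random

# illustrative 2-state HMM: transition and emission (likelihood) probabilities
T = [[0.9, 0.1], [0.2, 0.8]]
E = [[0.8, 0.2], [0.3, 0.7]]

def particle_filter(observations, n_particles=1000, seed=0):
    """Bootstrap particle filter: propagate, weight by likelihood, resample."""
    rng = random.Random(seed)
    particles = [rng.randrange(2) for _ in range(n_particles)]
    for obs in observations:
        # propagate each particle through the transition model
        particles = [0 if rng.random() < T[p][0] else 1 for p in particles]
        # weight by the observation likelihood, then resample
        weights = [E[p][obs] for p in particles]
        total = sum(weights)
        particles = rng.choices(particles,
                                weights=[w / total for w in weights],
                                k=n_particles)
    # posterior over hidden states ~ fraction of particles in each state
    return [particles.count(s) / n_particles for s in (0, 1)]

posterior = particle_filter([0, 0, 0, 1])
```

Each particle plays the role the abstract assigns to a spike in a higher-layer neuron: a sample of a hidden world state, with the population histogram approximating the posterior.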

  11. Collective Dynamics for Heterogeneous Networks of Theta Neurons

    NASA Astrophysics Data System (ADS)

    Luke, Tanushree

    Collective behavior in neural networks has often been used as an indicator of communication between different brain areas. These collective synchronization and desynchronization patterns are also considered an important feature in understanding normal and abnormal brain function. To understand the emergence of these collective patterns, I create an analytic model that identifies all such macroscopic steady-states attainable by a network of Type-I neurons. This network, whose basic unit is the model "theta'' neuron, contains a mixture of excitable and spiking neurons coupled via a smooth pulse-like synapse. Applying the Ott-Antonsen reduction method in the thermodynamic limit, I obtain a low-dimensional evolution equation that describes the asymptotic dynamics of the macroscopic mean field of the network. This model can be used as the basis in understanding more complicated neuronal networks when additional dynamical features are included. From this reduced dynamical equation for the mean field, I show that the network exhibits three collective attracting steady-states. The first two are equilibrium states that both reflect partial synchronization in the network, whereas the third is a limit cycle in which the degree of network synchronization oscillates in time. In addition to a comprehensive identification of all possible attracting macro-states, this analytic model permits a complete bifurcation analysis of the collective behavior of the network with respect to three key network features: the degree of excitability of the neurons, the heterogeneity of the population, and the overall coupling strength. The network typically tends towards the two macroscopic equilibrium states when the neuron's intrinsic dynamics and the network interactions reinforce each other. In contrast, the limit cycle state, bifurcations, and multistability tend to occur when there is competition between these network features. 
    I also outline here an extension of the above model where the neurons' excitability now varies sinusoidally in time, thus simulating a parabolic bursting network. This time-varying excitability can lead to the emergence of macroscopic chaos and multistability in the collective behavior of the network. Finally, I expand the single population model described above to examine a two-population neuronal network where each population has its own unique mixture of excitable and spiking neurons, as well as its own coupling strength (either excitatory or inhibitory in nature). Specifically, I consider the situation where the first population is only allowed to influence the second population without any feedback, thus effectively creating a feed-forward "driver-response" system. In this special arrangement, the driver's asymptotic macroscopic dynamics are fully explored in the comprehensive analysis of the single population. Then, in the presence of an influence from the driver, the modified dynamics of the second population, which now acts as a response population, can also be fully analyzed. As in the time-varying model, these modifications give rise to richer dynamics in the response population than those found in the single population formalism, including multi-periodicity and chaos.
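
    The "theta" neuron underlying this analysis is the normal form of a Type-I (SNIC) oscillator, dθ/dt = (1 − cos θ) + (1 + cos θ)·η, where the sign of the excitability η separates the excitable and spiking regimes. A minimal single-neuron sketch (Euler integration, illustrative parameters; the Ott-Antonsen mean-field reduction itself is not reproduced):

```python
import math

def simulate_theta_neuron(eta, t_max=100.0, dt=0.001, theta0=0.0):
    """Integrate the theta neuron and count firing events (θ crossing π).
    η > 0 gives tonic spiking; η < 0 gives an excitable rest state."""
    theta, spikes = theta0, 0
    for _ in range(int(t_max / dt)):
        theta += dt * ((1 - math.cos(theta)) + (1 + math.cos(theta)) * eta)
        if theta > math.pi:     # spike: wrap the phase back by one cycle
            spikes += 1
            theta -= 2 * math.pi
    return spikes
```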

  12. Synchronization in a noise-driven developing neural network

    NASA Astrophysics Data System (ADS)

    Lin, I.-H.; Wu, R.-K.; Chen, C.-M.

    2011-11-01

    We use computer simulations to investigate the structural and dynamical properties of a developing neural network whose activity is driven by noise. Structurally, the constructed neural networks in our simulations exhibit the small-world properties that have been observed in several neural networks. The dynamical change of neuronal membrane potential is described by the Hodgkin-Huxley model, and two types of learning rules, including spike-timing-dependent plasticity (STDP) and inverse STDP, are considered to restructure the synaptic strength between neurons. Clustered synchronized firing (SF) of the network is observed when the network connectivity (number of connections/maximal connections) is about 0.75, in which the firing rate of neurons is only half of the network frequency. At the connectivity of 0.86, all neurons fire synchronously at the network frequency. The network SF frequency increases logarithmically with the culturing time of a growing network and decreases exponentially with the delay time in signal transmission. These conclusions are consistent with experimental observations. The phase diagrams of SF in a developing network are investigated for both learning rules.
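
    Both plasticity rules used here follow the standard pair-based exponential STDP window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, with inverse STDP flipping the sign. A minimal sketch (amplitudes and time constants are illustrative, not the paper's values):

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012     # update amplitudes (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                            # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)   # post before pre: depress

def inverse_stdp_dw(t_pre, t_post):
    """Inverse STDP simply flips the sign of the update."""
    return -stdp_dw(t_pre, t_post)
```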

  13. Hebbian based learning with winner-take-all for spiking neural networks

    NASA Astrophysics Data System (ADS)

    Gupta, Ankur; Long, Lyle

    2009-03-01

    Learning methods for spiking neural networks are not as well developed as those for traditional neural networks, which widely use back-propagation training. We propose and implement a Hebbian based learning method with winner-take-all competition for spiking neural networks. This approach is spike time dependent, which makes it naturally well suited for a network of spiking neurons. Homeostasis with Hebbian learning is implemented, which ensures stability and quicker learning. Homeostasis implies that the net sum of incoming weights associated with a neuron remains the same. Winner-take-all is also implemented for competitive learning between output neurons. We implemented this learning rule on a biologically based vision processing system that we are developing, using layers of leaky integrate and fire neurons. When presented with four bars (or Gabor filters) of different orientations, the network learns to recognize the bar orientations. After training, each output neuron learns to recognize a bar at a specific orientation and responds by firing more vigorously to that bar and less vigorously to others. These neurons are found to have bell-shaped tuning curves and are similar to the simple cells experimentally observed by Hubel and Wiesel in the striate cortex of cat and monkey.
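
    The combination of winner-take-all competition and homeostasis (a constant net sum of incoming weights, as defined in the record) can be sketched in rate form; the function and learning rate below are illustrative, and the spike-timing aspect is omitted for brevity:

```python
def wta_hebbian_step(weights, x, lr=0.1):
    """One winner-take-all Hebbian update with homeostatic renormalization.
    weights: list of per-output-neuron weight vectors; x: input pattern."""
    responses = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
    winner = max(range(len(weights)), key=responses.__getitem__)
    w = weights[winner]
    total = sum(w)                                       # homeostatic target
    w = [w_i + lr * x_i for w_i, x_i in zip(w, x)]       # Hebbian increase
    scale = total / sum(w)                               # keep net incoming weight fixed
    weights[winner] = [w_i * scale for w_i in w]
    return winner
```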

  14. Inference of neuronal network spike dynamics and topology from calcium imaging data

    PubMed Central

    Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof

    2013-01-01

    Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence (“spike trains”) from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties. PMID:24399936
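
    The peeling idea can be caricatured in a few lines: repeatedly find the largest fluorescence peak above a threshold, record a spike there, and subtract a stereotypical calcium-transient kernel. This is a toy greedy version under an exponential-kernel assumption, not the published algorithm, and all parameters are illustrative:

```python
import math

def kernel(n, amp=1.0, tau=10.0):
    """Stereotypical calcium transient: instantaneous rise, exponential decay."""
    return [amp * math.exp(-k / tau) for k in range(n)]

def peel(trace, threshold=0.5, tau=10.0):
    """Greedily subtract one transient per detected peak; return spike times."""
    trace = list(trace)
    spikes = []
    while True:
        t = max(range(len(trace)), key=trace.__getitem__)
        if trace[t] < threshold:
            break
        spikes.append(t)
        for k, g in enumerate(kernel(len(trace) - t, amp=trace[t], tau=tau)):
            trace[t + k] -= g
    return sorted(spikes)
```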

  15. Autaptic effects on synchrony of neurons coupled by electrical synapses

    NASA Astrophysics Data System (ADS)

    Kim, Youngtae

    2017-07-01

    In this paper, we numerically study the effects of a special synapse known as an autapse on the synchronization of a population of Morris-Lecar (ML) neurons coupled by electrical synapses. Several configurations of ML neuronal populations, such as a pair, a ring, or a globally coupled network with and without autapses, are examined. While most papers on autaptic effects on synchronization have used networks of neurons with the same spiking rate, we use networks of neurons with different spiking rates. We find that optimal autaptic coupling strength and autaptic time delay enhance synchronization in our neural networks. We use phase response curve analysis to explain the enhanced synchronization by autapses. Our findings reveal an important relationship between the intraneuronal feedback loop and the interneuronal coupling.

  16. Phase-locking and bistability in neuronal networks with synaptic depression

    NASA Astrophysics Data System (ADS)

    Akcay, Zeynep; Huang, Xinxian; Nadim, Farzan; Bose, Amitabha

    2018-02-01

    We consider a recurrent network of two oscillatory neurons that are coupled with inhibitory synapses. We use the phase response curves of the neurons and the properties of short-term synaptic depression to define Poincaré maps for the activity of the network. The fixed points of these maps correspond to phase-locked modes of the network. Using these maps, we analyze the conditions that allow short-term synaptic depression to lead to the existence of bistable phase-locked, periodic solutions. We show that bistability arises when either the phase response curve of the neuron or the short-term depression profile changes steeply enough. The results apply to any Type I oscillator and we illustrate our findings using the Quadratic Integrate-and-Fire and Morris-Lecar neuron models.
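
    Fixed points of such a Poincaré map correspond to phase-locked modes, and their stability follows from the map's slope. An illustrative one-dimensional phase map built from a sinusoidal PRC with coupling strength g (hypothetical, not the paper's depression-dependent map):

```python
import math

def phase_map(phi, g=0.3):
    """Next-cycle phase after a perturbation arriving at phase phi (mod 1)."""
    return (phi + g * math.sin(2 * math.pi * phi)) % 1.0

def find_locked_phase(phi0=0.21, n=500):
    """Iterate the map; for this g the fixed point phi=0.5 is stable
    (|map slope| = |1 - 2*pi*g| < 1), so iterates converge to it."""
    phi = phi0
    for _ in range(n):
        phi = phase_map(phi)
    return phi
```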

  17. An NV-Diamond Magnetic Imager for Neuroscience

    NASA Astrophysics Data System (ADS)

    Turner, Matthew; Schloss, Jennifer; Bauch, Erik; Hart, Connor; Walsworth, Ronald

    2017-04-01

    We present recent progress towards imaging time-varying magnetic fields from neurons using nitrogen-vacancy centers in diamond. The diamond neuron imager is noninvasive, label-free, and achieves single-cell resolution and state-of-the-art broadband sensitivity. By imaging magnetic fields from injected currents in mammalian neurons, we will map functional neuronal network connections and illuminate biophysical properties of neurons invisible to traditional electrophysiology. Furthermore, through enhancing magnetometer sensitivity, we aim to demonstrate real-time imaging of action potentials from networks of mammalian neurons.

  18. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
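
    The conjunctive feature detection described above can be sketched with a tiny feedforward net of the kind the record calls a network receptive field: hidden units act as nonlinear sub-receptive fields and their thresholded combination responds only when several features co-occur. All weights below are hand-picked illustrative values, not fitted quantities:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nrf_response(stimulus, sub_rfs, biases, out_w, out_bias):
    """stimulus: flattened spectrotemporal patch; sub_rfs: list of sub-field
    weight vectors; hidden units are combined nonlinearly at the output."""
    hidden = [sigmoid(sum(w * s for w, s in zip(rf, stimulus)) - b)
              for rf, b in zip(sub_rfs, biases)]
    return sigmoid(sum(w * h for w, h in zip(out_w, hidden)) - out_bias)

# two sub-fields wired as a conjunctive ("AND") feature detector
SUB_RFS, BIASES, OUT_W, OUT_B = [[10, 0], [0, 10]], [5, 5], [10, 10], 15
```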

  19. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647

  20. Excitement and synchronization of small-world neuronal networks with short-term synaptic plasticity.

    PubMed

    Han, Fang; Wiercigroch, Marian; Fang, Jian-An; Wang, Zhijie

    2011-10-01

    Excitement and synchronization of electrically and chemically coupled Newman-Watts (NW) small-world neuronal networks with short-term synaptic plasticity described by a modified Oja learning rule are investigated. For each type of neuronal network, the variation properties of synaptic weights are examined first. Then the effects of the learning rate, the coupling strength and the shortcut-adding probability on excitement and synchronization of the neuronal network are studied. It is shown that the synaptic learning suppresses over-excitement, and helps synchronization for the electrically coupled network but impairs synchronization for the chemically coupled one. Both the introduction of shortcuts and the increase of the coupling strength improve synchronization, and they are helpful in increasing the excitement of the chemically coupled network but have little effect on the excitement of the electrically coupled one.
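
    For reference, the classical Oja rule (this study uses a modified version, not reproduced here) combines Hebbian growth y·x with a decay term y²·w that keeps the weight norm bounded:

```python
def oja_step(w, x, lr=0.05):
    """One Oja update: dw = lr * (y*x - y^2 * w), where y is the output."""
    y = sum(w_i * x_i for w_i, x_i in zip(w, x))
    return [w_i + lr * (y * x_i - y * y * w_i) for w_i, x_i in zip(w, x)]
```

Repeated presentation of a fixed input drives the weight vector toward a unit vector aligned with the input, which is the self-normalizing behavior the decay term provides.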

  1. Convergent neuromodulation onto a network neuron can have divergent effects at the network level.

    PubMed

    Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan

    2016-04-01

    Different neuromodulators often target the same ion channel. When such modulators act on different neuron types, this convergent action can enable a rhythmic network to produce distinct outputs. Less clear are the functional consequences when two neuromodulators influence the same ion channel in the same neuron. We examine the consequences of this seeming redundancy using a mathematical model of the crab gastric mill (chewing) network. This network is activated in vitro by the projection neuron MCN1, which elicits a half-center bursting oscillation between the reciprocally-inhibitory neurons LG and Int1. We focus on two neuropeptides which modulate this network, including a MCN1 neurotransmitter and the hormone crustacean cardioactive peptide (CCAP). Both activate the same voltage-gated current (IMI) in the LG neuron. However, IMI-MCN1, resulting from MCN1 released neuropeptide, has phasic dynamics in its maximal conductance due to LG presynaptic inhibition of MCN1, while IMI-CCAP retains the same maximal conductance in both phases of the gastric mill rhythm. Separation of time scales allows us to produce a 2D model from which phase plane analysis shows that, as in the biological system, IMI-MCN1 and IMI-CCAP primarily influence the durations of opposing phases of this rhythm. Furthermore, IMI-MCN1 influences the rhythmic output in a manner similar to the Int1-to-LG synapse, whereas IMI-CCAP has an influence similar to the LG-to-Int1 synapse. These results show that distinct neuromodulators which target the same voltage-gated ion channel in the same network neuron can nevertheless produce distinct effects at the network level, providing divergent neuromodulator actions on network activity.

  2. Convergent neuromodulation onto a network neuron can have divergent effects at the network level

    PubMed Central

    Kintos, Nickolas; Nusbaum, Michael P.; Nadim, Farzan

    2016-01-01

    Different neuromodulators often target the same ion channel. When such modulators act on different neuron types, this convergent action can enable a rhythmic network to produce distinct outputs. Less clear are the functional consequences when two neuromodulators influence the same ion channel in the same neuron. We examine the consequences of this seeming redundancy using a mathematical model of the crab gastric mill (chewing) network. This network is activated in vitro by the projection neuron MCN1, which elicits a half-center bursting oscillation between the reciprocally-inhibitory neurons LG and Int1. We focus on two neuropeptides which modulate this network, including a MCN1 neurotransmitter and the hormone crustacean cardioactive peptide (CCAP). Both activate the same voltage-gated current (IMI) in the LG neuron. However, IMI-MCN1, resulting from MCN1 released neuropeptide, has phasic dynamics in its maximal conductance due to LG presynaptic inhibition of MCN1, while IMI-CCAP retains the same maximal conductance in both phases of the gastric mill rhythm. Separation of time scales allows us to produce a 2D model from which phase plane analysis shows that, as in the biological system, IMI-MCN1 and IMI-CCAP primarily influence the durations of opposing phases of this rhythm. Furthermore, IMI-MCN1 influences the rhythmic output in a manner similar to the Int1-to-LG synapse, whereas IMI-CCAP has an influence similar to the LG-to-Int1 synapse. These results show that distinct neuromodulators which target the same voltage-gated ion channel in the same network neuron can nevertheless produce distinct effects at the network level, providing divergent neuromodulator actions on network activity. PMID:26798029

  3. Adaptive Neural Output-Feedback Control for a Class of Nonlower Triangular Nonlinear Systems With Unmodeled Dynamics.

    PubMed

    Wang, Huanqing; Liu, Peter Xiaoping; Li, Shuai; Wang, Ding

    2017-08-29

    This paper presents the development of an adaptive neural controller for a class of nonlinear systems with unmodeled dynamics and immeasurable states. An observer is designed to estimate system states. The structure consistency of virtual control signals and the variable partition technique are combined to overcome the difficulties appearing in a nonlower triangular form. An adaptive neural output-feedback controller is developed based on the backstepping technique and the universal approximation property of the radial basis function (RBF) neural networks. By using the Lyapunov stability analysis, the semiglobal and uniform ultimate boundedness of all signals within the closed-loop system is guaranteed. The simulation results show that the controlled system converges quickly, and all the signals are bounded. This paper is novel in at least two aspects: 1) an output-feedback control strategy is developed for a class of nonlower triangular nonlinear systems with unmodeled dynamics and 2) the nonlinear disturbances and their bounds are functions of all states, which is a more general form than in existing results.

  4. Low Temperature Vacuum Synthesis of Triangular CoO Nanocrystal/Graphene Nanosheets Composites with Enhanced Lithium Storage Capacity

    PubMed Central

    Guan, Qun; Cheng, Jianli; Li, Xiaodong; Wang, Bin; Huang, Ling; Nie, Fude; Ni, Wei

    2015-01-01

    CoO nanocrystal/graphene nanosheets (GNS) composites, consisting of triangular CoO nanocrystals of 2~20 nm on the surface of GNS, are synthesized by a mild synthetic method. First, cobalt acetate tetrahydrate is recrystallized in an alcohol solution at low temperature. Then, graphene oxide is mixed with the cobalt precursor, followed by high-vacuum annealing to form the CoO nanocrystal/GNS composites. The CoO nanocrystal/GNS composites exhibit a high reversible capacity of 1481.9 mAh g−1 after 30 cycles with a high Coulombic efficiency of over 96% when used as anode materials for lithium ion batteries. The excellent electrochemical performance may be attributed to the special structure of the composites. The well-dispersed triangular CoO nanocrystals on the substrate of conductive graphene not only offer a shorter diffusion length for lithium ions, better stress accommodation during charge-discharge processes and more accessible active sites for lithium-ion storage and electrolyte wetting, but also form a good conductive network, which significantly improves the overall electrochemical performance. PMID:25961670

  5. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
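
    The optimization problem the NEF poses for decoders is linear least squares: minimize Σ_x (x − Σ_i d_i a_i(x))² over the decoders d. A tiny sketch with two rectified-linear "rate neurons" (illustrative tuning curves, not the paper's analytically determined scale-invariant decoders), solving the 2×2 normal equations Γd = υ directly:

```python
def rate(gain, bias, x):
    """Illustrative rectified-linear tuning curve."""
    return max(0.0, gain * x + bias)

neurons = [(1.0, 0.5), (-1.0, 0.5)]           # (gain, bias) pairs
xs = [i / 10.0 for i in range(-10, 11)]       # sample points in [-1, 1]
A = [[rate(g, b, x) for (g, b) in neurons] for x in xs]

# normal equations for decoding the identity: Gamma = A^T A, upsilon = A^T x
G = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) for j in range(2)]
     for i in range(2)]
u = [sum(A[k][i] * xs[k] for k in range(len(xs))) for i in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
d = [(G[1][1] * u[0] - G[0][1] * u[1]) / det,      # Cramer's rule
     (G[0][0] * u[1] - G[0][1] * u[0]) / det]

def decode(x):
    """Least-squares estimate of x from the two firing rates."""
    return sum(d_i * rate(g, b, x) for d_i, (g, b) in zip(d, neurons))
```

With N neurons the same construction uses an N×N Gram matrix, which is the large matrix inversion the analytic decoders of this paper avoid.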

  6. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    PubMed

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.

  7. A microfluidic platform for controlled biochemical stimulation of twin neuronal networks.

    PubMed

    Biffi, Emilia; Piraino, Francesco; Pedrocchi, Alessandra; Fiore, Gianfranco B; Ferrigno, Giancarlo; Redaelli, Alberto; Menegon, Andrea; Rasponi, Marco

    2012-06-01

    Spatially and temporally resolved delivery of soluble factors is a key feature for pharmacological applications. In this framework, microfluidics coupled to multisite electrophysiology offers great advantages in neuropharmacology and toxicology. In this work, a microfluidic device for biochemical stimulation of neuronal networks was developed. A micro-chamber for cell culturing, previously developed and tested for long term neuronal growth by our group, was provided with a thin wall, which partially divided the cell culture region into two sub-compartments. The device was reversibly coupled to a flat micro electrode array and used to culture primary neurons in the same microenvironment. We demonstrated that the two fluidically connected compartments gave rise to two parallel neuronal networks with similar electrophysiological activity but functionally independent of each other. Furthermore, the device allowed the outlet port to be connected to a syringe pump, transforming the static culture chamber into a perfused one. At 14 days in vitro, sub-networks were independently stimulated by continuous delivery of a test molecule, tetrodotoxin, a neurotoxin known to block action potentials. Electrical activity recordings proved the ability of the device configuration to selectively stimulate each neuronal network individually. The proposed microfluidic approach represents an innovative methodology for performing biological, pharmacological, and electrophysiological experiments on neuronal networks. Indeed, it allows for controlled delivery of substances to cells, and it overcomes the limitations of standard drug stimulation techniques. Finally, the twin network configuration reduces biological variability, which has important outcomes for pharmacological and drug screening studies.

  8. Effects of Morphology Constraint on Electrophysiological Properties of Cortical Neurons

    NASA Astrophysics Data System (ADS)

    Zhu, Geng; Du, Liping; Jin, Lei; Offenhäusser, Andreas

    2016-04-01

There is growing interest in engineering nerve cells in vitro to control the architecture and connectivity of cultured neuronal networks or to build neuronal networks with predictable computational function. Patterning technologies, such as micro-contact printing, have been developed to design ordered neuronal networks. However, the electrophysiological characteristics of single patterned neurons have not been reported. Here, micro-contact printing with high-resolution polyolefin polymer (POP) stamps was employed to grow cortical neurons in a designed structure. The results demonstrated that the morphology of patterned neurons was well constrained, with the number of dendrites reduced to about two. Our electrophysiological results showed that these alterations of dendritic morphology affected the firing patterns and excitability of the neurons. Although both patterned and un-patterned neurons fired regular spikes when stimulated by current, the dynamics and strength of their responses differed: un-patterned neurons exhibited a monotonically increasing firing frequency in response to injected current, whereas patterned neurons first exhibited a frequency increase and then a slow decrease. Our findings indicate that reducing the dendritic complexity of cortical neurons influences their electrophysiological characteristics and alters their information processing, which should be taken into account when designing neuronal circuitries.

  9. The transfer and transformation of collective network information in gene-matched networks.

    PubMed

    Kitsukawa, Takashi; Yagi, Takeshi

    2015-10-09

    Networks, such as the human society network, social and professional networks, and biological system networks, contain vast amounts of information. Information signals in networks are distributed over nodes and transmitted through intricately wired links, making the transfer and transformation of such information difficult to follow. Here we introduce a novel method for describing network information and its transfer using a model network, the Gene-matched network (GMN), in which nodes (neurons) possess attributes (genes). In the GMN, nodes are connected according to their expression of common genes. Because neurons have multiple genes, the GMN is cluster-rich. We show that, in the GMN, information transfer and transformation were controlled systematically, according to the activity level of the network. Furthermore, information transfer and transformation could be traced numerically with a vector using genes expressed in the activated neurons, the active-gene array, which was used to assess the relative activity among overlapping neuronal groups. Interestingly, this coding style closely resembles the cell-assembly neural coding theory. The method introduced here could be applied to many real-world networks, since many systems, including human society and various biological systems, can be represented as a network of this type.
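    The gene-matched construction described above (nodes carrying gene sets, an edge wherever two nodes share a gene, and an active-gene array summarizing which genes the active nodes express) can be sketched as follows; network size, gene count, and genes per node are illustrative assumptions, not the paper's values.

```python
import random
from itertools import combinations

def build_gmn(n_nodes=50, n_genes=20, genes_per_node=3, seed=0):
    """Gene-matched network: connect nodes (neurons) that share a gene."""
    rng = random.Random(seed)
    gene_sets = [frozenset(rng.sample(range(n_genes), genes_per_node))
                 for _ in range(n_nodes)]
    edges = {(i, j) for i, j in combinations(range(n_nodes), 2)
             if gene_sets[i] & gene_sets[j]}       # shared gene => link
    return gene_sets, edges

def active_gene_array(gene_sets, active_nodes, n_genes=20):
    """Count how often each gene appears among the currently active nodes."""
    counts = [0] * n_genes
    for i in active_nodes:
        for g in gene_sets[i]:
            counts[g] += 1
    return counts

gene_sets, edges = build_gmn()
vec = active_gene_array(gene_sets, range(10))   # trace activity of 10 nodes
```

    Because every node carries several genes, the resulting graph is cluster-rich, and the active-gene vector gives a numerical trace of which overlapping groups are currently active.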

  10. Population equations for degree-heterogenous neural networks

    NASA Astrophysics Data System (ADS)

    Kähne, M.; Sokolov, I. M.; Rüdiger, S.

    2017-11-01

    We develop a statistical framework for studying recurrent networks with broad distributions of the number of synaptic links per neuron. We treat each group of neurons with equal input degree as one population and derive a system of equations determining the population-averaged firing rates. The derivation rests on an assumption of a large number of neurons and, additionally, an assumption of a large number of synapses per neuron. For the case of binary neurons, analytical solutions can be constructed, which correspond to steps in the activity versus degree space. We apply this theory to networks with degree-correlated topology and show that complex, multi-stable regimes can result for increasing correlations. Our work is motivated by the recent finding of subnetworks of highly active neurons and the fact that these neurons tend to be connected to each other with higher probability.
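    A minimal fixed-point sketch of these population equations for binary neurons, treating each input-degree class as one population of an uncorrelated network; the degrees, degree distribution, coupling, and threshold below are illustrative assumptions.

```python
import numpy as np

def degree_population_rates(degrees, p_k, J=1.0, theta=5.0, iters=200):
    """Self-consistent activity per input-degree class for binary neurons.

    Each degree class k is one population; its activity is a step function
    of the mean recurrent input J*k*<m>, where <m> is the edge-weighted
    mean activity over all classes (uncorrelated network assumption)."""
    degrees = np.asarray(degrees, float)
    p_k = np.asarray(p_k, float)
    mean_k = np.sum(p_k * degrees)
    m = np.ones_like(degrees)               # start from the all-active state
    for _ in range(iters):
        m_edge = np.sum(p_k * degrees * m) / mean_k
        m = (J * degrees * m_edge >= theta).astype(float)  # step nonlinearity
    return m

degrees = np.array([2, 5, 10, 20])
p_k = np.array([0.4, 0.3, 0.2, 0.1])
rates = degree_population_rates(degrees, p_k)
```

    The converged solution is a step in activity versus degree: only the high-degree classes remain active, matching the step-like solutions described for binary neurons.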

  11. Multistability, local pattern formation, and global collective firing in a small-world network of nonleaky integrate-and-fire neurons.

    PubMed

    Rothkegel, Alexander; Lehnertz, Klaus

    2009-03-01

    We investigate numerically the collective dynamical behavior of pulse-coupled nonleaky integrate-and-fire neurons that are arranged on a two-dimensional small-world network. To ensure ongoing activity, we impose a probability for spontaneous firing for each neuron. We study network dynamics evolving from different sets of initial conditions in dependence on coupling strength and rewiring probability. Besides a homogeneous equilibrium state for low coupling strength, we observe different local patterns including cyclic waves, spiral waves, and turbulentlike patterns, which-depending on network parameters-interfere with the global collective firing of the neurons. We attribute the various network dynamics to distinct regimes in the parameter space. For the same network parameters different network dynamics can be observed depending on the set of initial conditions only. Such a multistable behavior and the interplay between local pattern formation and global collective firing may be attributable to the spatiotemporal dynamics of biological networks.
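    A toy version of such a pulse-coupled nonleaky integrate-and-fire network can be sketched as follows. For brevity it uses a rewired ring rather than the paper's two-dimensional lattice, and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate(n=20, p_rewire=0.1, coupling=0.3, p_spont=0.01,
             steps=200, seed=1):
    """Pulse-coupled nonleaky integrate-and-fire neurons on a ring with
    small-world rewiring. Nonleaky: the potential only ever grows, so
    ongoing activity is ensured by a spontaneous-firing probability."""
    rng = np.random.default_rng(seed)
    # nearest-neighbour ring; each out-link rewired with probability p_rewire
    targets = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
    for i in range(n):
        for k in range(2):
            if rng.random() < p_rewire:
                targets[i][k] = int(rng.integers(n))
    v = rng.random(n)
    spike_count = 0
    for _ in range(steps):
        v += (rng.random(n) < p_spont)        # spontaneous firing drive
        fired = np.flatnonzero(v >= 1.0)
        while fired.size:                     # propagate pulses (avalanche)
            spike_count += fired.size
            v[fired] = 0.0
            for i in fired:
                for j in targets[i]:
                    v[j] += coupling
            fired = np.flatnonzero(v >= 1.0)
    return spike_count
```

    Each avalanche terminates because every spike removes at least one unit of potential while injecting only 2 * coupling; sweeping `coupling` and `p_rewire` probes the regimes of local pattern formation versus global collective firing.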

  12. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    PubMed

    Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T

    2015-12-01

Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
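    A fixed linear combination of this kind can be sketched as a one-line transform of recorded synaptic current traces, a delayed AMPA term minus a weighted GABA term. The 6 ms delay and 1.65 weight below are recalled approximations of the reported proxy and should be treated as assumptions, as should the function signature.

```python
import numpy as np

def lfp_proxy(i_ampa, i_gaba, dt_ms=0.1, delay_ms=6.0, alpha=1.65):
    """Weighted-sum LFP proxy from point-neuron LIF synaptic currents:
        LFP(t) ~ I_AMPA(t - delay) - alpha * I_GABA(t)
    The specific delay and weight are illustrative assumptions here."""
    shift = int(round(delay_ms / dt_ms))
    ampa_delayed = np.concatenate(
        [np.zeros(shift), np.asarray(i_ampa)[:len(i_ampa) - shift]])
    return ampa_delayed - alpha * np.asarray(i_gaba)
```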

  13. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    PubMed Central

    Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T.

    2015-01-01

Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo. PMID:26657024

  14. Analysis of connectivity map: Control to glutamate injured and phenobarbital treated neuronal network

    NASA Astrophysics Data System (ADS)

    Kamal, Hassan; Kanhirodan, Rajan; Srinivas, Kalyan V.; Sikdar, Sujit K.

    2010-04-01

We study the responses of a cultured neuronal network exposed to epileptogenic glutamate injury and subsequently treated with phenobarbital, by constructing a connectivity map of neurons from the correlation matrix. This study is particularly useful for understanding drug-induced changes in neuronal network properties, with insights into changes at the systems-biology level.
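    A connectivity map of this kind can be sketched directly from binned spike trains; the correlation threshold below is an illustrative assumption.

```python
import numpy as np

def connectivity_map(spike_trains, threshold=0.5):
    """Functional connectivity from the correlation matrix of binned
    spike trains (neurons x time bins): an edge is drawn wherever the
    pairwise Pearson correlation exceeds a chosen threshold."""
    c = np.corrcoef(spike_trains)
    np.fill_diagonal(c, 0.0)            # ignore self-correlation
    return (np.abs(c) > threshold).astype(int)
```

    Comparing the maps built from control, glutamate-injured, and drug-treated recordings then exposes how the drug reshapes network connectivity.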

  15. Dynamics of social balance on networks

    NASA Astrophysics Data System (ADS)

    Antal, T.; Krapivsky, P. L.; Redner, S.

    2005-09-01

We study the evolution of social networks that contain both friendly and unfriendly pairwise links between individual nodes. The network is endowed with dynamics in which the sense of a link in an imbalanced triad—a triangular loop with one or three unfriendly links—is reversed to make the triad balanced. With this dynamics, an infinite network undergoes a dynamic phase transition from a steady state to “paradise”—all links are friendly—as the propensity p for friendly links in an update event passes through 1/2. A finite network always falls into a socially balanced absorbing state where no imbalanced triads remain. If the additional constraint that the number of imbalanced triads in the network not increase in an update is imposed, then the network quickly reaches a balanced final state.
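    The local triad dynamics can be sketched as follows (network size, propensity, and step cap are illustrative assumptions). With p = 1 every update converts an unfriendly link into a friendly one, so the number of unfriendly links decreases monotonically and the sketch provably reaches a balanced absorbing state.

```python
import random
from itertools import combinations

def local_triad_dynamics(n=8, p=1.0, steps=500, seed=3):
    """Local triad dynamics: repeatedly pick an imbalanced triad (odd
    number of unfriendly links) and reverse one of its links to balance
    it, choosing the friendly flip with propensity p. A finite network
    falls into a balanced absorbing state."""
    rng = random.Random(seed)
    s = {frozenset(e): rng.choice([1, -1]) for e in combinations(range(n), 2)}

    def triad_edges(t):
        a, b, c = t
        return [frozenset((a, b)), frozenset((b, c)), frozenset((a, c))]

    triads = list(combinations(range(n), 3))
    for _ in range(steps):
        imbalanced = [t for t in triads
                      if s[triad_edges(t)[0]] * s[triad_edges(t)[1]]
                      * s[triad_edges(t)[2]] < 0]
        if not imbalanced:
            break                               # absorbing balanced state
        edges = triad_edges(rng.choice(imbalanced))
        unfriendly = [e for e in edges if s[e] < 0]
        if len(unfriendly) == 3 or rng.random() < p:
            s[rng.choice(unfriendly)] = 1       # unfriendly -> friendly
        else:
            friendly = [e for e in edges if s[e] > 0]
            s[rng.choice(friendly)] = -1        # friendly -> unfriendly
    return s

signs = local_triad_dynamics()
```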

  16. The formation mechanism of defects, spiral wave in the network of neurons.

    PubMed

    Wu, Xinyi; Ma, Jun

    2013-01-01

A regular network of neurons is constructed using the Morris-Lecar (ML) neuron model with ion channels taken into account, and the potential mechanism of spiral wave formation is investigated in detail. Several spiral waves are initiated by breaking a target wave with artificial defects and/or partial blocking (poisoning) of ion channels. Furthermore, possible conditions for spiral wave formation and the effect of partial channel blocking are discussed thoroughly. Our results are summarized as follows. 1) The emergence of a target wave depends on the diversity of the transmembrane currents, which is mapped from the external forcing current; this diversity is associated with spatial heterogeneity in the medium. 2) A distinct spiral wave can be induced to occupy the network when the target wave is broken by partially blocking the ion channels of a fraction of neurons (a local poisoned area), and these spiral waves are similar to those induced by artificial defects; it is thus confirmed that partial channel blocking of some neurons in the network can play the same role in breaking a target wave as artificial defects. 3) Channel noise and additive Gaussian white noise are also considered, and it is confirmed that spiral waves are likewise induced in the network in the presence of noise. Based on these results, we conclude that appropriate poisoning of the ion channels of neurons in the network acts as 'defects' in the evolution of the spatiotemporal pattern and accounts for the emergence of spiral waves in the network of neurons. These results may help in understanding the potential cause of the formation and development of spiral waves in the cortex of a neuronal system.
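    A single Morris-Lecar unit with a conductance-scaling block factor, the ingredient used here to poison ion channels, can be sketched as follows; the parameter set is a standard illustrative ML configuration, not necessarily the one used in the paper.

```python
import math

def ml_step(v, w, dt=0.05, i_ext=80.0, block=1.0):
    """One Euler step of the Morris-Lecar neuron. 'block' in [0, 1]
    scales the Ca2+ conductance to mimic partial channel blocking
    (poisoning); block=1 means unblocked channels."""
    g_ca, g_k, g_l = 4.4 * block, 8.0, 2.0      # max conductances
    v_ca, v_k, v_l, c = 120.0, -84.0, -60.0, 20.0
    m_inf = 0.5 * (1.0 + math.tanh((v + 1.2) / 18.0))   # fast Ca activation
    w_inf = 0.5 * (1.0 + math.tanh((v - 2.0) / 30.0))   # K activation
    tau_w = 1.0 / (0.04 * math.cosh((v - 2.0) / 60.0))
    dv = (i_ext - g_ca * m_inf * (v - v_ca) - g_k * w * (v - v_k)
          - g_l * (v - v_l)) / c
    dw = (w_inf - w) / tau_w
    return v + dt * dv, w + dt * dw
```

    In a network sketch, setting block < 1 for a patch of units plays the role of the local poisoned area that breaks a target wave.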

  17. The Formation Mechanism of Defects, Spiral Wave in the Network of Neurons

    PubMed Central

    Wu, Xinyi; Ma, Jun

    2013-01-01

A regular network of neurons is constructed using the Morris-Lecar (ML) neuron model with ion channels taken into account, and the potential mechanism of spiral wave formation is investigated in detail. Several spiral waves are initiated by breaking a target wave with artificial defects and/or partial blocking (poisoning) of ion channels. Furthermore, possible conditions for spiral wave formation and the effect of partial channel blocking are discussed thoroughly. Our results are summarized as follows. 1) The emergence of a target wave depends on the diversity of the transmembrane currents, which is mapped from the external forcing current; this diversity is associated with spatial heterogeneity in the medium. 2) A distinct spiral wave can be induced to occupy the network when the target wave is broken by partially blocking the ion channels of a fraction of neurons (a local poisoned area), and these spiral waves are similar to those induced by artificial defects; it is thus confirmed that partial channel blocking of some neurons in the network can play the same role in breaking a target wave as artificial defects. 3) Channel noise and additive Gaussian white noise are also considered, and it is confirmed that spiral waves are likewise induced in the network in the presence of noise. Based on these results, we conclude that appropriate poisoning of the ion channels of neurons in the network acts as ‘defects’ in the evolution of the spatiotemporal pattern and accounts for the emergence of spiral waves in the network of neurons. These results may help in understanding the potential cause of the formation and development of spiral waves in the cortex of a neuronal system. PMID:23383179

  18. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. 
For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
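    The integrating factor of element (i) can be sketched for a single conductance-based integrate-and-fire neuron: treating the conductance as constant over a time-step yields an update that remains stable even in stiff, high-conductance states. All parameter values are illustrative assumptions.

```python
import math

def exp_euler_step(v, g_e, dt, g_l=0.05, e_l=-65.0, e_e=0.0, tau_e=5.0):
    """One integrating-factor (exponential Euler) step for a
    conductance-based IF neuron, dv/dt = -g_l*(v - e_l) - g_e*(v - e_e).
    With g_e frozen over the step, the linear ODE is solved exactly,
    so the update cannot blow up however stiff the equations are."""
    g_tot = g_l + g_e
    v_inf = (g_l * e_l + g_e * e_e) / g_tot          # asymptotic voltage
    v_new = v_inf + (v - v_inf) * math.exp(-g_tot * dt)
    g_e_new = g_e * math.exp(-dt / tau_e)            # synaptic decay
    return v_new, g_e_new
```

    For very large time-steps the voltage simply relaxes to its asymptotic value instead of diverging, which is what permits the statistically accurate large-step regime described above.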

  19. Effects of Calcium Spikes in the Layer 5 Pyramidal Neuron on Coincidence Detection and Activity Propagation

    PubMed Central

    Chua, Yansong; Morrison, Abigail

    2016-01-01

    The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. 
Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal to noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics on the single neuron and network level. PMID:27499740

  20. Effects of Calcium Spikes in the Layer 5 Pyramidal Neuron on Coincidence Detection and Activity Propagation.

    PubMed

    Chua, Yansong; Morrison, Abigail

    2016-01-01

    The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. 
Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal to noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics on the single neuron and network level.

  1. Combined exposure to simulated microgravity and acute or chronic radiation reduces neuronal network integrity and cell survival

    NASA Astrophysics Data System (ADS)

    Benotmane, Rafi

During orbital or interplanetary space flights, astronauts are exposed to cosmic radiation and microgravity. This study aimed to assess the effect of these combined conditions on neuronal network density, cell morphology and survival, using well-connected mouse cortical neuron cultures. To this end, neurons were exposed to acute low and high doses of low-LET (X-ray) radiation or to chronic, low-dose-rate, high-LET neutron irradiation (californium-252), under the simulated microgravity generated by the Random Positioning Machine (RPM, Dutch Space). High-content image analysis of cortical neurons positive for the neuronal marker βIII-tubulin revealed reduced neuronal network integrity and connectivity and altered cell morphology after exposure to acute/chronic radiation or to simulated microgravity. Additionally, in both conditions, a defect in DNA-repair efficiency was revealed by an increased number of γH2AX-positive foci, as well as an increased number of Annexin V-positive apoptotic neurons. Of interest, when combining both simulated space conditions, we noted a synergistic effect on neuronal network density, neuronal morphology, cell survival and DNA repair. These observations agree with preliminary gene expression data revealing modulation of cytoskeletal and apoptosis-related genes after exposure to simulated microgravity. In conclusion, the observed in vitro changes in neuronal network integrity and cell survival induced by simulated space conditions provide mechanistic understanding for evaluating health risks and developing countermeasures to prevent neurological disorders in astronauts during long-term space travel. Acknowledgements: This work is supported in part by the EU-FP7 project CEREBRAD (n° 295552)

  2. Multiple mechanisms switch an electrically coupled, synaptically inhibited neuron between competing rhythmic oscillators.

    PubMed

    Gutierrez, Gabrielle J; O'Leary, Timothy; Marder, Eve

    2013-03-06

    Rhythmic oscillations are common features of nervous systems. One of the fundamental questions posed by these rhythms is how individual neurons or groups of neurons are recruited into different network oscillations. We modeled competing fast and slow oscillators connected to a hub neuron with electrical and inhibitory synapses. We explore the patterns of coordination shown in the network as a function of the electrical coupling and inhibitory synapse strengths with the help of a novel visualization method that we call the "parameterscape." The hub neuron can be switched between the fast and slow oscillators by multiple network mechanisms, indicating that a given change in network state can be achieved by degenerate cellular mechanisms. These results have importance for interpreting experiments employing optogenetic, genetic, and pharmacological manipulations to understand circuit dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Development and function of human cerebral cortex neural networks from pluripotent stem cells in vitro

    PubMed Central

    Kirwan, Peter; Turner-Bridger, Benita; Peter, Manuel; Momoh, Ayiba; Arambepola, Devika; Robinson, Hugh P. C.; Livesey, Frederick J.

    2015-01-01

    A key aspect of nervous system development, including that of the cerebral cortex, is the formation of higher-order neural networks. Developing neural networks undergo several phases with distinct activity patterns in vivo, which are thought to prune and fine-tune network connectivity. We report here that human pluripotent stem cell (hPSC)-derived cerebral cortex neurons form large-scale networks that reflect those found in the developing cerebral cortex in vivo. Synchronised oscillatory networks develop in a highly stereotyped pattern over several weeks in culture. An initial phase of increasing frequency of oscillations is followed by a phase of decreasing frequency, before giving rise to non-synchronous, ordered activity patterns. hPSC-derived cortical neural networks are excitatory, driven by activation of AMPA- and NMDA-type glutamate receptors, and can undergo NMDA-receptor-mediated plasticity. Investigating single neuron connectivity within PSC-derived cultures, using rabies-based trans-synaptic tracing, we found two broad classes of neuronal connectivity: most neurons have small numbers (<10) of presynaptic inputs, whereas a small set of hub-like neurons have large numbers of synaptic connections (>40). These data demonstrate that the formation of hPSC-derived cortical networks mimics in vivo cortical network development and function, demonstrating the utility of in vitro systems for mechanistic studies of human forebrain neural network biology. PMID:26395144

  4. Development and function of human cerebral cortex neural networks from pluripotent stem cells in vitro.

    PubMed

    Kirwan, Peter; Turner-Bridger, Benita; Peter, Manuel; Momoh, Ayiba; Arambepola, Devika; Robinson, Hugh P C; Livesey, Frederick J

    2015-09-15

    A key aspect of nervous system development, including that of the cerebral cortex, is the formation of higher-order neural networks. Developing neural networks undergo several phases with distinct activity patterns in vivo, which are thought to prune and fine-tune network connectivity. We report here that human pluripotent stem cell (hPSC)-derived cerebral cortex neurons form large-scale networks that reflect those found in the developing cerebral cortex in vivo. Synchronised oscillatory networks develop in a highly stereotyped pattern over several weeks in culture. An initial phase of increasing frequency of oscillations is followed by a phase of decreasing frequency, before giving rise to non-synchronous, ordered activity patterns. hPSC-derived cortical neural networks are excitatory, driven by activation of AMPA- and NMDA-type glutamate receptors, and can undergo NMDA-receptor-mediated plasticity. Investigating single neuron connectivity within PSC-derived cultures, using rabies-based trans-synaptic tracing, we found two broad classes of neuronal connectivity: most neurons have small numbers (<10) of presynaptic inputs, whereas a small set of hub-like neurons have large numbers of synaptic connections (>40). These data demonstrate that the formation of hPSC-derived cortical networks mimics in vivo cortical network development and function, demonstrating the utility of in vitro systems for mechanistic studies of human forebrain neural network biology. © 2015. Published by The Company of Biologists Ltd.

  5. Macroscopic self-oscillations and aging transition in a network of synaptically coupled quadratic integrate-and-fire neurons.

    PubMed

    Ratas, Irmantas; Pyragas, Kestutis

    2016-09-01

    We analyze the dynamics of a large network of coupled quadratic integrate-and-fire neurons, which represent the canonical model for class I neurons near the spiking threshold. The network is heterogeneous in that it includes both inherently spiking and excitable neurons. The coupling is global via synapses that take into account the finite width of synaptic pulses. Using a recently developed reduction method based on the Lorentzian ansatz, we derive a closed system of equations for the neuron's firing rate and the mean membrane potential, which are exact in the infinite-size limit. The bifurcation analysis of the reduced equations reveals a rich scenario of asymptotic behavior, the most interesting of which is the macroscopic limit-cycle oscillations. It is shown that the finite width of synaptic pulses is a necessary condition for the existence of such oscillations. The robustness of the oscillations against aging damage, which transforms spiking neurons into nonspiking neurons, is analyzed. The validity of the reduced equations is confirmed by comparing their solutions with the solutions of microscopic equations for the finite-size networks.
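    In the zero-pulse-width limit, the reduced equations take the Lorentzian-ansatz form of Montbrió, Pazó, and Roxin, sketched below; the finite pulse width that the paper shows to be necessary for macroscopic oscillations would add a synaptic variable on top of this. Parameter values are illustrative assumptions.

```python
import numpy as np

def mpr_rhs(state, delta=1.0, eta_bar=-5.0, j=15.0):
    """Exact mean-field ('Lorentzian ansatz') equations for globally
    coupled QIF neurons in the instantaneous-synapse limit:
        dr/dt = Delta/pi + 2*r*v
        dv/dt = v**2 + eta_bar + j*r - (pi*r)**2
    r: population firing rate, v: mean membrane potential."""
    r, v = state
    drdt = delta / np.pi + 2.0 * r * v
    dvdt = v * v + eta_bar + j * r - (np.pi * r) ** 2
    return np.array([drdt, dvdt])

def integrate(state, dt=1e-3, steps=20000):
    for _ in range(steps):
        state = state + dt * mpr_rhs(state)   # simple Euler integration
    return state
```

    With these parameters the trajectory settles onto a stable high-activity focus; replacing the instantaneous coupling by finite-width pulses is what opens the door to the macroscopic limit-cycle oscillations analyzed in the paper.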

  6. Optimal Control-Based Adaptive NN Design for a Class of Nonlinear Discrete-Time Block-Triangular Systems.

    PubMed

    Liu, Yan-Jun; Tong, Shaocheng

    2016-11-01

In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems have a block-triangular multi-input-multi-output pure-feedback structure, i.e., every equation of each subsystem includes both state and input couplings as well as nonaffine functions. The design objective is to provide a control scheme that not only guarantees the stability of the systems but also achieves optimal control performance. The main contribution of this paper is that optimal performance is achieved for such a class of systems for the first time. Owing to the interactions among subsystems, constructing an optimal control signal is a difficult task. The design ideas are as follows: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function are approximated by an action network and a critic network, respectively; and 3) an optimal control signal is constructed, with weight update rules designed using a gradient descent method. The stability of the systems can be proved using the difference Lyapunov method. Finally, a numerical simulation illustrates the performance of the proposed scheme.

  7. The Dynamics of Networks of Identical Theta Neurons.

    PubMed

    Laing, Carlo R

    2018-02-05

    We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe/Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally coupled oscillators. As well as the degeneracy associated with the constants of motion of the WS ansatz, we also find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.
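
    A single theta neuron obeys dθ/dt = 1 − cos θ + (1 + cos θ)(η + I). The following sketch simulates a small all-to-all network of identical theta neurons with a smooth synaptic pulse; the pulse shape, parameter values, and network size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# All-to-all network of N identical theta neurons:
#   theta_j' = 1 - cos(theta_j) + (1 + cos(theta_j)) * (eta + k*s)
# where s is the population mean of a smooth pulse proportional to
# (1 - cos(theta))**2, normalized so the pulse integrates to 2*pi over
# one cycle.  All parameter values here are illustrative assumptions.

def simulate(N=50, eta=0.2, k=1.0, dt=1e-3, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, N)
    for _ in range(steps):
        s = (2.0 / 3.0) * np.mean((1.0 - np.cos(theta))**2)
        dtheta = 1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * (eta + k * s)
        theta = np.mod(theta + dt * dtheta + np.pi, 2.0 * np.pi) - np.pi
    return abs(np.mean(np.exp(1j * theta)))   # Kuramoto order parameter

R = simulate()
```

    The returned order parameter R lies in [0, 1]; tracking it over time distinguishes splay, partially synchronized, and fully synchronized states of the kind the paper analyzes.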

  8. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing paradigm based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deep quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQN and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is applied to time series prediction. Two application studies are presented in this paper: chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn

    2014-09-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that stochastic resonance occurs irrespective of whether the synaptic connectivity is fixed or adaptive. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for fixed coupling when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of weak external signals in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of weak periodic signals peaks.
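
    The pair-based STDP rule referred to above is conventionally written as two exponential windows. A sketch with illustrative parameter values (the names a_plus, a_minus, tau_plus, tau_minus are assumptions); a_minus > a_plus puts it in the dominant-depression regime the abstract identifies as most efficient:

```python
import math

# Pair-based STDP: potentiation when the presynaptic spike precedes the
# postsynaptic one, depression otherwise.  tau_plus/tau_minus sets the
# temporal window ratio; all values below are illustrative assumptions.

def stdp_dw(dt_spk, a_plus=0.05, a_minus=0.06, tau_plus=20.0, tau_minus=20.0):
    """Weight change for spike-time difference dt_spk = t_post - t_pre (ms)."""
    if dt_spk > 0:       # pre before post -> potentiation
        return a_plus * math.exp(-dt_spk / tau_plus)
    if dt_spk < 0:       # post before pre -> depression
        return -a_minus * math.exp(dt_spk / tau_minus)
    return 0.0
```

    Applied after every pre/post spike pairing, this rule slowly reshapes the coupling strengths of the network during the resonance experiments described above.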

  10. Local and global synchronization transitions induced by time delays in small-world neuronal networks with chemical synapses.

    PubMed

    Yu, Haitao; Wang, Jiang; Du, Jiwei; Deng, Bin; Wei, Xile

    2015-02-01

    Effects of time delay on the local and global synchronization in small-world neuronal networks with chemical synapses are investigated in this paper. Numerical results show that, for both excitatory and inhibitory coupling types, the information transmission delay can always induce synchronization transitions of spiking neurons in small-world networks. In particular, regions of in-phase and out-of-phase synchronization of connected neurons emerge intermittently as the synaptic delay increases. For excitatory coupling, all transitions to spiking synchronization occur approximately at integer multiples of the firing period of individual neurons, while for inhibitory coupling, these transitions appear at odd multiples of half the firing period. More importantly, the local synchronization transition is more profound than the global one, depending on the type of coupling synapse. For excitatory synapses, the local in-phase synchronization observed for some values of the delay also occurs at the global scale, while for inhibitory synapses this synchronization, observed at the local scale, disappears at the global scale. Furthermore, the small-world structure can also affect the phase synchronization of neuronal networks. It is demonstrated that increasing the rewiring probability can always improve the global synchronization of neuronal activity, but has little effect on the local synchronization of neighboring neurons.
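
    In-phase versus out-of-phase synchronization of the kind described above is commonly quantified by assigning each neuron a phase that advances by 2π between consecutive spikes and computing the Kuramoto order parameter R(t). A generic sketch of that measure (not the paper's code):

```python
import numpy as np

# Phase synchronization from spike trains: phi(t) is linearly
# interpolated so it increases by 2*pi between consecutive spikes;
# R(t) = |mean_j exp(i*phi_j(t))| equals 1 for in-phase spiking and is
# near 0 for out-of-phase or asynchronous spiking.

def spike_phases(spike_times, t_grid):
    return np.interp(t_grid, spike_times,
                     2.0 * np.pi * np.arange(len(spike_times)))

def order_parameter(all_spike_times, t_grid):
    phases = np.array([spike_phases(st, t_grid) for st in all_spike_times])
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

t = np.linspace(1.0, 9.0, 100)
R_in = order_parameter([[0, 2, 4, 6, 8, 10], [0, 2, 4, 6, 8, 10]], t)    # in-phase
R_anti = order_parameter([[0, 2, 4, 6, 8, 10], [1, 3, 5, 7, 9, 11]], t)  # anti-phase
```

    Two identical trains give R = 1 everywhere; two trains offset by half a period give R = 0, the out-of-phase extreme discussed in the abstract.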

  11. One-to-one neuron-electrode interfacing.

    PubMed

    Greenbaum, Alon; Anava, Sarit; Ayali, Amir; Shein, Mark; David-Pur, Moshe; Ben-Jacob, Eshel; Hanein, Yael

    2009-09-15

    The question of neuronal network development and organization is a principal one, closely related to aspects of neuronal and network form-function interactions. In-vitro two-dimensional neuronal cultures have proved to be an attractive and successful model for the study of these questions. Research is constrained, however, by the search for techniques for culturing stable networks whose electrical activity can be reliably and consistently monitored. A simple approach to forming small interconnected neuronal circuits while achieving one-to-one neuron-electrode interfacing is presented. Locust neurons were cultured on a novel bio-chip consisting of carbon-nanotube multi-electrode arrays. The cells self-organized to position themselves in close proximity to the bio-chip electrodes. The organization of the cells on the electrodes was analyzed using time-lapse microscopy, fluorescence imaging and scanning electron microscopy. Electrical recordings from well-identified cells are presented and discussed. The unique properties of the bio-chip and the specific neuron-nanotube interactions, together with the use of relatively large insect ganglion cells, allowed long-term stabilization (as long as 10 days) of predefined neural network topology as well as high-fidelity electrical recording of individual neuron firing. This novel preparation opens ample opportunity for future investigation into key neurobiological questions and principles.

  12. Rich-Club Organization in Effective Connectivity among Cortical Neurons.

    PubMed

    Nigam, Sunny; Shimono, Masanori; Ito, Shinya; Yeh, Fang-Chin; Timme, Nicholas; Myroshnychenko, Maxym; Lapish, Christopher C; Tosi, Zachary; Hottowy, Pawel; Smith, Wesley C; Masmanidis, Sotiris C; Litke, Alan M; Sporns, Olaf; Beggs, John M

    2016-01-20

    The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a "rich club." We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. Significance statement: Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. 
This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases. Copyright © 2016 Nigam et al.
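
    Transfer entropy, the directed measure used in this study, can be estimated for binary spike trains with a simple plug-in (histogram) estimator at lag 1. The following is a simplified generic sketch, not the authors' validated implementation, and all variable names are illustrative:

```python
import random
from collections import Counter
from math import log2

# TE(Y -> X) at lag 1 for binary sequences:
#   TE = sum over (x1, x0, y0) of p(x1, x0, y0)
#        * log2[ p(x1 | x0, y0) / p(x1 | x0) ]

def transfer_entropy(x, y):
    """Transfer entropy from y to x, in bits (plug-in estimate)."""
    n = len(x) - 1
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_sy = Counter(zip(x[:-1], y[:-1]))   # (x0, y0)
    pairs_tt = Counter(zip(x[1:], x[:-1]))    # (x1, x0)
    singles = Counter(x[:-1])
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_sy[(x0, y0)]
        p_cond_self = pairs_tt[(x1, x0)] / singles[x0]
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

random.seed(1)
y = [random.randint(0, 1) for _ in range(5000)]
x = [0] + y[:-1]                  # x copies y with a one-step delay
te_yx = transfer_entropy(x, y)    # ~1 bit: information flows y -> x
te_xy = transfer_entropy(y, x)    # ~0 bits in the reverse direction
```

    The asymmetry between te_yx and te_xy is what makes the measure directional, allowing the kind of outgoing/incoming information comparison that defines the rich club.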

  13. Rich-Club Organization in Effective Connectivity among Cortical Neurons

    PubMed Central

    Nigam, Sunny; Shimono, Masanori; Ito, Shinya; Yeh, Fang-Chin; Timme, Nicholas; Myroshnychenko, Maxym; Lapish, Christopher C.; Tosi, Zachary; Hottowy, Pawel; Smith, Wesley C.; Masmanidis, Sotiris C.; Litke, Alan M.; Sporns, Olaf; Beggs, John M.

    2016-01-01

    The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a “rich club.” We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. SIGNIFICANCE STATEMENT Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. 
This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases. PMID:26791200

  14. Simultaneous multi-patch-clamp and extracellular-array recordings: Single neuron reflects network activity

    NASA Astrophysics Data System (ADS)

    Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido

    2016-11-01

    The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present a flexible alternative to the approach of monitoring network activity by massively tiling networks with large-scale arrays of electrodes.

  15. Simultaneous multi-patch-clamp and extracellular-array recordings: Single neuron reflects network activity

    PubMed Central

    Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido

    2016-01-01

    The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present a flexible alternative to the approach of monitoring network activity by massively tiling networks with large-scale arrays of electrodes. PMID:27824075

  16. Developing neuronal networks: Self-organized criticality predicts the future

    NASA Astrophysics Data System (ADS)

    Pu, Jiangbo; Gong, Hui; Li, Xiangning; Luo, Qingming

    2013-01-01

    Self-organized criticality emerging in neural activity is one of the key concepts used to describe the formation and function of developing neuronal networks. The relationship between critical dynamics and neural development is both theoretically and experimentally appealing. However, whereas it is well known that cortical networks exhibit a rich repertoire of activity patterns at different stages during in vitro maturation, the dynamical activity patterns across the entire course of neural development remain unclear. Here we show that a series of metastable network states emerged in the developing and "aging" process of hippocampal networks cultured from dissociated rat neurons. The unidirectional sequence of state transitions could be observed only in networks showing power-law scaling of distributed neuronal avalanches. Our data suggest that self-organized criticality may guide spontaneous activity into a sequential succession of homeostatically regulated transient patterns during development, which may help to predict the trajectory of neural development at early stages.
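
    The power-law scaling of neuronal avalanches mentioned above is conventionally assessed by binning the population's spikes in time: an avalanche is a run of consecutive non-empty bins bounded by empty bins, and its size is the run's total spike count. A minimal sketch of this standard procedure (assumed here, following the usual Beggs-Plenz definition rather than this paper's exact pipeline):

```python
# An avalanche is a maximal run of non-empty time bins; its size is the
# total number of spikes in the run.  Power-law scaling would then be
# assessed from the histogram of these sizes on log-log axes.

def avalanche_sizes(binned_counts):
    sizes, current = [], 0
    for c in binned_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# toy binned spike counts containing avalanches of size 6 and 2
sizes = avalanche_sizes([0, 1, 3, 2, 0, 0, 2, 0])
```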

  17. Cell Assembly Dynamics of Sparsely-Connected Inhibitory Networks: A Simple Model for the Collective Activity of Striatal Projection Neurons.

    PubMed

    Angulo-Garcia, David; Berke, Joshua D; Torcini, Alessandro

    2016-02-01

    Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of leaky integrate-and-fire neurons can reproduce some key features of striatal population activity, as observed in brain slices. In particular, we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell-assembly switching that effectively discriminates similar inputs. Our results support the proposal that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.

  18. A patterned recombinant human IgM guides neurite outgrowth of CNS neurons

    PubMed Central

    Xu, Xiaohua; Wittenberg, Nathan J.; Jordan, Luke R.; Kumar, Shailabh; Watzlawik, Jens O.; Warrington, Arthur E.; Oh, Sang-Hyun; Rodriguez, Moses

    2013-01-01

    Matrix molecules convey biochemical and physical guiding signals to neurons in the central nervous system (CNS) and shape the trajectory of neuronal fibers that constitute neural networks. We have developed recombinant human IgMs that bind to epitopes on neural cells, with the aim of treating neurological diseases. Here we test the hypothesis that recombinant human IgMs (rHIgM) can guide neurite outgrowth of CNS neurons. Microcontact printing was employed to pattern rHIgM12 and rHIgM22, antibodies bioengineered to have variable regions capable of binding to neurons or oligodendrocytes, respectively. rHIgM12 promoted neuronal attachment and guided outgrowth of neurites from hippocampal neurons. Processes from spinal neurons followed grid patterns of rHIgM12 and formed a physical network. Comparison between rHIgM12 and rHIgM22 suggested that the biochemistry that facilitates anchoring to neuronal surfaces is a prerequisite for the function of the IgM, and that spatial properties cooperate in guiding the assembly of neuronal networks. PMID:23881231

  19. Where the thoughts dwell: the physiology of neuronal-glial "diffuse neural net".

    PubMed

    Verkhratsky, Alexei; Parpura, Vladimir; Rodríguez, José J

    2011-01-07

    The mechanisms by which thoughts are produced by the exceedingly complex cellular networks that constitute the human brain pose the most challenging problem of the natural sciences. Our understanding of brain function is very much shaped by the neuronal doctrine, which assumes that neuronal networks represent the only substrate for cognition. These neuronal networks, however, are embedded in a much larger and probably more complex network formed by neuroglia. The latter, although electrically silent, employ many different mechanisms for intercellular signalling. It appears that astrocytes can control synaptic networks, and in such a capacity they may represent an integral component of the computational power of the brain rather than merely brain "connective tissue". The fundamental question of whether neuroglia are involved in cognition and information processing remains open, however. Indeed, the remarkable increase in the number of glial cells that distinguishes the human brain could simply be a result of the exceedingly high specialisation of the neuronal networks, which delegated all matters of survival and maintenance to the neuroglia. At the same time, the potential power of analogue processing offered by internally connected glial networks may represent an alternative mechanism involved in cognition. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Layer-specific optogenetic activation of pyramidal neurons causes beta–gamma entrainment of neonatal networks

    PubMed Central

    Bitzenhofer, Sebastian H; Ahlbeck, Joachim; Wolff, Amy; Wiegert, J. Simon; Gee, Christine E.; Oertner, Thomas G.; Hanganu-Opatz, Ileana L.

    2017-01-01

    Coordinated activity patterns in the developing brain may contribute to the wiring of neuronal circuits underlying future behavioural requirements. However, causal evidence for this hypothesis has been difficult to obtain owing to the absence of tools for selective manipulation of oscillations during early development. We established a protocol that combines optogenetics with electrophysiological recordings from neonatal mice in vivo to elucidate the substrate of early network oscillations in the prefrontal cortex. We show that light-induced activation of layer II/III pyramidal neurons, transfected by in utero electroporation with a high-efficiency channelrhodopsin, drives frequency-specific spiking and boosts network oscillations within the beta–gamma frequency range. By contrast, activation of layer V/VI pyramidal neurons causes nonspecific network activation. Thus, entrainment of neonatal prefrontal networks in fast rhythms relies on the activation of layer II/III pyramidal neurons. The approach used here may be useful for further interrogation of developing circuits and their behavioural readout. PMID:28216627

  1. Collective behavior of large-scale neural networks with GPU acceleration.

    PubMed

    Qu, Jingyi; Wang, Rubin

    2017-12-01

    In this paper, the collective behaviors of a small-world neuronal network motivated by the anatomy of a mammalian cortex are studied using both the Izhikevich model and the Rulkov model. The Izhikevich model can reproduce the rich behaviors of biological neurons while requiring only two equations and one nonlinear term. The Rulkov model is formulated as difference equations that generate a sequence of membrane potential samples at discrete moments in time, which improves computational efficiency. Both models are therefore suitable for the construction of large-scale neural networks. By varying key parameters, such as the connection probability and the number of nearest neighbors of each node, the coupled neurons exhibit various temporal and spatial characteristics. It is demonstrated that the GPU implementation achieves increasing acceleration over the CPU as the number of neurons and iterations grows. These two small-world network models and GPU acceleration provide a new opportunity to reproduce real biological networks containing large numbers of neurons.
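
    For reference, the Izhikevich model's two equations and single nonlinear term are sketched below with the regular-spiking parameters from Izhikevich's 2003 paper; the forward-Euler step size and constant input current are illustrative choices:

```python
# Izhikevich's simple model:
#   v' = 0.04 v^2 + 5 v + 140 - u + I
#   u' = a (b v - u)
#   if v >= 30 mV: v <- c, u <- u + d
# Regular-spiking parameters: a=0.02, b=0.2, c=-65, d=8.

def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
               dt=0.25, t_max=1000.0):
    v, u = -65.0, b * -65.0
    spike_times, t = [], 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: record and reset
            spike_times.append(t)
            v, u = c, u + d
        t += dt
    return spike_times

spike_times = izhikevich()
```

    With a constant suprathreshold current the regular-spiking cell fires tonically; other firing classes are obtained by changing only (a, b, c, d), which is what makes the model cheap enough for the large networks discussed above.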

  2. A mixed-signal implementation of a polychronous spiking neural network with delay adaptation

    PubMed Central

    Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André

    2014-01-01

    We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits. PMID:24672422

  3. A mixed-signal implementation of a polychronous spiking neural network with delay adaptation.

    PubMed

    Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan C; van Schaik, André

    2014-01-01

    We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits.

  4. Topographic expression of active faults in the foothills of the Northern Apennines

    NASA Astrophysics Data System (ADS)

    Picotti, Vincenzo; Ponza, Alessio; Pazzaglia, Frank J.

    2009-09-01

    Active faults that rupture the earth's surface leave an imprint on the topography that is recognized using a combination of geomorphic and geologic metrics including triangular facets, the shape of mountain fronts, the drainage network, and incised river valleys with inset terraces. We document the presence of a network of active, high-angle extensional faults, collectively embedded in the actively shortening mountain front of the Northern Apennines, that possess unique geomorphic expressions. We measure the strain rate for these structures and find that they have a constant throw-to-length ratio. We demonstrate the necessary and sufficient conditions for triangular facet development in the footwalls of these faults and argue that rock-type exerts the strongest control. The slip rates of these faults range from 0.1 to 0.3 mm/yr, which is similar to the average rate of river incision and mountain front unroofing determined by corollary studies. The faults are a near-surface manifestation of deeper crustal processes that are actively uplifting rocks and growing topography at a rate commensurate with surface processes that are eroding the mountain front to base level.

  5. Functional model of biological neural networks.

    PubMed

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature, and they provide logically coherent answers to many long-standing neuroscientific questions. However, biological justification of these functional models and their processing operations is required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  6. The immunoglobulin-like genetic predetermination of the brain: the protocadherins, blueprint of the neuronal network

    NASA Astrophysics Data System (ADS)

    Hilschmann, N.; Barnikol, H. U.; Barnikol-Watanabe, S.; Götz, H.; Kratzin, H.; Thinnes, F. P.

    2001-01-01

    The morphogenesis of the brain is governed by synaptogenesis. Synaptogenesis in turn is determined by cell adhesion molecules, which bridge the synaptic cleft and, by homophilic contact, decide which neurons are connected and which are not. Because of their enormous diversification in specificities, protocadherins (pcdhα, pcdhβ, pcdhγ), a new class of cadherins, play a decisive role. Surprisingly, the genetic control of the protocadherins is very similar to that of the immunoglobulins. There are three sets of variable (V) genes followed by a corresponding constant (C) gene. Applying the rules of the immunoglobulin genes to the protocadherin genes leads, despite this similarity, to quite different results in the central nervous system. The lymphocyte expresses one single receptor molecule specifically directed against an outside stimulus. In contrast, there are three specific recognition sites in each neuron, each expressing a different protocadherin. In this way, 4,950 different neurons arising from one stem cell form a neuronal network, in which homophilic contacts can be formed in 52 layers, permitting an enormous number of different connections and restraints between neurons. This network is one module of the central computer of the brain. Since the V-genes are generated during evolution and V-gene translocation occurs during embryogenesis, outside stimuli have no influence on this network. The network is an inborn property of the protocadherin genes. Every circuit produced, as well as learning and memory, has to be based on this genetically predetermined network. This network is so universal that it can cope with everything, even the unexpected. In this respect the neuronal network resembles the recognition sites of the immunoglobulins.

  7. Role of Ongoing, Intrinsic Activity of Neuronal Populations for Quantitative Neuroimaging of Functional Magnetic Resonance Imaging–Based Networks

    PubMed Central

    Herman, Peter; Sanganahalli, Basavaraju G.; Coman, Daniel; Blumenfeld, Hal; Rothman, Douglas L.

    2011-01-01

    A primary objective in neuroscience is to determine how neuronal populations process information within networks. In humans and animal models, functional magnetic resonance imaging (fMRI) is gaining increasing popularity for network mapping. Although neuroimaging with fMRI—conducted with or without tasks—is actively discovering new brain networks, current fMRI data analysis schemes disregard the importance of the total neuronal activity in a region. In task fMRI experiments, the baseline is differenced away to disclose areas of small evoked changes in the blood oxygenation level-dependent (BOLD) signal. In resting-state fMRI experiments, the spotlight is on regions revealed by correlations of tiny fluctuations in the baseline (or spontaneous) BOLD signal. Interpretation of fMRI-based networks is further obscured because the BOLD signal only indirectly reflects neuronal activity, and difference/correlation maps are thresholded. Since the small changes of BOLD signal typically observed in cognitive fMRI experiments represent a minimal fraction of the total energy/activity in a given area, the relevance of fMRI-based networks is uncertain, because the majority of neuronal energy/activity is ignored. An alternative for quantitative neuroimaging of fMRI-based networks is therefore a perspective in which the activity of a neuronal population is accounted for by the demanded oxidative energy (CMRO2). In this article, we argue that network mapping can be improved by including information about both the baseline neuronal energy/activity and the small differences/fluctuations of the BOLD signal. Total energy/activity information can be obtained through the use of calibrated fMRI to quantify differences of ΔCMRO2 and through resting-state positron emission tomography/magnetic resonance spectroscopy measurements for average CMRO2. PMID:22433047

  8. Synchronization behaviors of coupled neurons under electromagnetic radiation

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Wu, Fuqiang; Wang, Chunni

    2017-01-01

Based on an improved neuronal model, in which the effect of magnetic flux on the fluctuation and change of ion concentrations in cells is considered, the transition of synchronization is investigated by imposing external electromagnetic radiation on coupled neurons and on networks, respectively. It is found that the degree of synchronization depends on the coupling intensity and on the intensity of the external electromagnetic radiation. An appropriate intensity of electromagnetic radiation can induce intermittent synchronization, while a stronger intensity can induce disorder in the coupled neurons and in the network. Neurons show rhythmic synchronization in their electrical activities as the coupling intensity is increased under electromagnetic radiation, and spatial patterns can form in the network at smaller values of the synchronization factor.

  9. Representation of Non-Spatial and Spatial Information in the Lateral Entorhinal Cortex

    PubMed Central

    Deshmukh, Sachin S.; Knierim, James J.

    2011-01-01

    Some theories of memory propose that the hippocampus integrates the individual items and events of experience within a contextual or spatial framework. The hippocampus receives cortical input from two major pathways: the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). During exploration in an open field, the firing fields of MEC grid cells form a periodically repeating, triangular array. In contrast, LEC neurons show little spatial selectivity, and it has been proposed that the LEC may provide non-spatial input to the hippocampus. Here, we recorded MEC and LEC neurons while rats explored an open field that contained discrete objects. LEC cells fired selectively at locations relative to the objects, whereas MEC cells were weakly influenced by the objects. These results provide the first direct demonstration of a double dissociation between LEC and MEC inputs to the hippocampus under conditions of exploration typically used to study hippocampal place cells. PMID:22065409

  10. Entorhinal cortex receptive fields are modulated by spatial attention, even without movement

    PubMed Central

    König, Peter; König, Seth; Buffalo, Elizabeth A

    2018-01-01

Grid cells in the entorhinal cortex allow for the precise decoding of position in space. Along with potentially playing an important role in navigation, grid cells have recently been hypothesized to make a general contribution to mental operations. A prerequisite for this hypothesis is that grid cell activity does not critically depend on physical movement. Here, we show that movement of covert attention, without any physical movement, also elicits spatial receptive fields with a triangular tiling of space. In monkeys trained to maintain central fixation while covertly attending to a stimulus moving in the periphery, we identified a significant population (20/141, 14% of neurons at an FDR < 5%) of entorhinal cells with spatially structured receptive fields. This contrasts with recordings obtained in the hippocampus, where grid-like representations were not observed. Our results provide evidence that spatially structured responses in macaque entorhinal cortex do not depend on physical movement. PMID:29537964
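Reporting a "significant population at an FDR < 5%" implies a multiple-comparisons correction across all recorded cells. A minimal sketch of the standard Benjamini-Hochberg step-up procedure (whether this exact procedure was used in the study is an assumption):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return the indices of tests deemed significant at FDR level q
    using the Benjamini-Hochberg step-up procedure: find the largest
    rank k with p_(k) <= q*k/m and accept the k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank
    return set(order[:k_max])

# Six hypothetical per-neuron p-values; the two smallest survive
sig = benjamini_hochberg([0.001, 0.009, 0.04, 0.2, 0.5, 0.9])
```

Note that 0.04 is rejected here even though it is below 0.05, because the step-up threshold at rank 3 is 0.025.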

  11. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  12. Emergent properties of interacting populations of spiking neurons.

    PubMed

    Cardanobile, Stefano; Rotter, Stefan

    2011-01-01

Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known from their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
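The population-level rate equations described here, similar in structure to Lotka-Volterra systems, can be sketched by simple Euler integration. The coupling matrix, baseline rates, and time step below are illustrative assumptions, not values from the paper:

```python
def simulate_lv_rates(b, W, r0, dt=0.001, steps=20000):
    """Euler-integrate Lotka-Volterra-style population rate equations
    dr_i/dt = r_i * (b_i + sum_j W[i][j] * r_j), clipping rates at 0."""
    r = list(r0)
    n = len(r)
    for _ in range(steps):
        dr = [r[i] * (b[i] + sum(W[i][j] * r[j] for j in range(n)))
              for i in range(n)]
        r = [max(0.0, r[i] + dt * dr[i]) for i in range(n)]
    return r

# Two populations: excitation drives inhibition, both self-limit.
# Fixed point of these illustrative parameters: r = (1.2, 0.8).
b = [1.0, -0.2]
W = [[-0.5, -0.5],   # E: self-limitation and inhibition from I
     [0.5, -0.5]]    # I: excitation from E, self-limitation
rates = simulate_lv_rates(b, W, [0.5, 0.5])
```

With these parameters the system has a stable interior fixed point, so the integrated rates settle near (1.2, 0.8) rather than oscillating indefinitely.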

  13. Caged Neuron MEA: A system for long-term investigation of cultured neural network connectivity

    PubMed Central

    Erickson, Jonathan; Tooker, Angela; Tai, Y-C.; Pine, Jerome

    2008-01-01

    Traditional techniques for investigating cultured neural networks, such as the patch clamp and multi-electrode array, are limited by: 1) the number of identified cells which can be simultaneously electrically contacted, 2) the length of time for which cells can be studied, and 3) the lack of one-to-one neuron-to-electrode specificity. Here, we present a new device—the caged neuron multi-electrode array—which overcomes these limitations. This micro-machined device consists of an array of neurocages which mechanically trap a neuron near an extracellular electrode. While the cell body is trapped, the axon and dendrites can freely grow into the surrounding area to form a network. The electrode is bi-directional, capable of both stimulating and recording action potentials. This system is non-invasive, so that all constituent neurons of a network can be studied over its lifetime with stable one-to-one neuron-to-electrode correspondence. Proof-of-concept experiments are described to illustrate that functional networks form in a neurochip system of 16 cages in a 4×4 array, and that suprathreshold connectivity can be fully mapped over several weeks. The neurochip opens a new domain in neurobiology for studying small cultured neural networks. PMID:18775453

  14. Emergent Properties of Interacting Populations of Spiking Neurons

    PubMed Central

    Cardanobile, Stefano; Rotter, Stefan

    2011-01-01

Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known from their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations. PMID:22207844

  15. Axon and dendrite geography predict the specificity of synaptic connections in a functioning spinal cord network.

    PubMed

    Li, Wen-Chang; Cooke, Tom; Sautois, Bart; Soffe, Stephen R; Borisyuk, Roman; Roberts, Alan

    2007-09-10

    How specific are the synaptic connections formed as neuronal networks develop and can simple rules account for the formation of functioning circuits? These questions are assessed in the spinal circuits controlling swimming in hatchling frog tadpoles. This is possible because detailed information is now available on the identity and synaptic connections of the main types of neuron. The probabilities of synapses between 7 types of identified spinal neuron were measured directly by making electrical recordings from 500 pairs of neurons. For the same neuron types, the dorso-ventral distributions of axons and dendrites were measured and then used to calculate the probabilities that axons would encounter particular dendrites and so potentially form synaptic connections. Surprisingly, synapses were found between all types of neuron but contact probabilities could be predicted simply by the anatomical overlap of their axons and dendrites. These results suggested that synapse formation may not require axons to recognise specific, correct dendrites. To test the plausibility of simpler hypotheses, we first made computational models that were able to generate longitudinal axon growth paths and reproduce the axon distribution patterns and synaptic contact probabilities found in the spinal cord. To test if probabilistic rules could produce functioning spinal networks, we then made realistic computational models of spinal cord neurons, giving them established cell-specific properties and connecting them into networks using the contact probabilities we had determined. A majority of these networks produced robust swimming activity. Simple factors such as morphogen gradients controlling dorso-ventral soma, dendrite and axon positions may sufficiently constrain the synaptic connections made between different types of neuron as the spinal cord first develops and allow functional networks to form. 
Our analysis implies that detailed cellular recognition between spinal neuron types may not be necessary for the reliable formation of functional networks to generate early behaviour like swimming.
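The calculation described above, turning dorso-ventral distributions of axons and dendrites into probabilities that they encounter one another, can be sketched as an overlap sum over binned density profiles. The profiles and scaling constant below are invented for illustration and are not the measured distributions from the paper:

```python
def contact_probability(axon_density, dendrite_density, k=1.0):
    """Estimate the probability that an axon encounters a dendrite from
    the overlap of their dorso-ventral density profiles (each binned and
    summed to 1). Multiplying the overlap sum by the bin count makes two
    uniform profiles give exactly k, so k is the assumed baseline
    contact probability for complete overlap."""
    overlap = sum(a * d for a, d in zip(axon_density, dendrite_density))
    return min(1.0, k * overlap * len(axon_density))

# Ten dorso-ventral bins; axons biased ventrally, dendrites mid-cord
axon = [0.00, 0.02, 0.05, 0.08, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]
dend = [0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.05, 0.03, 0.02, 0.00]
p = contact_probability(axon, dend)
```

Non-overlapping profiles yield zero contact probability, matching the paper's observation that anatomical overlap alone predicts which pairs connect.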

  16. Emergent gamma synchrony in all-to-all interneuronal networks.

    PubMed

    Ratnadurai-Giridharan, Shivakeshavan; Khargonekar, Pramod P; Talathi, Sachin S

    2015-01-01

We investigate the emergence of in-phase synchronization in a heterogeneous network of coupled inhibitory interneurons in the presence of spike-timing-dependent plasticity (STDP). Using a simple network of two mutually coupled interneurons (2-MCI), we first study the effects of STDP on in-phase synchronization. We demonstrate that, with STDP, the 2-MCI network can evolve to either a state of stable 1:1 in-phase synchronization or exhibit multiple regimes of higher order synchronization states. We show that the emergence of synchronization induces a structural asymmetry in the 2-MCI network such that the synapses onto the high frequency firing neurons are potentiated, while those onto the low frequency firing neurons are de-potentiated, resulting in the directed flow of information from low frequency firing neurons to high frequency firing neurons. Finally, we demonstrate that the principal findings from our analysis of the 2-MCI network contribute to the emergence of robust synchronization in the Wang-Buzsáki network (Wang and Buzsáki, 1996) of all-to-all coupled inhibitory interneurons (100-MCI) for a significantly larger range of heterogeneity in the intrinsic firing rate of the neurons in the network. We conclude that STDP of inhibitory synapses provides a viable mechanism for robust neural synchronization.
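A pair-based STDP kernel of the general kind studied here can be sketched as follows; the amplitudes and time constant are illustrative, and the paper's actual inhibitory STDP rule may differ in shape:

```python
import math

def stdp_dw(dt_ms, a_plus=0.005, a_minus=0.005, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt_ms = t_post - t_pre: exponentially decaying potentiation when
    the presynaptic spike precedes the postsynaptic one, depression
    otherwise. Amplitudes and time constant are illustrative."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)
    return -a_minus * math.exp(dt_ms / tau)

# Pre fires 5 ms before post -> potentiation; 5 ms after -> depression
dw_pot = stdp_dw(5.0)
dw_dep = stdp_dw(-5.0)
```

Applied repeatedly to the two mutual synapses of a 2-MCI pair with different intrinsic rates, such a rule potentiates the synapse onto the faster-firing neuron and depresses the reverse one, which is the structural asymmetry the abstract describes.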

  17. Emergent gamma synchrony in all-to-all interneuronal networks

    PubMed Central

    Ratnadurai-Giridharan, Shivakeshavan; Khargonekar, Pramod P.; Talathi, Sachin S.

    2015-01-01

We investigate the emergence of in-phase synchronization in a heterogeneous network of coupled inhibitory interneurons in the presence of spike-timing-dependent plasticity (STDP). Using a simple network of two mutually coupled interneurons (2-MCI), we first study the effects of STDP on in-phase synchronization. We demonstrate that, with STDP, the 2-MCI network can evolve to either a state of stable 1:1 in-phase synchronization or exhibit multiple regimes of higher order synchronization states. We show that the emergence of synchronization induces a structural asymmetry in the 2-MCI network such that the synapses onto the high frequency firing neurons are potentiated, while those onto the low frequency firing neurons are de-potentiated, resulting in the directed flow of information from low frequency firing neurons to high frequency firing neurons. Finally, we demonstrate that the principal findings from our analysis of the 2-MCI network contribute to the emergence of robust synchronization in the Wang-Buzsáki network (Wang and Buzsáki, 1996) of all-to-all coupled inhibitory interneurons (100-MCI) for a significantly larger range of heterogeneity in the intrinsic firing rate of the neurons in the network. We conclude that STDP of inhibitory synapses provides a viable mechanism for robust neural synchronization. PMID:26528174

  18. Dynamics of neuromodulatory feedback determines frequency modulation in a reduced respiratory network: a computational study.

    PubMed

    Toporikova, Natalia; Butera, Robert J

    2013-02-01

Neuromodulators, such as amines and neuropeptides, alter the activity of neurons and neuronal networks. In this work, we investigate how neuromodulators, which activate Gq-protein second messenger systems, can modulate the bursting frequency of neurons in a critical portion of the respiratory neural network, the pre-Bötzinger complex (preBötC). These neurons are a vital part of the ponto-medullary neuronal network, which generates a stable respiratory rhythm whose frequency is regulated by neuromodulator release from the nearby Raphe nucleus. Using a simulated 50-cell network of excitatory preBötC neurons with a heterogeneous distribution of persistent sodium conductance and Ca2+, we determined conditions for frequency modulation in such a network by simulating the interaction between the Raphe and preBötC nuclei. We found that the positive feedback between Raphe excitability and preBötC activity induces frequency modulation in the preBötC neurons. In addition, the frequency of the respiratory rhythm can be regulated via phasic release of excitatory neuromodulators from the Raphe nucleus. We predict that the application of a Gq antagonist will eliminate this frequency modulation by the Raphe and keep the network frequency constant and low. In contrast, application of a Gq agonist will result in a high frequency for all levels of Raphe stimulation. Our modeling results also suggest that the high [K+] requirement in respiratory brain slice experiments may serve as a compensatory mechanism for low neuromodulatory tone.

  19. Effect of inhibitory firing pattern on coherence resonance in random neural networks

    NASA Astrophysics Data System (ADS)

    Yu, Haitao; Zhang, Lianghao; Guo, Xinmeng; Wang, Jiang; Cao, Yibin; Liu, Jing

    2018-01-01

The effect of inhibitory firing patterns on coherence resonance (CR) in random neuronal networks is systematically studied. Spiking and bursting are the two main types of firing pattern considered in this work. Numerical results show that, irrespective of the inhibitory firing pattern, the regularity of the network is maximized by an optimal intensity of external noise, indicating the occurrence of coherence resonance. Moreover, the firing pattern of the inhibitory neurons indeed has a significant influence on coherence resonance, but the efficacy is determined by network properties. In networks with strong coupling strength but weak inhibition, bursting neurons largely increase the amplitude of the resonance, while in networks with strong inhibition they decrease the noise intensity at which coherence resonance is induced. Different temporal windows of inhibition induced by the different inhibitory neuron types may account for these observations. The network structure also plays a constructive role in the coherence resonance: there exists an optimal network topology that maximizes the regularity of the neural system.
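The "regularity" tracked in coherence resonance studies is commonly quantified by the coefficient of variation (CV) of interspike intervals, which reaches a minimum at the optimal noise intensity. A minimal single-neuron sketch with a noisy leaky integrate-and-fire model (all parameters illustrative, much simpler than the networks studied above):

```python
import math, random, statistics

def isi_cv(noise_sigma, v_th=1.0, v_reset=0.0, i_ext=0.95, tau=1.0,
           dt=0.002, t_max=200.0, seed=1):
    """Coefficient of variation of interspike intervals for a
    subthreshold leaky integrate-and-fire neuron driven by Gaussian
    white noise (Euler-Maruyama). Lower CV means more regular firing."""
    rng = random.Random(seed)
    v, t, last, isis = v_reset, 0.0, None, []
    sq = math.sqrt(dt)
    while t < t_max:
        v += dt * (-v + i_ext) / tau + noise_sigma * sq * rng.gauss(0, 1)
        t += dt
        if v >= v_th:                 # spike: record ISI and reset
            if last is not None:
                isis.append(t - last)
            last, v = t, v_reset
    return statistics.stdev(isis) / statistics.mean(isis)

# CV at a moderate noise level (drive is subthreshold, so all spikes
# here are noise-induced)
cv = isi_cv(0.3)
```

Sweeping `noise_sigma` and plotting the CV would trace out the resonance curve; the sketch computes a single point of it.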

  20. Identification of the connections in biologically inspired neural networks

    NASA Technical Reports Server (NTRS)

    Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.

    1990-01-01

We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically-inspired artificial neural networks. That is, given the network external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations if a solution exists. The method determines directly if there is a solution to a particular neural network problem. No training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This latter method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of the electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.
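The core idea, solving directly for connection strengths from known external inputs and observed activity with no training, can be illustrated on a linear rate network, where each weight row satisfies a small linear system. The network, inputs, and two-step trajectory below are invented for illustration and are far simpler than the paper's biologically inspired model:

```python
def identify_row(xs, us_i, i):
    """Recover row i of the weight matrix W of a linear rate network
    x[t+1] = W x[t] + u[t] from an observed trajectory xs and the known
    external input us_i to neuron i (exactly determined 2x2 case,
    solved by Cramer's rule)."""
    # Two equations: x_i[t+1] - u_i[t] = W_i0*x_0[t] + W_i1*x_1[t]
    a11, a12 = xs[0][0], xs[0][1]
    a21, a22 = xs[1][0], xs[1][1]
    b1 = xs[1][i] - us_i[0]
    b2 = xs[2][i] - us_i[1]
    det = a11 * a22 - a12 * a21
    return [(b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det]

# Simulate a known 2-neuron network for two steps, then recover row 0
W = [[0.5, -0.3], [0.2, 0.4]]
u = [[0.1, 0.0], [0.2, 0.1]]
xs = [[1.0, 0.5]]
for t in range(2):
    xs.append([sum(W[i][j] * xs[t][j] for j in range(2)) + u[t][i]
               for i in range(2)])
row0 = identify_row(xs, [u[0][0], u[1][0]], 0)
```

As in the paper, a vanishing determinant signals that no unique solution exists for the given observations, which is how the method "determines directly if there is a solution".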

  1. Granger causality network reconstruction of conductance-based integrate-and-fire neuronal systems.

    PubMed

    Zhou, Douglas; Xiao, Yanyang; Zhang, Yaoyu; Xu, Zhiqin; Cai, David

    2014-01-01

Reconstruction of anatomical connectivity from measured dynamical activities of coupled neurons is one of the fundamental issues in understanding the structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiment. The Granger causality (GC) analysis remains one of the major approaches to explore the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, can be mapped to the underlying anatomical connectivity in neuronal networks. We perform the GC analysis on conductance-based integrate-and-fire (I&F) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity amongst individual neurons or subnetworks can be successfully reconstructed by the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing the system and without prior knowledge of neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the raster of the system, i.e., the spike timings of the neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for the conductance-based I&F neuronal networks, and show that the GC is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of the GC analysis in other settings.
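Pairwise Granger causality at order 1 can be sketched as the log ratio of residual variances between a restricted autoregressive model (own past only) and a full model (own past plus the other series' past). The toy coupled system below is invented for illustration and is far simpler than the conductance-based networks analyzed in the paper:

```python
import math, random

def ols_resid_var(y, X):
    """Residual variance of OLS regression of y on the rows of X,
    via a small normal-equations solve (Gaussian elimination)."""
    k, n = len(X), len(y)
    A = [[sum(X[a][t] * X[b][t] for t in range(n)) for b in range(k)]
         for a in range(k)]
    b = [sum(X[a][t] * y[t] for t in range(n)) for a in range(k)]
    for i in range(k):                      # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    resid = [y[t] - sum(beta[a] * X[a][t] for a in range(k))
             for t in range(n)]
    return sum(r * r for r in resid) / n

def granger(x, y):
    """Order-1 Granger causality x -> y: log ratio of restricted to
    full residual variance when predicting y."""
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    return math.log(ols_resid_var(yt, [y1]) / ols_resid_var(yt, [y1, x1]))

# Toy system in which x drives y but not vice versa
rng = random.Random(0)
x, y = [rng.gauss(0, 1)], [rng.gauss(0, 1)]
for _ in range(2000):
    x_prev, y_prev = x[-1], y[-1]
    x.append(0.5 * x_prev + rng.gauss(0, 1))
    y.append(0.5 * y_prev + 0.8 * x_prev + rng.gauss(0, 1))
gc_xy = granger(x, y)
gc_yx = granger(y, x)
```

The asymmetry between `gc_xy` and `gc_yx` recovers the true direction of coupling, which is the mapping from GC connectivity to structural connectivity that the paper examines in a far more demanding spiking setting.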

  2. Granger Causality Network Reconstruction of Conductance-Based Integrate-and-Fire Neuronal Systems

    PubMed Central

    Zhou, Douglas; Xiao, Yanyang; Zhang, Yaoyu; Xu, Zhiqin; Cai, David

    2014-01-01

Reconstruction of anatomical connectivity from measured dynamical activities of coupled neurons is one of the fundamental issues in understanding the structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiment. The Granger causality (GC) analysis remains one of the major approaches to explore the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, can be mapped to the underlying anatomical connectivity in neuronal networks. We perform the GC analysis on conductance-based integrate-and-fire (IF) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity amongst individual neurons or subnetworks can be successfully reconstructed by the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing the system and without prior knowledge of neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the raster of the system, i.e., the spike timings of the neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for the conductance-based IF neuronal networks, and show that the GC is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of the GC analysis in other settings. PMID:24586285

  3. Optimal sparse approximation with integrate and fire neurons.

    PubMed

    Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher

    2014-08-01

Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g. in V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded spiking neural network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the non-spiking LCA, an analog dynamical system that converges on an ℓ1-norm sparse approximation exponentially fast. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that, when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.
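The non-spiking LCA referenced here, leaky integrator states competing through dictionary correlations and read out through a soft threshold, can be sketched directly. The tiny dictionary and signal below are invented for illustration (the paper uses 128 neurons and 8 × 8 image patches):

```python
def soft_threshold(u, lam):
    """Soft-thresholding nonlinearity associated with the l1 penalty."""
    if u > lam:
        return u - lam
    if u < -lam:
        return u + lam
    return 0.0

def lca(phi, s, lam=0.1, dt=0.01, steps=5000):
    """Non-spiking locally competitive algorithm: internal states u obey
    du/dt = b - u - (G - I)a with drive b = Phi^T s, Gram matrix
    G = Phi^T Phi, and code a = soft_threshold(u). phi is a list of
    unit-norm dictionary columns."""
    n, m = len(phi), len(s)
    b = [sum(c[k] * s[k] for k in range(m)) for c in phi]
    G = [[sum(phi[i][k] * phi[j][k] for k in range(m)) for j in range(n)]
         for i in range(n)]
    u = [0.0] * n
    for _ in range(steps):
        a = [soft_threshold(x, lam) for x in u]
        for i in range(n):
            inhib = sum(G[i][j] * a[j] for j in range(n) if j != i)
            u[i] += dt * (b[i] - u[i] - inhib)
    return [soft_threshold(x, lam) for x in u]

# 2-D signal, three unit dictionary elements; s aligns with phi[0]
phi = [[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]]
s = [1.0, 0.1]
a = lca(phi, s)
```

The converged code is sparse: the element best aligned with the signal dominates, and competition through the Gram matrix suppresses the redundant ones.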

  4. Population activity structure of excitatory and inhibitory neurons

    PubMed Central

    Doiron, Brent

    2017-01-01

Many studies use population analysis approaches, such as dimensionality reduction, to characterize the activity of large groups of neurons. To date, these methods have treated each neuron equally, without taking into account whether neurons are excitatory or inhibitory. We studied population activity structure as a function of neuron type by applying factor analysis to spontaneous activity from spiking networks with balanced excitation and inhibition. Throughout the study, we characterized population activity structure by measuring its dimensionality and the percentage of overall activity variance that is shared among neurons. First, by sampling only excitatory or only inhibitory neurons, we found that the activity structures of these two populations in balanced networks are measurably different. We also found that the population activity structure is dependent on the ratio of excitatory to inhibitory neurons sampled. Finally, we classified neurons from extracellular recordings in the primary visual cortex of anesthetized macaques as putative excitatory or inhibitory using waveform classification, and found similarities with the neuron type-specific population activity structure of a balanced network with excitatory clustering. These results imply that knowledge of neuron type is important, and allows for stronger statistical tests, when interpreting population activity structure. PMID:28817581

  5. Creation of defined single cell resolution neuronal circuits on microelectrode arrays

    NASA Astrophysics Data System (ADS)

    Pirlo, Russell Kirk

    2009-12-01

The way cell-cell organization of neuronal networks influences activity and facilitates function is not well understood. Microelectrode arrays (MEAs) and advancing cell patterning technologies have enabled access to and control of in vitro neuronal networks, spawning much new research in neuroscience and neuroengineering. We propose that small, simple networks of neurons with defined circuitry may serve as valuable research models where every connection can be analyzed, controlled and manipulated. Towards the goal of creating such neuronal networks we have applied microfabricated elastomeric membranes, surface modification and our unique laser cell patterning system to create defined neuronal circuits with single-cell precision on MEAs. Definition of synaptic connectivity was imposed by the 3D physical constraints of polydimethylsiloxane elastomeric membranes. The membranes had 20 μm clear-through holes and 2-3 μm deep channels which, when applied to the surface of the MEA, formed microwells to confine neurons to electrodes connected via shallow tunnels to direct neurite outgrowth. Tapering and turning of channels was used to influence neurite polarity. Biocompatibility of the membranes was increased by vacuum baking, oligomer extraction, and autoclaving. Membranes were bound to the MEA by oxygen plasma treatment and heated pressure. The MEA/membrane surface was treated with oxygen plasma, poly-D-lysine and laminin to improve neuron attachment, survival and neurite outgrowth. Prior to cell patterning, the outer edge of the culture area was seeded with 5×10^5 cells per cm and incubated for 2 days. Single embryonic day 7 chick forebrain neurons were then patterned into the microwells and onto the electrodes using our laser cell patterning system. Patterned neurons successfully attached to and were confined to the electrodes. Neurites extended through the interconnecting channels and connected with adjacent neurons.
These results demonstrate that neuronal circuits can be created with clearly defined circuitry and a one-to-one neuron-electrode ratio. The techniques and processes described here may be used in future research to create defined neuronal circuits to model in vivo circuits and study neuronal network processing.

  6. Coherent periodic activity in excitatory Erdös-Renyi neural networks: the role of network connectivity.

    PubMed

    Tattini, Lorenzo; Olmi, Simona; Torcini, Alessandro

    2012-06-01

In this article, we investigate the role of connectivity in promoting coherent activity in excitatory neural networks. In particular, we would like to understand whether the onset of collective oscillations can be related to a minimal average connectivity and how this critical connectivity depends on the number of neurons in the network. For these purposes, we consider an excitatory random network of leaky integrate-and-fire pulse-coupled neurons. The neurons are connected as in a directed Erdös-Renyi graph with average connectivity scaling as a power law with the number of neurons in the network. The scaling is controlled by a parameter γ, which allows one to pass from massively connected to sparse networks and therefore to modify the topology of the system. At a macroscopic level, we observe two distinct dynamical phases: an asynchronous state corresponding to a desynchronized dynamics of the neurons, and a regime of partial synchronization (PS) associated with a coherent periodic activity of the network. At low connectivity, the system is in an asynchronous state, while PS emerges above a certain critical average connectivity ⟨c⟩. For sufficiently large networks, ⟨c⟩ saturates to a constant value, suggesting that a minimal average connectivity is sufficient to observe coherent activity in systems of any size, irrespective of the kind of network considered: sparse or massively connected. However, this value depends on the nature of the synapses: reliable or unreliable. For unreliable synapses, the critical value required to observe the onset of macroscopic behaviors is noticeably smaller than for reliable synaptic transmission. Due to the disorder present in the system, for a finite number of neurons we have inhomogeneities in the neuronal behaviors, inducing a weak form of chaos, which vanishes in the thermodynamic limit.
In this limit, the disordered systems exhibit regular (non-chaotic) dynamics, and their properties correspond to those of a homogeneous fully connected network for any γ-value, apart from the peculiar exception of sparse networks, which remain intrinsically inhomogeneous at any system size.
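The network construction described here, a directed Erdös-Renyi graph whose average connectivity scales as a power law of the system size, can be sketched directly; the prefactor and γ below are illustrative assumptions:

```python
import random

def directed_er(n, gamma, a=0.3, seed=0):
    """Directed Erdos-Renyi adjacency list whose average out-degree
    scales as c = a * n**gamma: gamma near 1 gives a massively
    connected network, small gamma a sparse one. The prefactor a
    is an illustrative constant."""
    c = a * n ** gamma
    p = min(1.0, c / (n - 1))          # per-edge wiring probability
    rng = random.Random(seed)
    return [[j for j in range(n) if j != i and rng.random() < p]
            for i in range(n)]

# n = 200, gamma = 0.8 -> target mean degree a*n**gamma ~ 20.8
adj = directed_er(200, 0.8)
mean_k = sum(len(nb) for nb in adj) / len(adj)
```

Sweeping γ while measuring the network's collective activity is how the abstract's passage from sparse to massively connected topologies is realized.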

  7. Channel noise-induced temporal coherence transitions and synchronization transitions in adaptive neuronal networks with time delay

    NASA Astrophysics Data System (ADS)

    Gong, Yubing; Xie, Huijuan

    2017-09-01

Using spike-timing-dependent plasticity (STDP), we study the effect of channel noise on the temporal coherence and synchronization of adaptive scale-free Hodgkin-Huxley neuronal networks with time delay. It is found that the spiking regularity and spatial synchronization of the neurons intermittently increase and decrease as the channel noise intensity is varied, exhibiting transitions of temporal coherence and synchronization. Moreover, this phenomenon depends on the time delay, the STDP, and the network average degree. As the time delay increases, the phenomenon is weakened; however, there are optimal values of the STDP and the network average degree at which the phenomenon becomes strongest. These results show that channel noise can intermittently enhance the temporal coherence and synchronization of delayed adaptive neuronal networks. These findings provide new insight into the role of channel noise in information processing and transmission in neural systems.

  8. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Spiral Wave in Small-World Networks of Hodgkin-Huxley Neurons

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Yang, Li-Jian; Wu, Ying; Zhang, Cai-Rong

    2010-09-01

    The effects of small-world connectivity and noise on the formation and transition of spiral waves in networks of Hodgkin-Huxley neurons are investigated in detail. Some interesting results are found in our numerical studies. i) Quiescent neurons are activated to propagate electric signals to others through the generation and development of a spiral wave from a spiral seed in a small area. ii) A statistical factor is defined to describe the collective properties and the phase transition induced by the topology of the network and by noise. iii) A stable rotating spiral wave can be generated and remains robust when the rewiring probability is below a certain threshold; otherwise, a spiral wave cannot develop from the spiral seed, and an existing stable rotating spiral wave breaks up. iv) Gaussian white noise is introduced on the membrane of the neurons to study the noise-induced phase transition of spiral waves in small-world networks of neurons. It is confirmed that Gaussian white noise plays an active role in supporting and developing spiral waves in networks of neurons, and the appearance of a smaller synchronization factor indicates a higher likelihood of inducing a spiral wave.

  9. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates.

    PubMed

    Brunel, N; Hakim, V

    1999-10-01

    We study analytically the dynamics of a network of sparsely connected inhibitory integrate-and-fire neurons in a regime where individual neurons emit spikes irregularly and at a low rate. In the limit where the number of neurons N → ∞, the network exhibits a sharp transition between a stationary regime and an oscillatory global activity regime in which neurons are weakly synchronized. The activity becomes oscillatory when the inhibitory feedback is strong enough. The period of the global oscillation is found to be mainly controlled by synaptic times, but it also depends on the characteristics of the external input. In large but finite networks, the analysis shows that global oscillations of finite coherence time generically exist both above and below the critical inhibition threshold. Their characteristics are determined as functions of system parameters in these two different regions. The results are found to be in good agreement with numerical simulations.

  10. NEVESIM: event-driven neural simulation framework with a Python interface.

    PubMed

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.

  12. Beyond Critical Exponents in Neuronal Avalanches

    NASA Astrophysics Data System (ADS)

    Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin

    2011-03-01

    Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.
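
    A standard toy model for scale-independent avalanches of the kind described above is a critical branching (Galton-Watson) process, in which each firing neuron triggers on average one other. The Poisson offspring distribution and sample counts below are assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(sigma=1.0, cap=100_000):
    """Size of one avalanche in a Galton-Watson branching process in which
    each active neuron triggers Poisson(sigma) others on average.
    sigma = 1 is the critical point where sizes become scale-free.
    This is a generic toy model, not the model of this study."""
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(sigma * active)  # offspring of the whole generation
        size += active
    return size

sizes = [avalanche_size() for _ in range(2000)]
print(min(sizes), max(sizes))   # heavy tail: a few avalanches are huge
```

    At criticality the avalanche-size distribution follows the well-known P(S) ~ S^(-3/2) power law; moving sigma away from 1 (sub- or supercritical) destroys the scale independence, which is the kind of testable statistical prediction the abstract mentions.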

  13. Synaptic multistability and network synchronization induced by the neuron-glial interaction in the brain

    NASA Astrophysics Data System (ADS)

    Lazarevich, I. A.; Stasenko, S. V.; Kazantsev, V. B.

    2017-02-01

    The dynamics of a synaptic contact between neurons that forms a feedback loop through the interaction with glial cells of the brain surrounding the neurons is studied. It is shown that, depending on the character of the neuron-glial interaction, the dynamics of the signal transmission frequency in the synaptic contact can be bistable with two stable steady states or spiking with the regular generation of spikes with various amplitudes and durations. It is found that such a synaptic contact at the network level is responsible for the appearance of quasisynchronous network bursts.

  14. Black Holes as Brains: Neural Networks with Area Law Entropy

    NASA Astrophysics Data System (ADS)

    Dvali, Gia

    2018-04-01

    Motivated by the potential similarities between the underlying mechanisms of the enhanced memory storage capacity in black holes and in brain networks, we construct an artificial quantum neural network based on gravity-like synaptic connections and a symmetry structure that allows one to describe the network in terms of the geometry of a d-dimensional space. We show that the network possesses a critical state in which gapless neurons emerge that appear to inhabit a (d-1)-dimensional surface, with their number given by the surface area. In the excitations of these neurons, the network can store and retrieve an exponentially large number of patterns within an arbitrarily narrow energy gap. The corresponding micro-state entropy of the brain network exhibits an area law. The neural network can be described in terms of a quantum field, by identifying the different neurons with the different momentum modes of the field, and the synaptic connections among the neurons with the interactions among the corresponding momentum modes. Such a mapping allows one to attribute a well-defined sense of geometry to an intrinsically non-local system, such as the neural network, and, vice versa, to represent the quantum field model as a neural network.

  15. Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type.

    PubMed

    Zhou, Douglas; Sun, Yi; Rangan, Aaditya V; Cai, David

    2010-04-01

    We discuss how to characterize the long-time dynamics of non-smooth dynamical systems, such as integrate-and-fire (I&F)-like neuronal networks, using Lyapunov exponents, and we present a stable numerical method for the accurate evaluation of the spectrum of Lyapunov exponents for this large class of dynamics. These dynamics contain (i) jump conditions, as in the firing-reset dynamics, and (ii) degeneracy, such as in the refractory period, in which voltage-like variables of the network collapse to a single constant value. Using networks of linear I&F neurons, exponential I&F neurons, and I&F neurons with adaptive threshold, we illustrate our method and discuss the rich dynamics of these networks.

  16. Multi-level characterization of balanced inhibitory-excitatory cortical neuron network derived from human pluripotent stem cells.

    PubMed

    Nadadhur, Aishwarya G; Emperador Melero, Javier; Meijer, Marieke; Schut, Desiree; Jacobs, Gerbren; Li, Ka Wan; Hjorth, J J Johannes; Meredith, Rhiannon M; Toonen, Ruud F; Van Kesteren, Ronald E; Smit, August B; Verhage, Matthijs; Heine, Vivi M

    2017-01-01

    The generation of neuronal cultures from human induced pluripotent stem cells (hiPSCs) serves the study of human brain disorders. However, we lack neuronal networks with balanced excitatory-inhibitory activity that are suitable for single-cell analysis. We generated low-density networks of hiPSC-derived GABAergic and glutamatergic cortical neurons. We used two different co-culture models with astrocytes. We show that these cultures have balanced excitatory-inhibitory synaptic identities, using confocal microscopy, electrophysiological recordings, calcium imaging and mRNA analysis. These simple and robust protocols offer the opportunity for single-cell to multi-level analysis of patient hiPSC-derived cortical excitatory-inhibitory networks, thereby creating advanced tools to study the disease mechanisms underlying neurodevelopmental disorders.

  17. The influence of hubs in the structure of a neuronal network during an epileptic seizure

    NASA Astrophysics Data System (ADS)

    Rodrigues, Abner Cardoso; Cerdeira, Hilda A.; Machado, Birajara Soares

    2016-02-01

    In this work, we propose changes in the structure of a neuronal network with the intention of provoking strong synchronization to simulate episodes of epileptic seizure. Starting with a network of Izhikevich neurons, we slowly increase the number of connections in selected nodes in a controlled way, to produce (or not) hubs. We study how these structures alter synchronization of the spike firing intervals, for individual neurons as well as for mean values, as a function of the concentration of connections for random and non-random (hub) distributions. We also analyze how the post-ictal signal varies for the different distributions. We conclude that a network with hubs is more appropriate to represent an epileptic state.

  18. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  19. Modeling somatic and dendritic spike mediated plasticity at the single neuron and network level.

    PubMed

    Bono, Jacopo; Clopath, Claudia

    2017-09-26

    Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show how memory retention during associative learning can be prolonged in networks of neurons by including dendrites. Synaptic plasticity is the neuronal mechanism underlying learning. Here the authors construct biophysical models of pyramidal neurons that reproduce observed plasticity gradients along the dendrite and show that dendritic-spike-dependent LTP, which is predominant in distal sections, can prolong memory retention.

  20. The Influence of Neuronal Density and Maturation on Network Activity of Hippocampal Cell Cultures: A Methodological Study

    PubMed Central

    Menegon, Andrea; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2013-01-01

    It is known that cell density influences the maturation process of in vitro neuronal networks. Neuronal cultures plated at different cell densities differ in the number of synapses per neuron and thus in single-neuron synaptic transmission, which results in density-dependent neuronal network activity. Although many authors have provided detailed information about the effects of cell density on neuronal culture activity, a dedicated report on the influence of density and age on hippocampal neuronal culture activity has not yet been published. Therefore, this work aims at providing reference data to researchers setting up an experimental study on hippocampal neuronal cultures, helping in planning and decoding the experiments. In this work, we analysed the effects of both neuronal density and culture age on functional attributes of maturing hippocampal cultures. We characterized the electrophysiological activity of neuronal cultures seeded at three different cell densities, recording their spontaneous electrical activity over maturation by means of MicroElectrode Arrays (MEAs). We gathered data from 86 independent hippocampal cultures to achieve solid statistical results, considering the high culture-to-culture variability. Network activity was evaluated in terms of simple spiking, burst and network burst features. We observed that the electrical descriptors were characterized by a functional peak during maturation, followed by a stable phase (for sparse- and medium-density cultures) or by a decreasing phase (for high-density neuronal cultures). Moreover, 900 cells/mm2 cultures showed characteristics suitable for long-lasting experiments (e.g. chronic effects of drug treatments), while 1800 cells/mm2 cultures should be preferred for experiments that require intense electrical activity (e.g. to evaluate the effect of inhibitory molecules). Finally, cell cultures at 3600 cells/mm2 are more appropriate for experiments in which time saving is relevant (e.g. drug screenings).
These results are intended to be a reference for the planning of in vitro neurophysiological and neuropharmacological experiments with MEAs. PMID:24386305

  1. The influence of neuronal density and maturation on network activity of hippocampal cell cultures: a methodological study.

    PubMed

    Biffi, Emilia; Regalia, Giulia; Menegon, Andrea; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2013-01-01

    It is known that cell density influences the maturation process of in vitro neuronal networks. Neuronal cultures plated at different cell densities differ in the number of synapses per neuron and thus in single-neuron synaptic transmission, which results in density-dependent neuronal network activity. Although many authors have provided detailed information about the effects of cell density on neuronal culture activity, a dedicated report on the influence of density and age on hippocampal neuronal culture activity has not yet been published. Therefore, this work aims at providing reference data to researchers setting up an experimental study on hippocampal neuronal cultures, helping in planning and decoding the experiments. In this work, we analysed the effects of both neuronal density and culture age on functional attributes of maturing hippocampal cultures. We characterized the electrophysiological activity of neuronal cultures seeded at three different cell densities, recording their spontaneous electrical activity over maturation by means of MicroElectrode Arrays (MEAs). We gathered data from 86 independent hippocampal cultures to achieve solid statistical results, considering the high culture-to-culture variability. Network activity was evaluated in terms of simple spiking, burst and network burst features. We observed that the electrical descriptors were characterized by a functional peak during maturation, followed by a stable phase (for sparse- and medium-density cultures) or by a decreasing phase (for high-density neuronal cultures). Moreover, 900 cells/mm2 cultures showed characteristics suitable for long-lasting experiments (e.g. chronic effects of drug treatments), while 1800 cells/mm2 cultures should be preferred for experiments that require intense electrical activity (e.g. to evaluate the effect of inhibitory molecules). Finally, cell cultures at 3600 cells/mm2 are more appropriate for experiments in which time saving is relevant (e.g. drug screenings).
These results are intended to be a reference for the planning of in vitro neurophysiological and neuropharmacological experiments with MEAs.

  2. Analysis of neuronal cells of dissociated primary culture on high-density CMOS electrode array

    PubMed Central

    Matsuda, Eiko; Mita, Takeshi; Hubert, Julien; Bakkum, Douglas; Frey, Urs; Hierlemann, Andreas; Takahashi, Hirokazu; Ikegami, Takashi

    2017-01-01

    Spontaneous development of neuronal cells was recorded around 4–34 days in vitro (DIV) with a high-density CMOS array, which enables detailed study of the spatio-temporal activity of a neuronal culture. We used the CMOS array to characterize the evolution of the inter-spike interval (ISI) distribution from putative single neurons, and to estimate the network structure based on transfer entropy analysis, where each node corresponds to a single neuron. We observed that the ISI distributions gradually came to obey a power law as the network matured. The amount of information transferred between neurons increased at the early stage of development, but decreased as the network matured. These results suggest that both the ISI and transfer entropy are very useful for characterizing the dynamic development of cultured neural cells over a few weeks. PMID:24109870
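
    Transfer entropy between two binary spike trains, as used above for network estimation, can be computed with a simple plug-in (frequency-count) estimator. History length 1 and the synthetic data below are simplifying assumptions; the study's actual estimator and binning may differ.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Transfer entropy x -> y (in bits) for binary sequences, history
    length 1, estimated by plug-in frequencies: a minimal sketch, not
    the exact estimator used in the study."""
    trip = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pair = Counter(zip(y[1:], y[:-1]))           # (y_{t+1}, y_t)
    cond = Counter(zip(y[:-1], x[:-1]))          # (y_t, x_t)
    marg = Counter(y[:-1])                       # y_t
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in trip.items():
        p_joint = c / n
        p_y1_given_yx = c / cond[(y0, x0)]
        p_y1_given_y = pair[(y1, y0)] / marg[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)   # y copies x with a one-step lag: strong x -> y coupling
print(transfer_entropy(x.tolist(), y.tolist()))  # close to 1 bit
```

    For independent trains the same estimator returns a value near 0, so rising and falling TE over development (as reported above) can be read as changing effective coupling.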

  3. Characterization of emergent synaptic topologies in noisy neural networks

    NASA Astrophysics Data System (ADS)

    Miller, Aaron James

    Learned behaviors are one of the key contributors to an animal's ultimate survival. It is widely believed that the brain's microcircuitry undergoes structural changes when a new behavior is learned. In particular, motor learning, during which an animal learns a sequence of muscular movements, often requires precisely-timed coordination between muscles and becomes very natural once ingrained. Experiments show that neurons in the motor cortex exhibit precisely-timed spike activity when performing a learned motor behavior, and constituent stereotypical elements of the behavior can last several hundred milliseconds. This manuscript concerns how organized synaptic structures that produce stereotypical spike sequences emerge from random, dynamical networks. After a brief introduction in Chapter 1, we begin Chapter 2 by introducing a spike-timing-dependent plasticity (STDP) rule that defines how the activity of the network drives changes in network topology. The rule is then applied to idealized networks of leaky integrate-and-fire (LIF) neurons. These neurons are not subjected to the variability that typically characterizes neurons in vivo. In noiseless networks, synapses develop closed loops of strong connectivity that reproduce stereotypical, precisely-timed spike patterns from an initially random network. We demonstrate that the characteristics of the asymptotic synaptic configuration are dependent on the statistics of the initial random network. The spike timings of the neurons simulated in Chapter 2 are generated exactly by a computationally economical, nonlinear mapping, which is extended to LIF neurons injected with fluctuating current in Chapter 3. Development of an economical mapping that incorporates noise provides a practical solution to the long simulation times required to produce asymptotic synaptic topologies in networks with STDP in the presence of realistic neuronal variability. 
The mapping relies on generating numerical solutions to the dynamics of a LIF neuron subjected to Gaussian white noise (GWN). The system reduces to the Ornstein-Uhlenbeck first passage time problem, the solution of which we build into the mapping method of Chapter 2. We demonstrate that simulations using the stochastic mapping have reduced computation time compared to traditional Runge-Kutta methods by more than a factor of 150. In Chapter 4, we use the stochastic mapping to study the dynamics of emerging synaptic topologies in noisy networks. With the addition of membrane noise, networks with dynamical synapses can admit states in which the distribution of the synaptic weights is static under spontaneous activity, but the random connectivity between neurons is dynamical. The widely cited problem of instabilities in networks with STDP is avoided with the implementation of a synaptic decay and an activation threshold on each synapse. When such networks are presented with stimulus modeled by a focused excitatory current, chain-like networks can emerge with the addition of an axon-remodeling plasticity rule, a topological constraint on the connectivity modeling the finite resources available to each neuron. The emergent topologies are the result of an iterative stochastic process. The dynamics of the growth process suggest a strong interplay between the network topology and the spike sequences they produce during development. Namely, the existence of an embedded spike sequence alters the distribution of synaptic weights through the entire network. The roles of model parameters that affect the interplay between network structure and activity are elucidated. Finally, we propose two mathematical growth models, which are complementary, that capture the essence of the growth dynamics observed in simulations. In Chapter 5, we present an extension of the stochastic mapping that allows the possibility of neuronal cooperation. 
    We demonstrate that synaptic topologies admitting stereotypical sequences can emerge at yet higher, biologically realistic levels of membrane-potential variability when neurons cooperate to innervate shared targets. The structure that is most robust to this variability is the synfire chain. The principles of growth dynamics detailed in Chapter 4 are the same ones that sculpt the emergent synfire topologies. We conclude by discussing avenues for extending these results.
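
    The Ornstein-Uhlenbeck machinery mentioned above rests on the exact transition density of a LIF membrane driven by Gaussian white noise: V(t+h) = μ + (V−μ)e^(−h/τ) plus Gaussian noise of variance σ²τ(1−e^(−2h/τ))/2. The sketch below samples that density at step endpoints only; it does not solve the within-step first-passage problem, which is the refinement the dissertation builds in. All parameter values are illustrative assumptions.

```python
import math
import random

random.seed(4)

def lif_ou_step(v, h, mu=1.2, tau=20.0, sigma=0.3, v_th=1.0, v_reset=0.0):
    """One exact (distribution-preserving) update of a LIF membrane with
    Gaussian white noise, dV = (mu - V)/tau dt + sigma dW, using the
    Ornstein-Uhlenbeck transition density. Threshold crossings are only
    checked at step endpoints (intra-step crossings are missed), and the
    parameters are illustrative, not those of the dissertation."""
    decay = math.exp(-h / tau)
    sd = sigma * math.sqrt(tau * (1.0 - decay * decay) / 2.0)
    v = mu + (v - mu) * decay + sd * random.gauss(0.0, 1.0)
    if v >= v_th:        # spike: reset and report the event
        return v_reset, True
    return v, False

v, spikes = 0.0, 0
for _ in range(200_000):
    v, fired = lif_ou_step(v, h=0.5)
    spikes += fired
print(spikes)
```

    Because the transition density is exact, the step size h can be made large without bias in the subthreshold statistics, which is what makes such mappings so much faster than small-step Runge-Kutta integration.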

  4. Molecular codes for neuronal individuality and cell assembly in the brain

    PubMed Central

    Yagi, Takeshi

    2012-01-01

    The brain contains an enormous, but finite, number of neurons. The ability of this limited number of neurons to produce nearly limitless neural information over a lifetime is typically explained by combinatorial explosion; that is, by the exponential amplification of each neuron's contribution through its incorporation into “cell assemblies” and neural networks. In development, each neuron expresses diverse cellular recognition molecules that permit the formation of the appropriate neural cell assemblies to elicit various brain functions. The mechanism for generating neuronal assemblies and networks must involve molecular codes that give neurons individuality and allow them to recognize one another and join appropriate networks. The extensive molecular diversity of cell-surface proteins on neurons is likely to contribute to their individual identities. The clustered protocadherins (Pcdh) form a large subfamily within the diverse cadherin superfamily. The clustered Pcdh genes are encoded in tandem by three gene clusters, and are present in all known vertebrate genomes. The set of clustered Pcdh genes is expressed in a random and combinatorial manner in each neuron. In addition, cis-tetramers composed of heteromultimeric clustered Pcdh isoforms represent selective binding units for cell-cell interactions. Here I present the mathematical probabilities for neuronal individuality based on the random and combinatorial expression of clustered Pcdh isoforms and their formation of cis-tetramers in each neuron. Notably, clustered Pcdh gene products are known to play crucial roles in correct axonal projections, synaptic formation, and neuronal survival. Their molecular and biological features suggest the hypothesis that the diverse clustered Pcdh molecules provide the molecular code by which neuronal individuality and cell assembly permit the combinatorial explosion of networks that supports the enormous processing capability and plasticity of the brain. PMID:22518100
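
    One simple way to see the combinatorial argument: if cis-tetramers are modeled as unordered combinations (with repetition) of k expressed isoforms, the repertoire size is the number of multisets of size 4, C(k + 3, 4). Both the modeling choice and the isoform counts below are hypothetical, chosen only to illustrate the growth, and are not taken from this record.

```python
from math import comb

def tetramer_repertoire(k):
    """Number of distinct cis-tetramers assemblable from k isoforms,
    modeled as unordered combinations with repetition: C(k + 3, 4).
    Illustrative modeling assumption; isoform counts are hypothetical."""
    return comb(k + 3, 4)

for k in (5, 10, 15):
    print(k, tetramer_repertoire(k))   # 5 -> 70, 10 -> 715, 15 -> 3060
```

    The repertoire grows roughly as k^4/24 per neuron; multiplied across the random, per-neuron choice of expressed isoforms, this is one concrete source of the combinatorial explosion the abstract appeals to.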

  5. PTEN Loss Increases the Connectivity of Fast Synaptic Motifs and Functional Connectivity in a Developing Hippocampal Network.

    PubMed

    Barrows, Caitlynn M; McCabe, Matthew P; Chen, Hongmei; Swann, John W; Weston, Matthew C

    2017-09-06

    Changes in synaptic strength and connectivity are thought to be a major mechanism through which many gene variants cause neurological disease. Hyperactivation of the PI3K-mTOR signaling network, via loss of function of repressors such as PTEN, causes epilepsy in humans and animal models, and altered mTOR signaling may contribute to a broad range of neurological diseases. Changes in synaptic transmission have been reported in animal models of PTEN loss; however, the full extent of these changes, and their effect on network function, is still unknown. To better understand the scope of these changes, we recorded from pairs of mouse hippocampal neurons cultured in a two-neuron microcircuit configuration that allowed us to characterize all four major connection types within the hippocampus. Loss of PTEN caused changes in excitatory and inhibitory connectivity, and these changes were postsynaptic, presynaptic, and transynaptic, suggesting that disruption of PTEN has the potential to affect most connection types in the hippocampal circuit. Given the complexity of the changes at the synaptic level, we measured changes in network behavior after deleting Pten from neurons in an organotypic hippocampal slice network. Slices containing Pten-deleted neurons showed increased recruitment of neurons into network bursts. Importantly, these changes were not confined to Pten-deleted neurons, but involved the entire network, suggesting that the extensive changes in synaptic connectivity rewire the entire network in such a way that promotes a widespread increase in functional connectivity. SIGNIFICANCE STATEMENT Homozygous deletion of the Pten gene in neuronal subpopulations in the mouse serves as a valuable model of epilepsy caused by mTOR hyperactivation. To better understand how gene deletions lead to altered neuronal activity, we investigated the synaptic and network effects that occur 1 week after Pten deletion. 
PTEN loss increased the connectivity of all four types of hippocampal synaptic connections, including two forms of increased inhibition of inhibition, and increased network functional connectivity. These data suggest that single gene mutations that cause neurological diseases such as epilepsy may affect a surprising range of connection types. Moreover, given the robustness of homeostatic plasticity, these diverse effects on connection types may be necessary to cause network phenotypes such as increased synchrony.

  6. PTEN Loss Increases the Connectivity of Fast Synaptic Motifs and Functional Connectivity in a Developing Hippocampal Network

    PubMed Central

    McCabe, Matthew P.; Chen, Hongmei; Swann, John W.

    2017-01-01

    Changes in synaptic strength and connectivity are thought to be a major mechanism through which many gene variants cause neurological disease. Hyperactivation of the PI3K-mTOR signaling network, via loss of function of repressors such as PTEN, causes epilepsy in humans and animal models, and altered mTOR signaling may contribute to a broad range of neurological diseases. Changes in synaptic transmission have been reported in animal models of PTEN loss; however, the full extent of these changes, and their effect on network function, is still unknown. To better understand the scope of these changes, we recorded from pairs of mouse hippocampal neurons cultured in a two-neuron microcircuit configuration that allowed us to characterize all four major connection types within the hippocampus. Loss of PTEN caused changes in excitatory and inhibitory connectivity, and these changes were postsynaptic, presynaptic, and transynaptic, suggesting that disruption of PTEN has the potential to affect most connection types in the hippocampal circuit. Given the complexity of the changes at the synaptic level, we measured changes in network behavior after deleting Pten from neurons in an organotypic hippocampal slice network. Slices containing Pten-deleted neurons showed increased recruitment of neurons into network bursts. Importantly, these changes were not confined to Pten-deleted neurons, but involved the entire network, suggesting that the extensive changes in synaptic connectivity rewire the entire network in such a way that promotes a widespread increase in functional connectivity. SIGNIFICANCE STATEMENT Homozygous deletion of the Pten gene in neuronal subpopulations in the mouse serves as a valuable model of epilepsy caused by mTOR hyperactivation. To better understand how gene deletions lead to altered neuronal activity, we investigated the synaptic and network effects that occur 1 week after Pten deletion. 
PTEN loss increased the connectivity of all four types of hippocampal synaptic connections, including two forms of increased inhibition of inhibition, and increased network functional connectivity. These data suggest that single gene mutations that cause neurological diseases such as epilepsy may affect a surprising range of connection types. Moreover, given the robustness of homeostatic plasticity, these diverse effects on connection types may be necessary to cause network phenotypes such as increased synchrony. PMID:28751459

  7. [Functional organization and structure of the serotonergic neuronal network of terrestrial snail].

    PubMed

    Nikitin, E S; Balaban, P M

    2011-01-01

    Extending our knowledge of how the brain works requires continual improvement of methods for recording neuronal activity, and an increase in the number of neurons recorded simultaneously, so that the collective operation of neuronal networks and assemblies can be better understood. Conventional methods allow either simultaneous intracellular recording of up to 2-5 neurons, capturing their membrane potentials, currents, or monosynaptic connections, or observation of spiking in neuronal groups with subsequent discrimination of individual spikes, at the cost of losing the details of membrane potential dynamics. We recorded the activity of a compact group of serotonergic neurons (up to 56 simultaneously) in the ganglion of a terrestrial mollusk using optical recording of membrane potential, which allowed us to record individual action potentials in detail, including their parameters, and to reveal the morphology of the recorded neurons. We demonstrated clear clustering within the group with respect to action potential dynamics and the phasic or tonic components of neuronal responses to external electrophysiological and tactile stimuli. We also showed that the identified neuron Pd2 could induce activation of a significant number of neurons in the group, whereas neuron Pd4 did not induce any activation; however, its own activation was delayed relative to the activation of the responding group of neurons. Our data strongly support the concept that the network may delegate its integrative function to a single neuron.

  8. Self-sustained asynchronous irregular states and Up-Down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons.

    PubMed

    Destexhe, Alain

    2009-12-01

    Randomly-connected networks of integrate-and-fire (IF) neurons are known to display asynchronous irregular (AI) activity states, which resemble the discharge activity recorded in the cerebral cortex of awake animals. However, it is not clear whether such activity states are specific to simple IF models, or if they also exist in networks where neurons are endowed with complex intrinsic properties similar to those seen in electrophysiological measurements. Here, we investigate the occurrence of AI states in networks of nonlinear IF neurons, such as the adaptive exponential IF (Brette-Gerstner-Izhikevich) model. This model can display intrinsic firing properties such as low-threshold spiking (LTS), regular spiking (RS) or fast spiking (FS). We successively investigate the oscillatory and AI dynamics of thalamic, cortical and thalamocortical networks using such models. AI states can be found in each case, sometimes with surprisingly small network sizes, of the order of a few tens of neurons. We show that the presence of LTS neurons in cortex or in thalamus explains the robust emergence of AI states for relatively small network sizes. Finally, we investigate the role of spike-frequency adaptation (SFA). In cortical networks with strong SFA in RS cells, the AI state is transient, but when SFA is reduced, AI states can be self-sustained for long times. In thalamocortical networks, AI states are found when the cortex is itself in an AI state, but with strong SFA, the thalamocortical network displays Up and Down state transitions, similar to intracellular recordings during slow-wave sleep or anesthesia. Self-sustained Up and Down states could also be generated by two-layer cortical networks with LTS cells. These models suggest that intrinsic properties such as adaptation and low-threshold bursting activity are crucial for the genesis and control of AI states in thalamocortical networks.
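    Since the adaptive exponential IF model referenced here is fully specified by two differential equations, its single-cell dynamics are easy to sketch. Below is a minimal forward-Euler illustration using the standard published AdEx parameter set; the drive current and the spike-detection threshold are illustrative choices, not values from this study:

    ```python
    from math import exp

    def simulate_adex(I_ext=0.8e-9, T=0.5, dt=1e-4):
        """Forward-Euler simulation of an adaptive exponential IF neuron.

        dV/dt = (-gL*(V-EL) + gL*DT*exp((V-VT)/DT) - w + I) / C
        dw/dt = (a*(V-EL) - w) / tau_w
        On a spike (V crosses 0 V): V -> Vr, w -> w + b.
        """
        C, gL, EL = 281e-12, 30e-9, -70.6e-3        # capacitance, leak, rest
        VT, DT = -50.4e-3, 2e-3                     # threshold, slope factor
        a, tau_w, b, Vr = 4e-9, 144e-3, 0.0805e-9, -70.6e-3  # adaptation
        V, w = EL, 0.0
        spikes = []
        for step in range(int(T / dt)):
            dV = (-gL*(V-EL) + gL*DT*exp((V-VT)/DT) - w + I_ext) / C
            dw = (a*(V-EL) - w) / tau_w
            V += dt * dV
            w += dt * dw
            if V > 0.0:                             # spike: reset and adapt
                spikes.append(step * dt)
                V, w = Vr, w + b
        return spikes

    print(len(simulate_adex()))   # number of spikes in 0.5 s of drive
    ```

    Depending on the relative strength of the drive and the adaptation terms (a, b, tau_w), this same model produces the RS, FS or LTS-like firing patterns the abstract mentions.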

  9. Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

    NASA Astrophysics Data System (ADS)

    Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2016-04-01

    High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. 
Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.

  10. Collective behavior of networks with linear (VLSI) integrate-and-fire neurons.

    PubMed

    Fusi, S; Mattia, M

    1999-04-01

    We analyze in detail the statistical properties of the spike emission process of a canonical integrate-and-fire neuron, with a linear integrator and a lower bound for the depolarization, as often used in VLSI implementations (Mead, 1989). The spike statistics of such neurons appear to be qualitatively similar to those of conventional (exponential) integrate-and-fire neurons, which exhibit a wide variety of characteristics observed in cortical recordings. We also show that, contrary to current opinion, the dynamics of a network composed of such neurons has two stable fixed points, even in the purely excitatory network, corresponding to two different states of reverberating activity. The analytical results are compared with numerical simulations and are found to be in good agreement.
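    A minimal sketch of such a linear integrator, with a constant leak and a hard lower bound on the depolarization, is shown below; all constants are illustrative stand-ins, since the real parameters are analog circuit properties:

    ```python
    import random

    def simulate_linear_if(rate_in=800.0, w=0.5, beta=50.0, theta=20.0,
                           T=10.0, dt=1e-3, seed=1):
        """Linear (VLSI-style) IF neuron driven by a Poisson spike train.

        dV/dt = -beta + w * input spikes; V is clipped below at 0
        (the lower bound of the depolarization); spike and reset at theta.
        """
        rng = random.Random(seed)
        V, n_spikes = 0.0, 0
        for _ in range(int(T / dt)):
            n_in = 1 if rng.random() < rate_in * dt else 0  # Bernoulli approx.
            V += -beta * dt + w * n_in                      # constant leak
            V = max(V, 0.0)                                 # hard lower bound
            if V >= theta:
                n_spikes += 1
                V = 0.0                                     # reset after spike
        return n_spikes

    print(simulate_linear_if())   # spike count over 10 s
    ```

    The hard clipping at zero is what distinguishes this unit from the conventional leaky integrator, and it is the feature responsible for the distinct statistics the paper analyzes.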

  11. Arbitrary nonlinearity is sufficient to represent all functions by neural networks - A theorem

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.

    1991-01-01

    It is proved that if we have neurons implementing arbitrary linear functions and a neuron implementing one (arbitrary but smooth) nonlinear function g(x), then for every continuous function f(x_1, ..., x_m) of arbitrarily many variables, and for arbitrary epsilon > 0, we can construct a network that consists of g-neurons and linear neurons and computes f with precision epsilon.

  12. Two fast and accurate heuristic RBF learning rules for data classification.

    PubMed

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers, and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data; the termination condition is that all training data are covered or that the maximum number of neurons is reached. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons, and hence a lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of the hidden neurons, and in light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to some well-known datasets, and the simulation results, compared with SVM and other leading RBF learning methods, show satisfactory and comparable performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
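    The recursive, coverage-driven construction can be caricatured as follows. This is a deliberately simplified sketch, using a plain distance threshold for "coverage" and greedy center selection rather than the authors' power exponential activation and exact heuristics:

    ```python
    import numpy as np

    def build_rbf(X, y, spread=1.0, max_neurons=10):
        """Greedily add RBF centers until every training point is covered.

        A point counts as covered when some center of its own class lies
        within one spread of it. Each new center is the uncovered point
        whose neighbourhood covers the most still-uncovered same-class points.
        """
        centers, classes = [], []
        covered = np.zeros(len(X), dtype=bool)
        while not covered.all() and len(centers) < max_neurons:
            best_i, best_gain = None, -1
            for i in np.where(~covered)[0]:
                d = np.linalg.norm(X - X[i], axis=1)
                gain = np.sum((~covered) & (d < spread) & (y == y[i]))
                if gain > best_gain:
                    best_i, best_gain = i, gain
            d = np.linalg.norm(X - X[best_i], axis=1)
            covered |= (d < spread) & (y == y[best_i])
            centers.append(X[best_i])
            classes.append(y[best_i])
        return np.array(centers), np.array(classes)

    X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
    y = np.array([0, 0, 1, 1])
    centers, classes = build_rbf(X, y)
    print(len(centers))  # → 2: one center per well-separated cluster
    ```

    Picking centers by maximal coverage is what keeps the hidden layer small, which is the source of the lower VC dimension claimed in the abstract.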

  13. Structural and Functional Alterations in Neocortical Circuits after Mild Traumatic Brain Injury

    NASA Astrophysics Data System (ADS)

    Vascak, Michal

    National concern over traumatic brain injury (TBI) is growing rapidly. Recent focus is on mild TBI (mTBI), which is the most prevalent injury level in both civilian and military demographics. A preeminent sequela of mTBI is cognitive network disruption. Advanced neuroimaging of mTBI victims supports this premise, revealing alterations in activation and structure-function of excitatory and inhibitory neuronal systems, which are essential for network processing. However, clinical neuroimaging cannot resolve the cellular and molecular substrates underlying such changes. Therefore, to understand the full scope of mTBI-induced alterations it is necessary to study cortical networks on the microscopic level, where neurons form local networks that are the fundamental computational modules supporting cognition. Recently, in a well-controlled animal model of mTBI, we demonstrated isolated diffuse axonal injury (DAI) in the excitatory pyramidal neuron system, in concert with electrophysiological abnormalities in nearby intact (non-DAI) neurons. These findings were consistent with altered axon initial segment (AIS) intrinsic activity functionally associated with structural plasticity, and/or disturbances in extrinsic systems related to parvalbumin (PV)-expressing interneurons that form GABAergic synapses along the pyramidal neuron perisomatic/AIS domains. The AIS and perisomatic GABAergic synapses are domains critical for regulating neuronal activity and E-I balance. In this dissertation, we focus on the neocortical excitatory pyramidal neuron/inhibitory PV+ interneuron local network following mTBI. Our central hypothesis is that mTBI disrupts neuronal network structure and function, causing an imbalance of excitatory and inhibitory systems. 
To address this hypothesis we exploited transgenic and cre/lox mouse models of mTBI, employing approaches that couple state-of-the-art bioimaging with electrophysiology to determine the structural-functional alterations of excitatory and inhibitory systems in the neocortex.

  14. Dopamine Attenuates Ketamine-Induced Neuronal Apoptosis in the Developing Rat Retina Independent of Early Synchronized Spontaneous Network Activity.

    PubMed

    Dong, Jing; Gao, Lingqi; Han, Junde; Zhang, Junjie; Zheng, Jijian

    2017-07-01

    Deprivation of spontaneous rhythmic electrical activity in early development by anesthesia administration, among other interventions, induces neuronal apoptosis. However, it is unclear whether enhancement of neuronal electrical activity attenuates neuronal apoptosis in either normal development or after anesthesia exposure. The present study investigated the effects of dopamine, an enhancer of spontaneous rhythmic electrical activity, on ketamine-induced neuronal apoptosis in the developing rat retina. TUNEL and immunohistochemical assays indicated that ketamine time- and dose-dependently aggravated physiological and ketamine-induced apoptosis and inhibited early-synchronized spontaneous network activity. Dopamine administration reversed ketamine-induced neuronal apoptosis, but did not reverse the inhibitory effects of ketamine on early synchronized spontaneous network activity despite enhancing it in controls. Blockade of D1, D2, and A2A receptors and inhibition of cAMP/PKA signaling partially antagonized the protective effect of dopamine against ketamine-induced apoptosis. Together, these data indicate that dopamine attenuates ketamine-induced neuronal apoptosis in the developing rat retina by activating the D1, D2, and A2A receptors, and upregulating cAMP/PKA signaling, rather than through modulation of early synchronized spontaneous network activity.

  15. Comparisons of Neuronal and Excitatory Network Properties between the Rat Brainstem Nuclei that Participate in Vertical and Horizontal Gaze Holding

    PubMed Central

    Sugimura, Taketoshi; Yanagawa, Yuchio

    2017-01-01

    Gaze holding is primarily controlled by neural structures including the prepositus hypoglossi nucleus (PHN) for horizontal gaze and the interstitial nucleus of Cajal (INC) for vertical and torsional gaze. In contrast to the accumulating findings of the PHN, there is no report regarding the membrane properties of INC neurons or the local networks in the INC. In this study, to verify whether the neural structure of the INC is similar to that of the PHN, we investigated the neuronal and network properties of the INC using whole-cell recordings in rat brainstem slices. Three types of afterhyperpolarization (AHP) profiles and five firing patterns observed in PHN neurons were also observed in INC neurons. However, the overall distributions based on the AHP profile and the firing patterns of INC neurons were different from those of PHN neurons. The application of burst stimulation to a nearby site of a recorded INC neuron induced an increase in the frequency of spontaneous EPSCs. The duration of the increased EPSC frequency of INC neurons was not significantly different from that of PHN neurons. The percent of duration reduction induced by a Ca2+-permeable AMPA (CP-AMPA) receptor antagonist was significantly smaller in the INC than in the PHN. These findings suggest that local excitatory networks that activate sustained EPSC responses also exist in the INC, but their activation mechanisms including the contribution of CP-AMPA receptors differ between the INC and the PHN. PMID:28966973

  16. Multi-layer network utilizing rewarded spike time dependent plasticity to learn a foraging task

    PubMed Central

    2017-01-01

    Neural networks with a single plastic layer employing reward modulated spike time dependent plasticity (STDP) are capable of learning simple foraging tasks. Here we demonstrate advanced pattern discrimination and continuous learning in a network of spiking neurons with multiple plastic layers. The network utilized both reward modulated and non-reward modulated STDP and implemented multiple mechanisms for homeostatic regulation of synaptic efficacy, including heterosynaptic plasticity, gain control, output balancing, activity normalization of rewarded STDP and hard limits on synaptic strength. We found that addition of a hidden layer of neurons employing non-rewarded STDP created neurons that responded to the specific combinations of inputs and thus performed basic classification of the input patterns. When combined with a following layer of neurons implementing rewarded STDP, the network was able to learn, despite the absence of labeled training data, discrimination between rewarding patterns and the patterns designated as punishing. Synaptic noise allowed for trial-and-error learning that helped to identify the goal-oriented strategies which were effective in task solving. The study predicts a critical set of properties of the spiking neuronal network with STDP that was sufficient to solve a complex foraging task involving pattern classification and decision making. PMID:28961245

  17. Synaptic Plasticity Enables Adaptive Self-Tuning Critical Networks

    PubMed Central

    Stepp, Nigel; Plenz, Dietmar; Srinivasa, Narayan

    2015-01-01

    During rest, the mammalian cortex displays spontaneous neural activity. Spiking of single neurons during rest has been described as irregular and asynchronous. In contrast, recent in vivo and in vitro population measures of spontaneous activity, using the LFP, EEG, MEG or fMRI suggest that the default state of the cortex is critical, manifested by spontaneous, scale-invariant, cascades of activity known as neuronal avalanches. Criticality keeps a network poised for optimal information processing, but this view seems to be difficult to reconcile with apparently irregular single neuron spiking. Here, we simulate a 10,000 neuron, deterministic, plastic network of spiking neurons. We show that a combination of short- and long-term synaptic plasticity enables these networks to exhibit criticality in the face of intrinsic, i.e. self-sustained, asynchronous spiking. Brief external perturbations lead to adaptive, long-term modification of intrinsic network connectivity through long-term excitatory plasticity, whereas long-term inhibitory plasticity enables rapid self-tuning of the network back to a critical state. The critical state is characterized by a branching parameter oscillating around unity, a critical exponent close to -3/2 and a long tail distribution of a self-similarity parameter between 0.5 and 1. PMID:25590427
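    The branching parameter cited here is conventionally estimated as the average ratio of activity in consecutive time bins of a cascade; a value near unity marks the critical state. A minimal sketch on toy binned data:

    ```python
    def branching_parameter(binned_counts):
        """Estimate the branching parameter sigma from binned network activity.

        sigma = mean over bins of n(t+1)/n(t), taken only where n(t) > 0.
        sigma ~ 1 indicates a critical, marginally self-sustaining cascade;
        sigma < 1 a dying one, sigma > 1 a runaway one.
        """
        ratios = [b / a for a, b in zip(binned_counts, binned_counts[1:]) if a > 0]
        return sum(ratios) / len(ratios)

    # Toy avalanche: activity roughly sustains itself from bin to bin.
    counts = [4, 5, 4, 4, 3, 4, 5, 5, 4]
    print(round(branching_parameter(counts), 3))  # → 1.023
    ```

    In avalanche analyses this estimate is typically combined with a fit of the avalanche size distribution, whose critical exponent near -3/2 is the second signature mentioned in the abstract.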

  18. Extracting neuronal functional network dynamics via adaptive Granger causality analysis.

    PubMed

    Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash

    2018-04-24

    Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
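    At its core, Granger causality asks whether including a source signal's past reduces the prediction error for a target signal. The least-squares sketch below illustrates that idea for two signals with an order-1 regression; the paper's contribution is an adaptive, sparse, point-process generalization of it, not this static estimator:

    ```python
    import numpy as np

    def granger_1lag(x, y):
        """Does x Granger-cause y? Compare residual variance of
        y(t) ~ y(t-1)            (restricted model) vs.
        y(t) ~ y(t-1), x(t-1)    (full model).
        Returns the log variance ratio; clearly > 0 suggests causality.
        """
        Y = y[1:]
        A_r = np.column_stack([y[:-1], np.ones(len(Y))])
        A_f = np.column_stack([y[:-1], x[:-1], np.ones(len(Y))])
        res_r = Y - A_r @ np.linalg.lstsq(A_r, Y, rcond=None)[0]
        res_f = Y - A_f @ np.linalg.lstsq(A_f, Y, rcond=None)[0]
        return np.log(res_r.var() / res_f.var())

    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    y = np.zeros(500)
    for t in range(1, 500):            # y is driven by x with a one-step delay
        y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

    print(granger_1lag(x, y) > granger_1lag(y, x))  # → True
    ```

    Replacing the batch least-squares fit with an adaptively updated, sparsity-regularized point-process likelihood is what lets the authors' method track rapid task-dependent network dynamics.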

  19. Emergent spatial synaptic structure from diffusive plasticity.

    PubMed

    Sweeney, Yann; Clopath, Claudia

    2017-04-01

    Some neurotransmitters can diffuse freely across cell membranes, influencing neighbouring neurons regardless of their synaptic coupling. This provides a means of neural communication, alternative to synaptic transmission, which can influence the way in which neural networks process information. Here, we ask whether diffusive neurotransmission can also influence the structure of synaptic connectivity in a network undergoing plasticity. We propose a form of Hebbian synaptic plasticity which is mediated by a diffusive neurotransmitter. Whenever a synapse is modified at an individual neuron through our proposed mechanism, similar but smaller modifications occur in synapses connecting to neighbouring neurons. The effects of this diffusive plasticity are explored in networks of rate-based neurons. This leads to the emergence of spatial structure in the synaptic connectivity of the network. We show that this spatial structure can coexist with other forms of structure in the synaptic connectivity, such as with groups of strongly interconnected neurons that form in response to correlated external drive. Finally, we explore diffusive plasticity in a simple feedforward network model of receptive field development. We show that, as widely observed across sensory cortex, the preferred stimulus identity of neurons in our network become spatially correlated due to diffusion. Our proposed mechanism of diffusive plasticity provides an efficient mechanism for generating these spatial correlations in stimulus preference which can flexibly interact with other forms of synaptic organisation. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
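    The proposed mechanism, a Hebbian update that partially spills over to synapses of neighbouring neurons, can be sketched as follows; the 1D geometry, Gaussian diffusion kernel and constants are illustrative assumptions, not the paper's exact model:

    ```python
    import numpy as np

    def diffusive_hebb(W, pre, post, positions, eta=0.01, sigma=1.0):
        """Hebbian update spatially smeared across postsynaptic neurons.

        The direct update for neuron j is eta * post[j] * pre; every neuron k
        additionally receives that update scaled by a Gaussian of its distance
        to j, mimicking a freely diffusing neurotransmitter.
        """
        d = np.abs(positions[:, None] - positions[None, :])   # pairwise distances
        kernel = np.exp(-d**2 / (2 * sigma**2))               # diffusion profile
        direct = eta * np.outer(post, pre)                    # plain Hebbian term
        return W + kernel @ direct                            # smeared update

    pos = np.arange(5, dtype=float)              # 5 postsynaptic cells on a line
    W = np.zeros((5, 3))
    pre = np.array([1.0, 0.0, 0.0])
    post = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # only neuron 2 is active
    W = diffusive_hebb(W, pre, post, pos)
    print(W[:, 0])   # update peaks at neuron 2 and decays with distance
    ```

    Because neighbours of an active neuron receive smaller copies of its weight changes, repeated updates of this kind spatially correlate synaptic structure, which is the emergent effect the paper studies.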

  20. Control strategies of 3-cell Central Pattern Generator via global stimuli

    NASA Astrophysics Data System (ADS)

    Lozano, Álvaro; Rodríguez, Marcos; Barrio, Roberto

    2016-03-01

    The study of the synchronization patterns of small neuron networks that control several biological processes has become an interesting and growing discipline. Some of these synchronization patterns of individual neurons are related to undesirable neurological diseases and are believed to play a crucial role in the emergence of pathological rhythmic brain activity in diseases such as Parkinson's disease. We show how, with a suitable combination of short and weak global inhibitory and excitatory stimuli over the whole network, we can switch between different stable bursting patterns in small neuron networks (in our case, a 3-neuron network). We develop a systematic study showing and explaining the effects of applying the pulses at different moments, and we compare the technique on a completely symmetric network and on a slightly perturbed one (a much more realistic situation). The present approach of using global stimuli may make it possible to avoid undesirable synchronization patterns with nonaggressive stimuli.

  1. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  2. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    PubMed

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
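    The flavour of such an algebra can be conveyed with a toy in which connection sets are predicates over (source, target) index pairs and operators combine them. This sketch is purely illustrative and is not Djurfeldt's actual CSA API:

    ```python
    import random

    class ConnSet:
        """A connection set as a predicate over (source, target) index pairs."""
        def __init__(self, pred):
            self.pred = pred
        def __add__(self, other):   # union of two connection sets
            return ConnSet(lambda i, j: self.pred(i, j) or other.pred(i, j))
        def __mul__(self, other):   # intersection
            return ConnSet(lambda i, j: self.pred(i, j) and other.pred(i, j))
        def __sub__(self, other):   # difference
            return ConnSet(lambda i, j: self.pred(i, j) and not other.pred(i, j))
        def pairs(self, n_src, n_tgt):
            """Materialize the set for concrete population sizes."""
            return [(i, j) for i in range(n_src) for j in range(n_tgt)
                    if self.pred(i, j)]

    full       = ConnSet(lambda i, j: True)
    one_to_one = ConnSet(lambda i, j: i == j)
    # Deterministic per-pair "random" mask with ~30% connection probability.
    sparse     = ConnSet(lambda i, j: random.Random(i * 1000 + j).random() < 0.3)

    # "All-to-all, minus self-connections, restricted to the random mask."
    net = (full - one_to_one) * sparse
    print(len(net.pairs(4, 4)))
    ```

    Because a set is just a predicate, connectivity never has to be materialized until `pairs` is called, which mirrors the paper's point that CSA can avoid explicit representation of connections in memory.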

  3. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  4. Self-organized criticality occurs in non-conservative neuronal networks during `up' states

    NASA Astrophysics Data System (ADS)

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-10-01

    During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states-such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex-is self-organized criticality (SOC). SOC is characterized by activity `avalanches' with a branching parameter near unity and size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic `leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states and that up states are critical and down states are subcritical.

  5. Chimera-like states in a neuronal network model of the cat brain

    NASA Astrophysics Data System (ADS)

    Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.

    2017-08-01

    Neuronal systems have been modeled by complex networks at different levels of description. Recently, it has been verified that networks can simultaneously exhibit one coherent and one incoherent domain, a phenomenon known as a chimera state. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model, a well-known model of neuronal activity that has been used to simulate the membrane potential of neurons. Here, we analyse under which conditions chimera states are present, as well as the effects the coupling intensity has on them. We observe chimera states in which the incoherent domain can be composed of desynchronised spikes or desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.
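    The Hindmarsh-Rose equations are simple enough to integrate directly. A single-neuron forward-Euler sketch with a standard bursting parameter set is shown below; the coupling scheme and the cat-cortex connectivity matrix used in the paper are omitted:

    ```python
    import numpy as np

    def hindmarsh_rose(T=2000.0, dt=0.01, I=3.25):
        """Integrate the 3-variable Hindmarsh-Rose model with forward Euler.

        x: membrane potential, y: fast recovery, z: slow adaptation.
        a=1, b=3, c=1, d=5, s=4, r=0.006, x_rest=-1.6 with I=3.25
        is a commonly used bursting regime.
        """
        a, b, c, d, s, r, x0 = 1.0, 3.0, 1.0, 5.0, 4.0, 0.006, -1.6
        x, y, z = -1.0, 0.0, 0.0
        xs = []
        for _ in range(int(T / dt)):
            dx = y - a * x**3 + b * x**2 - z + I
            dy = c - d * x**2 - y
            dz = r * (s * (x - x0) - z)
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            xs.append(x)
        return np.array(xs)

    trace = hindmarsh_rose()
    # Spikes ride well above the quiescent level between bursts.
    print(round(float(trace.min()), 2), round(float(trace.max()), 2))
    ```

    In the network setting of the paper, each node follows these equations with an added coupling term weighted by the cortico-cortical connectivity matrix; chimera states then appear as coexisting synchronized and desynchronized groups of such units.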

  6. Engineered 3D vascular and neuronal networks in a microfluidic platform.

    PubMed

    Osaki, Tatsuya; Sivathanu, Vivek; Kamm, Roger D

    2018-03-26

    Neurovascular coupling plays a key role in the pathogenesis of neurodegenerative disorders including motor neuron disease (MND). In vitro models provide an opportunity to understand the pathogenesis of MND, and offer the potential for drug screening. Here, we describe a new 3D microvascular and neuronal network model in a microfluidic platform to investigate interactions between these two systems. Both 3D networks were established by co-culturing human embryonic stem (ES)-derived MN spheroids and endothelial cells (ECs) in microfluidic devices. Co-culture with ECs improves neurite elongation and neuronal connectivity as measured by Ca2+ oscillation. This improvement was regulated not only by paracrine signals such as brain-derived neurotrophic factor secreted by ECs but also through direct cell-cell interactions via the delta-notch pathway, promoting neuron differentiation and neuroprotection. Bi-directional signaling was observed in that the neural networks also affected vascular network formation under perfusion culture. This in vitro model could enable investigations of neuro-vascular coupling, essential to understanding the pathogenesis of neurodegenerative diseases including MNDs such as amyotrophic lateral sclerosis.

  7. Mechano-sensitization of mammalian neuronal networks through expression of the bacterial large-conductance mechanosensitive ion channel

    PubMed Central

    Contestabile, Andrea; Moroni, Monica; Hallinan, Grace I.; Palazzolo, Gemma; Chad, John; Deinhardt, Katrin; Carugo, Dario

    2018-01-01

    ABSTRACT Development of remote stimulation techniques for neuronal tissues represents a challenging goal. Among the potential methods, mechanical stimuli are the most promising vectors to convey information non-invasively into intact brain tissue. In this context, selective mechano-sensitization of neuronal circuits would pave the way to develop a new cell-type-specific stimulation approach. We report here, for the first time, the development and characterization of mechano-sensitized neuronal networks through the heterologous expression of an engineered bacterial large-conductance mechanosensitive ion channel (MscL). The neuronal functional expression of the MscL was validated through patch-clamp recordings upon application of calibrated suction pressures. Moreover, we verified the effective development of in-vitro neuronal networks expressing the engineered MscL in terms of cell survival, number of synaptic puncta and spontaneous network activity. The pure mechanosensitivity of the engineered MscL, with its wide genetic modification library, may represent a versatile tool to further develop a mechano-genetic approach. This article has an associated First Person interview with the first author of the paper. PMID:29361543

  8. Computational Modeling of Single Neuron Extracellular Electric Potentials and Network Local Field Potentials using LFPsim.

    PubMed

    Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam

    2016-01-01

    Local Field Potentials (LFPs) are population signals generated by the complex spatiotemporal interaction of current sources and dipoles. Mathematical computation of LFPs allows the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single-neuron extracellular potentials. LFPsim was developed to be used on existing compartmental cable models of neurons and networks. Point-source, line-source, and RC-based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a and N2b waves, and in vivo T-C waves in the cerebellum granular layer. LFPsim also includes a simulation of a multi-electrode array of LFPs in network populations, to aid computational inference between biophysical activity in neural networks and the corresponding multi-unit activity and resulting extracellular and evoked LFP signals.
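    The point-source approximation mentioned above has a compact closed form, V_e = I / (4πσr), where σ is the extracellular conductivity. The sketch below (Python; not part of LFPsim itself, with illustrative parameter values and a hypothetical electrode layout) shows how extracellular potential can be estimated from compartmental membrane currents under this approximation:

```python
import math

def point_source_potential(i_m, r, sigma=0.3):
    """Extracellular potential of a point current source.

    V_e = I / (4*pi*sigma*r).  With i_m in nA, r in micrometers, and
    sigma (extracellular conductivity) in S/m, the result is in mV.
    """
    return i_m / (4.0 * math.pi * sigma * r)

def lfp_at_electrode(currents, positions, electrode, sigma=0.3):
    """Sum point-source contributions of many compartments.

    currents  : membrane current of each compartment (nA)
    positions : (x, y, z) of each compartment (um)
    electrode : (x, y, z) of the recording site (um)
    """
    v = 0.0
    for i_m, p in zip(currents, positions):
        r = math.dist(p, electrode)
        # clip very small r to avoid the 1/r singularity at the source
        v += point_source_potential(i_m, max(r, 1.0), sigma)
    return v
```

    The line-source and RC-filter variants cited in the abstract refine this estimate for extended compartments and frequency-dependent media; the 1/r decay is the part they all share.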

  9. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    PubMed

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large-scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or clusters of neurons, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and in contrast with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measures enable us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neuronal ensemble into a complex network, and drives the formulation of a phenomenological model that nevertheless qualitatively describes the overall scenario observed during culture growth. © 2014 International Society for Advancement of Cytometry.
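    The matrix representation described above, with neurons or clusters as nodes and reconstructed connections as links, can be illustrated with a minimal sketch (Python; a generic union-find grouping, not the authors' algorithm): once segmentation yields node-to-node links, grouping nodes into connected clusters runs in near-linear time, consistent with the low computational cost claimed in the abstract.

```python
def cluster_labels(n_nodes, edges):
    """Group nodes into connected clusters with union-find.

    n_nodes : number of detected neurons / somata
    edges   : iterable of (i, j) reconstructed neurite connections
    Returns a list mapping each node index to a cluster label.
    """
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:            # path halving keeps trees flat
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in edges:
        parent[find(i)] = find(j)        # merge the two clusters

    return [find(x) for x in range(n_nodes)]
```

    With near-linear union-find, the dominant cost stays in the image segmentation itself rather than in the graph bookkeeping.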

  10. The many faces of REST oversee epigenetic programming of neuronal genes.

    PubMed

    Ballas, Nurit; Mandel, Gail

    2005-10-01

    Nervous system development relies on a complex signaling network to engineer the orderly transitions that lead to the acquisition of a neural cell fate. Progression from the non-neuronal pluripotent stem cell to a restricted neural lineage is characterized by distinct patterns of gene expression, particularly the restriction of neuronal gene expression to neurons. Concurrently, cells outside the nervous system acquire and maintain a non-neuronal fate that permanently excludes expression of neuronal genes. Studies of the transcriptional repressor REST, which regulates a large network of neuronal genes, provide a paradigm for elucidating the link between epigenetic mechanisms and neurogenesis. REST orchestrates a set of epigenetic modifications that are distinct between non-neuronal cells that give rise to neurons and those that are destined to remain as nervous system outsiders.

  11. Neuronal replacement therapy: previous achievements and challenges ahead

    NASA Astrophysics Data System (ADS)

    Grade, Sofia; Götz, Magdalena

    2017-10-01

    Lifelong neurogenesis and the incorporation of newborn neurons into mature neuronal circuits operate in specialized niches of the mammalian brain and serve as a role model for neuronal replacement strategies. However, to what extent can the remaining brain parenchyma, which never incorporates new neurons during adulthood, be as plastic and readily accommodate neurons in networks that have suffered neuronal loss due to injury or neurological disease? Which microenvironment is permissive for neuronal replacement and synaptic integration, and which cells perform best? Can lost function be restored, and how adequate is the participation in the pre-existing circuitry? Could aberrant connections cause malfunction, especially in networks dominated by excitatory neurons, such as the cerebral cortex? These questions show how important connectivity and circuitry aspects are for regenerative medicine, which is the focus of this review. We will discuss the impressive advances in neuronal replacement strategies and the successes achieved with exogenous as well as endogenous cell sources. Both have seen key novel technologies, like the groundbreaking discovery of induced pluripotent stem cells and direct neuronal reprogramming, offering alternatives to the transplantation of fetal neurons, and both herald great expectations. For these to become reality, neuronal circuitry analysis is now key. As our understanding of neuronal circuits increases, neuronal replacement therapy should fulfill these prerequisites in network structure and function, and in brain-wide input and output. Now is the time to incorporate neural circuitry research into regenerative medicine if we ever want to truly repair brain injury.

  12. [Extinction and Reconsolidation of Memory].

    PubMed

    Zuzina, A B; Balaban, P M

    2015-01-01

    Retrieval of memory followed by reconsolidation can strengthen a memory, while retrieval followed by extinction decreases memory performance, either by weakening the existing memory or by forming a competing memory. In our study we analyzed the behavior and the responses of identified neurons in the network underlying aversive learning in the terrestrial snail Helix, and attempted to describe the conditions under which retrieval of memory leads either to extinction or to reconsolidation. In the network underlying withdrawal behavior, the sensory neurons, premotor interneurons, motor neurons, and the serotonergic neurons that modulate this network have been identified, and recordings from representatives of these groups were made before and after aversive learning. In the network underlying feeding behavior, the premotor modulatory serotonergic interneurons and the motor neurons involved in the feeding motor program have been identified. Analysis of changes in neural activity after aversive learning showed that the modulatory neurons of feeding behavior do not demonstrate any changes (sometimes a decrease of responses to food was observed), whereas the responses to food of the withdrawal-behavior premotor interneurons changed qualitatively, from subthreshold EPSPs to spike discharges. Using the neurotoxin 5,7-DiHT, which is specific for serotonergic neurons, it was previously shown that the serotonergic system is necessary for aversive learning, but not for the maintenance and retrieval of this memory. These results suggest that the serotonergic neurons that are necessary, as part of the reinforcement, for developing associative changes in the network may not be necessary for memory retrieval. 
The hypothesis presented in this review concerns the activity of these "reinforcement" serotonergic neurons, which is suggested to gate the choice between extinction and reconsolidation triggered by memory retrieval: if these serotonergic neurons do not respond during retrieval, due to adaptation, habituation, changes in the environment, etc., extinction is observed; if they do respond to the CS during memory retrieval, reconsolidation is observed.

  13. Hox repertoires for motor neuron diversity and connectivity gated by a single accessory factor, FoxP1.

    PubMed

    Dasen, Jeremy S; De Camilli, Alessandro; Wang, Bin; Tucker, Philip W; Jessell, Thomas M

    2008-07-25

    The precision with which motor neurons innervate target muscles depends on a regulatory network of Hox transcription factors that translates neuronal identity into patterns of connectivity. We show that a single transcription factor, FoxP1, coordinates motor neuron subtype identity and connectivity through its activity as a Hox accessory factor. FoxP1 is expressed in Hox-sensitive motor columns and acts as a dose-dependent determinant of columnar fate. Inactivation of Foxp1 abolishes the output of the motor neuron Hox network, reverting the spinal motor system to an ancestral state. The loss of FoxP1 also changes the pattern of motor neuron connectivity, and in the limb motor axons appear to select their trajectories and muscle targets at random. Our findings show that FoxP1 is a crucial determinant of motor neuron diversification and connectivity, and clarify how this Hox regulatory network controls the formation of a topographic neural map.

  14. Simultaneous profiling of activity patterns in multiple neuronal subclasses.

    PubMed

    Parrish, R Ryley; Grady, John; Codadu, Neela K; Trevelyan, Andrew J; Racca, Claudia

    2018-06-01

    Neuronal networks typically comprise heterogeneous populations of neurons. A core objective when seeking to understand such networks, therefore, is to identify what roles these different neuronal classes play. Acquiring single-cell electrophysiology data for multiple cell classes can prove to be a large and daunting task. Alternatively, Ca2+ network imaging provides activity profiles of large numbers of neurons simultaneously, but without distinguishing between cell classes. We therefore developed a strategy for combining cellular electrophysiology, Ca2+ network imaging, and immunohistochemistry to provide activity profiles for multiple cell classes at once. This involves cross-referencing easily identifiable landmarks between imaging of the live and fixed tissue, and then using custom MATLAB functions to realign the two imaging data sets, to correct for distortions of the tissue introduced by the fixation or immunohistochemical processing. We illustrate the methodology for analyses of activity profiles during epileptiform events recorded in mouse brain slices. We further demonstrate the activity profile of a population of parvalbumin-positive interneurons prior to, during, and following a seizure-like event. Current approaches to Ca2+ network imaging analyses are severely limited in their ability to subclassify neurons, and often rely on transgenic approaches to identify cell classes. In contrast, our methodology is a generic, affordable, and flexible technique to characterize neuronal behaviour with respect to classification based on morphological and neurochemical identity. We present a new approach for analysing Ca2+ network imaging datasets, and use it to explore parvalbumin-positive interneuron activity during epileptiform events. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
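    For contrast with the non-reversible chains proposed above, the conventional Gibbs sampler that the authors argue is inconsistent with spiking dynamics can be sketched in a few lines (Python; binary states z_i ∈ {0,1} over a Boltzmann distribution p(z) ∝ exp(b·z + ½ zᵀWz), with all parameter values illustrative):

```python
import math
import random

def gibbs_boltzmann(W, b, sweeps=5000, seed=1):
    """Gibbs sampler for the Boltzmann distribution
    p(z) proportional to exp(b.z + 0.5 * z^T W z), z in {0,1}^n.
    Returns the empirical mean of each unit (estimated marginals).
    """
    rng = random.Random(seed)
    n = len(b)
    z = [0] * n
    counts = [0] * n
    for _ in range(sweeps):
        for i in range(n):
            # conditional p(z_i = 1 | rest) is a logistic of the local field
            field = b[i] + sum(W[i][j] * z[j] for j in range(n) if j != i)
            z[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-field)) else 0
        for i in range(n):
            counts[i] += z[i]
    return [c / sweeps for c in counts]
```

    Each update here resamples one unit instantaneously given the others, which is exactly the reversible, memoryless step that spiking neurons, with their refractory periods and PSP time courses, cannot implement; the paper's non-reversible construction absorbs those temporal processes into the chain itself.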

  16. Oscillations contribute to memory consolidation by changing criticality and stability in the brain

    NASA Astrophysics Data System (ADS)

    Wu, Jiaxing; Skilling, Quinton; Ognjanovski, Nicolette; Aton, Sara; Zochowski, Michal

    Oscillations are a universal feature of brain dynamics at every level and have been shown to contribute to many brain functions. To investigate the fundamental mechanism underpinning oscillatory activity, the properties of heterogeneous networks are compared with and without oscillations. Our results show that both network criticality and stability change in the presence of oscillations. Criticality describes the network's proximity to the state of neuronal avalanches, cascades of bursts of action potential firing in a neural network. Stability measures how stable the spike-timing relationship between neuron pairs is over time. Using a detailed spiking model, we found that the branching parameter σ changes with oscillation and structural network properties, corresponding to transitions among different critical states. Analysis of functional network structures also shows that oscillations help to stabilize the neuronal representation of memory. Further, quantitatively similar results are observed in biological data recorded in vivo. In summary, we have observed that, by regulating the neuronal firing pattern, oscillations affect both the criticality and the stability properties of the network, and thus contribute to memory formation.
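    The branching parameter σ mentioned above is commonly estimated as the average ratio of activity in consecutive time bins: σ ≈ 1 marks the critical point between dying-out (σ < 1) and runaway (σ > 1) cascades. A minimal sketch (Python; a textbook estimator, not necessarily the exact one used in the study):

```python
def branching_parameter(activity):
    """Estimate the avalanche branching parameter sigma.

    activity : spike counts per time bin (one avalanche or a long recording)
    sigma is the mean of n[t+1] / n[t] over bins with n[t] > 0; details of
    binning and averaging vary between studies.
    """
    ratios = [activity[t + 1] / activity[t]
              for t in range(len(activity) - 1) if activity[t] > 0]
    return sum(ratios) / len(ratios) if ratios else float("nan")
```

    A doubling cascade gives σ = 2 (supercritical) and a halving cascade σ = 0.5 (subcritical); criticality analyses track where recorded activity sits relative to σ = 1.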

  17. Properties of spin-1/2 triangular-lattice antiferromagnets CuY2Ge2O8 and CuLa2Ge2O8

    NASA Astrophysics Data System (ADS)

    Cho, Hwanbeom; Kratochvílová, Marie; Sim, Hasung; Choi, Ki-Young; Kim, Choong Hyun; Paulsen, Carley; Avdeev, Maxim; Peets, Darren C.; Jo, Younghun; Lee, Sanghyun; Noda, Yukio; Lawler, Michael J.; Park, Je-Geun

    2017-04-01

    We found new two-dimensional (2D) quantum (S = 1/2) antiferromagnetic systems: CuRE2Ge2O8 (RE = Y and La). According to our analysis of high-resolution x-ray and neutron diffraction experiments, the Cu network of CuRE2Ge2O8 (RE = Y and La) exhibits a 2D triangular lattice linked via weak bonds along the perpendicular b axis. Our bulk characterizations from 0.08 to 400 K show that they undergo a long-range order at 0.51(1) and 1.09(4) K for the Y and La systems, respectively. Interestingly, they also exhibit field-induced phase transitions. For theoretical understanding, we carried out density functional theory (DFT) band calculations to find that they are typical charge-transfer-type insulators with a gap of Eg ≈ 2 eV. Taken together, our observations make CuRE2Ge2O8 (RE = Y and La) additional examples of low-dimensional quantum spin triangular antiferromagnets with low-temperature magnetic ordering.

  18. Guiding neuron development with planar surface gradients of substrate cues deposited using microfluidic devices.

    PubMed

    Millet, Larry J; Stewart, Matthew E; Nuzzo, Ralph G; Gillette, Martha U

    2010-06-21

    Wiring the nervous system relies on the interplay of intrinsic and extrinsic signaling molecules that control neurite extension, neuronal polarity, process maturation and experience-dependent refinement. Extrinsic signals establish and enrich neuron-neuron interactions during development. Understanding how such extrinsic cues direct neurons to establish neural connections in vitro will facilitate the development of organized neural networks for investigating the development and function of nervous system networks. Producing ordered networks of neurons with defined connectivity in vitro presents special technical challenges because the results must be compliant with the biological requirements of rewiring neural networks. Here we demonstrate the ability to form stable, instructive surface-bound gradients of laminin that guide postnatal hippocampal neuron development in vitro. Our work uses a three-channel, interconnected microfluidic device that permits the production of adlayers of planar substrates through the combination of laminar flow, diffusion and physisorption. Through simple flow modifications, a variety of patterns and gradients of laminin (LN) and fluorescein isothiocyanate-conjugated poly-l-lysine (FITC-PLL) were deposited to present neurons with an instructive substratum to guide neuronal development. We present three variations in substrate design that produce distinct growth regimens for postnatal neurons in dispersed cell cultures. In the first approach, diffusion-mediated gradients of LN were formed on cover slips to guide neurons toward increasing LN concentrations. In the second approach, a combined gradient of LN and FITC-PLL was produced using aspiration-driven laminar flow to restrict neuronal growth to a 15 µm-wide growth zone at the center of the two superimposed gradients. 
The last approach demonstrates the capacity to combine binary lines of FITC-PLL in conjunction with surface gradients of LN and bovine serum albumin (BSA) to produce substrate adlayers that provide additional levels of control over growth. This work demonstrates the advantages of spatio-temporal fluid control for patterning surface-bound gradients using a simple microfluidics-based substrate deposition procedure. We anticipate that this microfluidics-based patterning approach will provide instructive patterns and surface-bound gradients to enable a new level of control in guiding neuron development and network formation.

  19. DELTAMETHRIN AND ESFENVALERATE INHIBIT SPONTANEOUS NETWORK ACTIVITY IN RAT CORTICAL NEURONS IN VITRO.

    EPA Science Inventory

    Understanding pyrethroid actions on neuronal networks will help to establish a mode of action for these compounds, which is needed for cumulative risk decisions under the Food Quality Protection Act of 1996. However, pyrethroid effects on spontaneous activity in networks of inter...

  20. Cytokines and cytokine networks target neurons to modulate long-term potentiation.

    PubMed

    Prieto, G Aleph; Cotman, Carl W

    2017-04-01

    Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Cytokines and cytokine networks target neurons to modulate long-term potentiation

    PubMed Central

    Prieto, G. Aleph; Cotman, Carl W.

    2017-01-01

    Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines. PMID:28377062

  2. Network-induced chaos in integrate-and-fire neuronal ensembles.

    PubMed

    Zhou, Douglas; Rangan, Aaditya V; Sun, Yi; Cai, David

    2009-09-01

    It has been shown that a single standard linear integrate-and-fire (IF) neuron under a general time-dependent stimulus cannot possess chaotic dynamics despite the firing-reset discontinuity. Here we address the issue of whether conductance-based, pulse-coupled network interactions can induce chaos in an IF neuronal ensemble. Using numerical methods, we demonstrate that all-to-all, homogeneously pulse-coupled IF neuronal networks can indeed give rise to chaotic dynamics under an external periodic current drive. We also provide a precise characterization of the largest Lyapunov exponent for these high-dimensional nonsmooth dynamical systems. In addition, we present a stable and accurate numerical algorithm for evaluating the largest Lyapunov exponent, which can overcome difficulties encountered by traditional methods for these nonsmooth dynamical systems with degeneracy induced by, e.g., refractoriness of neurons.
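    The largest Lyapunov exponent that the authors compute for nonsmooth IF networks requires a specialized algorithm; the generic two-trajectory (Benettin-style) estimator it builds on can be illustrated on a smooth system (Python; here the logistic map at r = 4, whose exponent is known to be ln 2):

```python
import math

def largest_lyapunov(f, x0, steps=20000, d0=1e-8, transient=100):
    """Benettin-style estimate of the largest Lyapunov exponent of a map f.

    Evolve a reference trajectory x and a perturbed trajectory y,
    renormalizing their separation back to d0 after every step and
    averaging log(growth) over the post-transient steps.
    """
    x, y = x0, x0 + d0
    total = 0.0
    for k in range(transient + steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d == 0.0:                  # trajectories merged in floating point
            d, y = d0, x + d0
        if k >= transient:            # discard the transient
            total += math.log(d / d0)
        y = x + (d0 / d) * (y - x)    # renormalize the separation
    return total / steps
```

    A positive estimate signals chaos; for the nonsmooth, degenerate IF case the paper replaces this naive separation tracking with its stable algorithm, but the logarithmic-growth-rate idea is the same.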

  3. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.

    PubMed

    Kazantsev, V B; Asatryan, S Yu

    2011-09-01

    Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in the basic Hodgkin-Huxley equations describing cell membrane dynamics. When the neuron is clamped near its threshold, a stable rest potential may coexist with a stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations, where neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that bistability may induce spike communication between inhibitorily coupled neurons in a spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate the basic spike phase-locking modes.

  4. Interplay of intrinsic and synaptic conductances in the generation of high-frequency oscillations in interneuronal networks with irregular spiking.

    PubMed

    Baroni, Fabiano; Burkitt, Anthony N; Grayden, David B

    2014-05-01

    High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations in the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a lower rate than population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii)) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i) and ii), respectively), which tend to decrease synchrony. 
If inhibition is shunting instead of hyperpolarizing, post-inhibitory rebound is not elicited and factors i) and ii) dominate, yielding lower synchrony in GIF networks than in IF networks.
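    The contrast between passive (IF) and resonant (GIF) subthreshold dynamics can be made concrete with a two-variable linear model: dV/dt = (−g_L·V − g_1·w)/C and dw/dt = (V − w)/τ_w, with V measured relative to rest. For g_1 > 0 the eigenvalues become complex, so a perturbation rings back through rest as a damped oscillation, the property that shapes the IPSP rebound discussed above; g_1 = 0 recovers the IF neuron's monotonic decay. A sketch with illustrative, unfitted parameters (Python; not the paper's calibrated model):

```python
def gif_response(g_L=0.1, g_1=0.3, tau_w=10.0, C=1.0,
                 dt=0.01, t_max=200.0, v0=1.0):
    """Subthreshold voltage response to an initial perturbation v0.

    dV/dt = (-g_L*V - g_1*w) / C    (membrane, relative to rest)
    dw/dt = (V - w) / tau_w         (slow restorative current)

    g_1 > 0: complex eigenvalues, damped subthreshold oscillation (GIF).
    g_1 = 0: passive membrane, monotonic decay (IF).
    Forward-Euler integration; all parameter values are illustrative.
    """
    V, w = v0, 0.0
    trace = []
    for _ in range(int(t_max / dt)):
        dV = (-g_L * V - g_1 * w) / C
        dw = (V - w) / tau_w
        V += dt * dV
        w += dt * dw
        trace.append(V)
    return trace
```

    The diagnostic difference is the undershoot: the GIF trace dips below rest before settling, which is what lets hyperpolarizing IPSPs elicit post-inhibitory rebound, while the passive trace never crosses zero.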

  5. Interplay of Intrinsic and Synaptic Conductances in the Generation of High-Frequency Oscillations in Interneuronal Networks with Irregular Spiking

    PubMed Central

    Baroni, Fabiano; Burkitt, Anthony N.; Grayden, David B.

    2014-01-01

    High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations in the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a lower rate than population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii)) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i) and ii), respectively), which tend to decrease synchrony. 
If inhibition is shunting instead of hyperpolarizing, post-inhibitory rebound is not elicited and factors i) and ii) dominate, yielding lower synchrony in GIF networks than in IF networks. PMID:24784237

  6. Signaling in large-scale neural networks.

    PubMed

    Berg, Rune W; Hounsgaard, Jørn

    2009-02-01

    We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.

  7. A Small World of Neuronal Synchrony

    PubMed Central

    Yu, Shan; Huang, Debin; Singer, Wolf

    2008-01-01

    A small-world network has been suggested to be an efficient solution for achieving both modular and global processing—a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy. This allowed us to assess the interactions by measuring pairwise correlations and to assess the strength of coupling from the degree of synchrony. Visual responses were recorded in visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and, therefore, provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties; the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of “hubs” in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding. PMID:18400792
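    The two quantities that define the small-world comparison above, average shortest path length and clustering coefficient, are straightforward to compute on a reconstructed interaction graph. A minimal sketch (Python; unweighted, undirected adjacency sets, whereas the Ising-based networks in the study carry coupling strengths):

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

def clustering_coefficient(adj):
    """Mean local clustering: fraction of each node's neighbor pairs
    that are themselves connected."""
    cs = []
    for u in adj:
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)
```

    A small-world network is one whose path length is close to that of a degree-matched random graph while its clustering coefficient is much larger, which is the pattern the recorded cortical networks showed.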

  8. Three-dimensional spatial modeling of spines along dendritic networks in human cortical pyramidal neurons

    PubMed Central

    Larrañaga, Pedro; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Bielza, Concha

    2017-01-01

    We modeled spine distribution along the dendritic networks of pyramidal neurons in both basal and apical dendrites. To do this, we applied network spatial analysis because spines can only lie on the dendritic shaft. We expanded the existing 2D computational techniques for spatial analysis along networks to perform a 3D network spatial analysis. We analyzed five detailed reconstructions of adult human pyramidal neurons of the temporal cortex with a total of more than 32,000 spines. We confirmed that there is a spatial variation in spine density that is dependent on the distance to the cell body in all dendrites. Considering the dendritic arborizations of each pyramidal cell as a group of instances of the same observation (the neuron), we used replicated point patterns together with network spatial analysis for the first time to search for significant differences in the spine distribution of basal dendrites between different cells and between all the basal and apical dendrites. To do this, we used a recent variant of Ripley’s K function defined to work along networks. The results showed that there were no significant differences in spine distribution along basal arbors of the same neuron and along basal arbors of different pyramidal neurons. This suggests that dendritic spine distribution in basal dendritic arbors adheres to common rules. However, we did find significant differences in spine distribution along basal versus apical networks. Therefore, not only do apical and basal dendritic arborizations have distinct morphologies but they also obey different rules of spine distribution. Specifically, the results suggested that spines are more clustered along apical than in basal dendrites. Collectively, the results further highlighted that synaptic input information processing is different between these two dendritic domains. PMID:28662210
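    The network variant of Ripley's K function used above generalizes the classical statistic by measuring distances along the dendritic shaft rather than through 3D space. As a hedged illustration (Python), the single-branch special case reduces to a naive 1D K function over spine positions along one dendrite, with no edge correction; under complete spatial randomness K(r) ≈ 2r, and clustered spines push K above that line:

```python
def ripley_k_1d(positions, r, branch_length):
    """Naive 1D Ripley's K along a single dendritic branch.

    positions     : spine locations along the shaft (same unit as r)
    r             : search radius measured along the branch
    branch_length : total branch length (intensity normalization)
    No edge correction; illustrative only.
    """
    n = len(positions)
    close_pairs = sum(
        1
        for i in range(n)
        for j in range(n)
        if i != j and abs(positions[i] - positions[j]) <= r
    )
    return branch_length * close_pairs / (n * n)
```

    Comparing K(r) against the 2r baseline at several radii is the 1D analogue of the clustered-versus-random comparison the study performs along whole basal and apical arbors.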

  9. Three-dimensional spatial modeling of spines along dendritic networks in human cortical pyramidal neurons.

    PubMed

    Anton-Sanchez, Laura; Larrañaga, Pedro; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Bielza, Concha

    2017-01-01

    We modeled spine distribution along the dendritic networks of pyramidal neurons in both basal and apical dendrites. To do this, we applied network spatial analysis because spines can only lie on the dendritic shaft. We expanded the existing 2D computational techniques for spatial analysis along networks to perform a 3D network spatial analysis. We analyzed five detailed reconstructions of adult human pyramidal neurons of the temporal cortex with a total of more than 32,000 spines. We confirmed that there is a spatial variation in spine density that is dependent on the distance to the cell body in all dendrites. Considering the dendritic arborizations of each pyramidal cell as a group of instances of the same observation (the neuron), we used replicated point patterns together with network spatial analysis for the first time to search for significant differences in the spine distribution of basal dendrites between different cells and between all the basal and apical dendrites. To do this, we used a recent variant of Ripley's K function defined to work along networks. The results showed that there were no significant differences in spine distribution along basal arbors of the same neuron and along basal arbors of different pyramidal neurons. This suggests that dendritic spine distribution in basal dendritic arbors adheres to common rules. However, we did find significant differences in spine distribution along basal versus apical networks. Therefore, not only do apical and basal dendritic arborizations have distinct morphologies but they also obey different rules of spine distribution. Specifically, the results suggested that spines are more clustered along apical than in basal dendrites. Collectively, the results further highlighted that synaptic input information processing is different between these two dendritic domains.
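    The network variant of Ripley's K function used above constrains the point pattern to the dendritic shaft; for a single branch it reduces to a one-dimensional estimator over path distances. The sketch below shows that 1D reduction only; the function name and normalization are illustrative, not the estimator from the study.

```python
import numpy as np

def linear_network_k(positions, length, radii):
    """Naive 1D analog of Ripley's K along a single dendritic branch.

    positions: spine locations (path distance from the branch origin) on a
    branch of the given length. For each radius r, counts ordered pairs of
    spines within path distance r, normalized as in the standard estimator.
    """
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    d = np.abs(positions[:, None] - positions[None, :])  # pairwise path distances
    k = []
    for r in radii:
        pairs = (d <= r).sum() - n  # drop the n self-pairs on the diagonal
        k.append(length * pairs / (n * (n - 1)))
    return np.array(k)

# Clustered spines yield a larger K at small radii than evenly spaced ones.
clustered = [1.0, 1.1, 1.2, 8.0, 8.1, 8.2]
uniform = [1.0, 2.8, 4.6, 6.4, 8.2, 10.0]
k_clust = linear_network_k(clustered, 10.0, [0.5])
k_unif = linear_network_k(uniform, 10.0, [0.5])
```

    Comparing the observed K to the K of simulated uniform patterns on the same branch is what turns this into a test for clustering versus regularity.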

  10. Response sensitivity of barrel neuron subpopulations to simulated thalamic input.

    PubMed

    Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J

    2010-06-01

    Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.

  11. Validation of long-term primary neuronal cultures and network activity through the integration of reversibly bonded microbioreactors and MEA substrates.

    PubMed

    Biffi, Emilia; Menegon, Andrea; Piraino, Francesco; Pedrocchi, Alessandra; Fiore, Gianfranco B; Rasponi, Marco

    2012-01-01

    In vitro recording of neuronal electrical activity is a widely used technique to understand brain functions and to study the effect of drugs on the central nervous system. The integration of microfluidic devices with microelectrode arrays (MEAs) enables the recording of network activity in a controlled microenvironment. In this work, an integrated microfluidic system for neuronal cultures was developed, reversibly coupling a PDMS microfluidic device with a commercial flat MEA through magnetic forces. Neurons from mouse embryos were cultured in a 100 µm channel and their activity was followed for up to 18 days in vitro. The maturation of the networks and their morphological and functional characteristics were comparable with those of networks cultured in macro-environments and described in the literature. In this work, we successfully demonstrated the long-term culture of primary neuronal cells in a reversibly bonded microfluidic device (based on magnetism), which will be fundamental for neuropharmacological studies. Copyright © 2011 Wiley Periodicals, Inc.

  12. Importance of being Nernst: Synaptic activity and functional relevance in stem cell-derived neurons

    PubMed Central

    Bradford, Aaron B; McNutt, Patrick M

    2015-01-01

    Functional synaptogenesis and network emergence are signature endpoints of neurogenesis. These behaviors provide higher-order confirmation that biochemical and cellular processes necessary for neurotransmitter release, post-synaptic detection and network propagation of neuronal activity have been properly expressed and coordinated among cells. The development of synaptic neurotransmission can therefore be considered a defining property of neurons. Although dissociated primary neuron cultures readily form functioning synapses and network behaviors in vitro, continuously cultured neurogenic cell lines have historically failed to meet these criteria. Therefore, in vitro-derived neuron models that develop synaptic transmission are critically needed for a wide array of studies, including molecular neuroscience, developmental neurogenesis, disease research and neurotoxicology. Over the last decade, neurons derived from various stem cell lines have shown varying ability to develop into functionally mature neurons. In this review, we will discuss the neurogenic potential of various stem cell populations, addressing strengths and weaknesses of each, with particular attention to the emergence of functional behaviors. We will propose methods to functionally characterize new stem cell-derived neuron (SCN) platforms to improve their reliability as physiologically relevant models. Finally, we will review how synaptically active SCNs can be applied to accelerate research in a variety of areas. Ultimately, emphasizing the critical importance of synaptic activity and network responses as a marker of neuronal maturation is anticipated to result in in vitro findings that better translate to efficacious clinical treatments. PMID:26240679

  13. Color encoding in biologically-inspired convolutional neural networks.

    PubMed

    Rafegas, Ivet; Vanrell, Maria

    2018-05-11

    Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. We do this by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a pair of colors. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but these are reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence from primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations. Copyright © 2018 Elsevier Ltd. All rights reserved.
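    A per-neuron color selectivity index of the kind described can be sketched as a simple tuning index over a unit's responses to hue-varied versions of the same stimulus. The (max − min)/(max + min) form below is a generic, illustrative choice, not the exact index defined in the paper.

```python
import numpy as np

def color_selectivity_index(activations):
    """Selectivity of one unit across a set of hue-varied stimuli.

    activations: the unit's non-negative responses to the same image
    rendered in different hues. An index near 1 means the unit responds
    to few hues (color selective); near 0 means it responds to all hues
    about equally (color unselective).
    """
    a = np.asarray(activations, dtype=float)
    return (a.max() - a.min()) / (a.max() + a.min() + 1e-12)

selective = color_selectivity_index([0.9, 0.05, 0.02, 0.01])   # near 1
unselective = color_selectivity_index([0.5, 0.5, 0.5, 0.5])    # near 0
```

    Thresholding this index over all units in a layer gives the layer-wise counts of color selective neurons the abstract refers to.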

  14. Identification of neuron-related genes for cell therapy of neurological disorders by network analysis.

    PubMed

    Su, Li-Ning; Song, Xiao-Qing; Wei, Hui-Ping; Yin, Hai-Feng

    Bone mesenchymal stem cells (BMSCs) differentiated into neurons have been widely proposed for use in cell therapy of many neurological disorders. It is therefore important to understand the molecular mechanisms underlying this differentiation. We screened differentially expressed genes between immature neural tissues and untreated BMSCs to identify the genes responsible for neuronal differentiation from BMSCs. GSE68243 gene microarray data of rat BMSCs and GSE18860 gene microarray data of rat neurons were obtained from the Gene Expression Omnibus database. Transcriptome Analysis Console software showed that 1248 genes were up-regulated and 1273 were down-regulated in neurons compared with BMSCs. Gene Ontology functional enrichment, protein-protein interaction networks, functional modules, and hub genes were analyzed using DAVID, STRING 10, the BiNGO tool, and Network Analyzer software, revealing that nine hub genes, Nrcam, Sema3a, Mapk8, Dlg4, Slit1, Creb1, Ntrk2, Cntn2, and Pax6, may play a pivotal role in neuronal differentiation from BMSCs. Seven genes, Dcx, Nrcam, Sema3a, Cntn2, Slit1, Ephb1, and Pax6, were shown to be hub nodes within the neuronal development network, while six genes, Fgf2, Tgfβ1, Vegfa, Serpine1, Il6, and Stat1, appeared to play an important role in suppressing neuronal differentiation. However, additional studies are required to confirm these results.
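    Hub genes in analyses like this are typically the highest-degree nodes of the protein-protein interaction network. A minimal degree-ranking sketch follows; the edge list is a made-up toy that reuses gene names from the abstract and is not the study's actual network.

```python
from collections import Counter

def hub_nodes(edges, top_n=3):
    """Rank nodes of a protein-protein interaction network by degree.

    edges: (gene_a, gene_b) interaction pairs. Degree is the simplest hub
    measure computed by tools like Network Analyzer; weighted or
    betweenness-based variants are common refinements.
    """
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return [gene for gene, _ in deg.most_common(top_n)]

# Toy network: Pax6 interacts with three partners, so it ranks first.
edges = [("Pax6", "Nrcam"), ("Pax6", "Slit1"), ("Pax6", "Cntn2"),
         ("Slit1", "Nrcam"), ("Dlg4", "Mapk8")]
hubs = hub_nodes(edges, top_n=1)
```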

  15. Reciprocal cholinergic and GABAergic modulation of the small ventrolateral pacemaker neurons of Drosophila's circadian clock neuron network.

    PubMed

    Lelito, Katherine R; Shafer, Orie T

    2012-04-01

    The relatively simple clock neuron network of Drosophila is a valuable model system for the neuronal basis of circadian timekeeping. Unfortunately, many key neuronal classes of this network are inaccessible to electrophysiological analysis. We have therefore adopted the use of genetically encoded sensors to address the physiology of the fly's circadian clock network. Using genetically encoded Ca(2+) and cAMP sensors, we have investigated the physiological responses of two specific classes of clock neuron, the large and small ventrolateral neurons (l- and s-LN(v)s), to two neurotransmitters implicated in their modulation: acetylcholine (ACh) and γ-aminobutyric acid (GABA). Live imaging of l-LN(v) cAMP and Ca(2+) dynamics in response to cholinergic agonist and GABA application were well aligned with published electrophysiological data, indicating that our sensors were capable of faithfully reporting acute physiological responses to these transmitters within single adult clock neuron soma. We extended these live imaging methods to s-LN(v)s, critical neuronal pacemakers whose physiological properties in the adult brain are largely unknown. Our s-LN(v) experiments revealed the predicted excitatory responses to bath-applied cholinergic agonists and the predicted inhibitory effects of GABA and established that the antagonism of ACh and GABA extends to their effects on cAMP signaling. These data support recently published but physiologically untested models of s-LN(v) modulation and lead to the prediction that cholinergic and GABAergic inputs to s-LN(v)s will have opposing effects on the phase and/or period of the molecular clock within these critical pacemaker neurons.

  16. Regulatory Mechanisms Controlling Maturation of Serotonin Neuron Identity and Function

    PubMed Central

    Spencer, William C.; Deneris, Evan S.

    2017-01-01

    The brain serotonin (5-hydroxytryptamine; 5-HT) system has been extensively studied for its role in normal physiology and behavior, as well as neuropsychiatric disorders. The broad influence of 5-HT on brain function is in part due to the vast connectivity pattern of 5-HT-producing neurons throughout the CNS. 5-HT neurons are born and terminally specified midway through embryogenesis, then enter a protracted period of maturation, where they functionally integrate into CNS circuitry and then are maintained throughout life. The transcriptional regulatory networks controlling progenitor cell generation and terminal specification of 5-HT neurons are relatively well-understood, yet the factors controlling 5-HT neuron maturation are only recently coming to light. In this review, we first provide an update on the regulatory network controlling 5-HT neuron development, then delve deeper into the properties and regulatory strategies governing 5-HT neuron maturation. In particular, we discuss the role of the 5-HT neuron terminal selector transcription factor (TF) Pet-1 as a key regulator of 5-HT neuron maturation. Pet-1 was originally shown to positively regulate genes needed for 5-HT synthesis, reuptake and vesicular transport, hence 5-HT neuron-type transmitter identity. It has now been shown to regulate, both positively and negatively, many other categories of genes in 5-HT neurons including ion channels, GPCRs, transporters, neuropeptides, and other transcription factors. Its function as a terminal selector results in the maturation of 5-HT neuron excitability, firing characteristics, and synaptic modulation by several neurotransmitters. Furthermore, there is a temporal requirement for Pet-1 in the control of postmitotic gene expression trajectories, thus indicating a direct role in 5-HT neuron maturation.
Proper regulation of the maturation of cellular identity is critical for normal neuronal functioning and perturbations in the gene regulatory networks controlling these processes may result in long-lasting changes in brain function in adulthood. Further study of 5-HT neuron gene regulatory networks is likely to provide additional insight into how neurons acquire their mature identities and how terminal selector-type TFs function in postmitotic vertebrate neurons. PMID:28769770

  17. Regulatory Mechanisms Controlling Maturation of Serotonin Neuron Identity and Function.

    PubMed

    Spencer, William C; Deneris, Evan S

    2017-01-01

    The brain serotonin (5-hydroxytryptamine; 5-HT) system has been extensively studied for its role in normal physiology and behavior, as well as neuropsychiatric disorders. The broad influence of 5-HT on brain function is in part due to the vast connectivity pattern of 5-HT-producing neurons throughout the CNS. 5-HT neurons are born and terminally specified midway through embryogenesis, then enter a protracted period of maturation, where they functionally integrate into CNS circuitry and then are maintained throughout life. The transcriptional regulatory networks controlling progenitor cell generation and terminal specification of 5-HT neurons are relatively well-understood, yet the factors controlling 5-HT neuron maturation are only recently coming to light. In this review, we first provide an update on the regulatory network controlling 5-HT neuron development, then delve deeper into the properties and regulatory strategies governing 5-HT neuron maturation. In particular, we discuss the role of the 5-HT neuron terminal selector transcription factor (TF) Pet-1 as a key regulator of 5-HT neuron maturation. Pet-1 was originally shown to positively regulate genes needed for 5-HT synthesis, reuptake and vesicular transport, hence 5-HT neuron-type transmitter identity. It has now been shown to regulate, both positively and negatively, many other categories of genes in 5-HT neurons including ion channels, GPCRs, transporters, neuropeptides, and other transcription factors. Its function as a terminal selector results in the maturation of 5-HT neuron excitability, firing characteristics, and synaptic modulation by several neurotransmitters. Furthermore, there is a temporal requirement for Pet-1 in the control of postmitotic gene expression trajectories, thus indicating a direct role in 5-HT neuron maturation.
Proper regulation of the maturation of cellular identity is critical for normal neuronal functioning and perturbations in the gene regulatory networks controlling these processes may result in long-lasting changes in brain function in adulthood. Further study of 5-HT neuron gene regulatory networks is likely to provide additional insight into how neurons acquire their mature identities and how terminal selector-type TFs function in postmitotic vertebrate neurons.

  18. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    NASA Astrophysics Data System (ADS)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators now makes it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge, ultrasensitive optogenetic molecular sensors for imaging neuronal activity with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse performed a memory task, to investigate how those individual neurons relate to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blasted mice showed a consistent change in neural network activity. A small portion of neurons showed a sustained calcium increase for an extended period, whereas the majority lost their activity. 
    Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to Parkinson's disease. We found that inhibition of the motor cortex does not alter the exaggerated beta oscillations in the striatum that are associated with parkinsonism. Together, these results demonstrate the potential of an integrated optogenetic system to advance our understanding of the principles underlying neural network computation, which would have broad applications from advancing artificial intelligence to disease diagnosis and treatment.

  19. Network inference from functional experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

    Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy makes it possible to observe the calcium or electrical activity of thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other, more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method was determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental work, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. 
    We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and in epileptic conditions.
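    Of the inference methods compared, lagged cross-correlation is the simplest. A hedged sketch of a cross-correlation-based, directed connectivity estimate on fluorescence traces (the threshold and lag range are arbitrary illustrative choices, not values from the study):

```python
import numpy as np

def xcorr_connectivity(traces, max_lag=5, threshold=0.5):
    """Directed functional connectivity from pairwise lagged cross-correlation.

    traces: (n_neurons, n_samples) fluorescence traces. An edge i -> j is
    drawn when the correlation of trace j against a positively lagged copy
    of trace i exceeds the threshold. Being linear, this misses the
    nonlinear couplings that transfer entropy can capture.
    """
    n = traces.shape[0]
    z = (traces - traces.mean(axis=1, keepdims=True)) / traces.std(axis=1, keepdims=True)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            best = max(np.mean(z[i, :-lag] * z[j, lag:])
                       for lag in range(1, max_lag + 1))
            adj[i, j] = best > threshold
    return adj

# Toy check: neuron 1's trace follows neuron 0's with a 2-sample delay.
rng = np.random.default_rng(0)
a = rng.standard_normal(500)
traces = np.vstack([a, np.roll(a, 2) + 0.1 * rng.standard_normal(500)])
adj = xcorr_connectivity(traces)
```

    On real recordings the threshold is usually calibrated against surrogate (time-shuffled) traces rather than fixed a priori.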

  20. Memory formation orchestrates the wiring of adult-born hippocampal neurons into brain circuits.

    PubMed

    Petsophonsakul, Petnoi; Richetin, Kevin; Andraini, Trinovita; Roybon, Laurent; Rampon, Claire

    2017-08-01

    During memory formation, structural rearrangements of dendritic spines provide a means to durably modulate synaptic connectivity within neuronal networks. New neurons generated throughout adult life in the dentate gyrus of the hippocampus contribute to learning and memory. As these neurons become incorporated into the network, they generate huge numbers of new connections that modify hippocampal circuitry and functioning. However, it remains unclear how the dynamic process of memory formation influences their synaptic integration into neuronal circuits. New memories are established according to a multistep process during which new information is first acquired and then consolidated to form a stable memory trace. Upon recall, memory is transiently destabilized and vulnerable to modification. Using contextual fear conditioning, we found that learning was associated with an acceleration of dendritic spine formation in adult-born neurons, and that spine connectivity becomes strengthened after memory consolidation. Moreover, we observed that afferent connectivity onto adult-born neurons is enhanced after memory retrieval, while extinction training induces a change in spine shapes. Together, these findings reveal that the neuronal activity supporting memory processes strongly influences the structural dendritic integration of adult-born neurons into pre-existing neuronal circuits. Such changes in afferent connectivity are likely to impact the overall wiring of the hippocampal network and, consequently, to regulate hippocampal function.

  1. Evolvable Neuronal Paths: A Novel Basis for Information and Search in the Brain

    PubMed Central

    Fernando, Chrisantha; Vasas, Vera; Szathmáry, Eörs; Husbands, Phil

    2011-01-01

    We propose a previously unrecognized kind of informational entity in the brain that is capable of acting as the basis for unlimited hereditary variation in neuronal networks. This unit is a path of activity through a network of neurons, analogous to a path taken through a hidden Markov model. To prove in principle the capabilities of this new kind of informational substrate, we show how a population of paths can be used as the hereditary material for a neuronally implemented genetic algorithm (the Swiss-army knife of black-box optimization techniques), which we have proposed elsewhere could operate at somatic timescales in the brain. We compare this to the same genetic algorithm that uses a standard ‘genetic’ informational substrate, i.e., non-overlapping discrete genotypes, on a range of optimization problems. A path evolution algorithm (PEA) is defined as any algorithm that implements natural selection of paths in a network substrate. A PEA is a previously unrecognized type of natural selection that is well suited for implementation by biological neuronal networks with structural plasticity. The important similarities and differences between a standard genetic algorithm and a PEA are considered. Whilst most experiments are conducted on an abstract network model, at the conclusion of the paper a slightly more realistic neuronal implementation of a PEA is outlined based on Izhikevich spiking neurons. Finally, experimental predictions are made for the identification of such informational paths in the brain. PMID:21887266
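    The selection-mutation loop at the heart of a path evolution algorithm can be sketched in a few lines. This toy version on an abstract directed graph is only illustrative of selection acting on paths; it is not the authors' neuronal implementation, and the graph, fitness function, and parameters are made up.

```python
import random

def evolve_paths(adj, start, goal, fitness, pop_size=20, generations=50, seed=1):
    """Minimal path-evolution sketch: a population of random walks from
    `start` is repeatedly selected (top half by fitness) and mutated
    (keep a prefix, regrow the rest), so fitter paths come to dominate.
    """
    rng = random.Random(seed)

    def grow(path, node):
        while node != goal and len(path) < 20:   # cap walk length
            node = rng.choice(adj[node])
            path.append(node)
        return path

    def mutate(path):
        cut = rng.randrange(len(path))           # keep a prefix, regrow the rest
        return grow(path[: cut + 1], path[cut])

    pop = [grow([start], start) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[: pop_size // 2]               # selection
        pop += [mutate(rng.choice(pop)) for _ in range(pop_size - len(pop))]
    return max(pop, key=fitness)

# Reward short paths that actually reach the goal node.
adj = {0: [1, 2], 1: [3, 0], 2: [3, 1], 3: [3]}
best = evolve_paths(adj, 0, 3, lambda p: (p[-1] == 3) * 10 - len(p))
```

    Replacing the abstract graph with a spiking network, and mutation with activity-dependent rerouting, is the step the paper outlines with Izhikevich neurons.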

  2. Simple and Inexpensive Paper-Based Astrocyte Co-culture to Improve Survival of Low-Density Neuronal Networks

    PubMed Central

    Aebersold, Mathias J.; Thompson-Steckel, Greta; Joutang, Adriane; Schneider, Moritz; Burchert, Conrad; Forró, Csaba; Weydert, Serge; Han, Hana; Vörös, János

    2018-01-01

    Bottom-up neuroscience aims to engineer well-defined networks of neurons to investigate the functions of the brain. By reducing the complexity of the brain to achievable target questions, such in vitro bioassays better control experimental variables and can serve as a versatile tool for fundamental and pharmacological research. Astrocytes are a cell type critical to neuronal function, and the addition of astrocytes to neuron cultures can improve the quality of in vitro assays. Here, we present cellulose as an astrocyte culture substrate. Astrocytes cultured on the cellulose fiber matrix thrived and formed a dense 3D network. We devised a novel co-culture platform by suspending the easy-to-handle astrocytic paper cultures above neuronal networks of low densities typically needed for bottom-up neuroscience. There was significant improvement in neuronal viability after 5 days in vitro at densities ranging from 50,000 cells/cm2 down to isolated cells at 1,000 cells/cm2. Cultures exhibited spontaneous spiking even at the very low densities, with a significantly greater spike frequency per cell compared to control mono-cultures. Applying the co-culture platform to an engineered network of neurons on a patterned substrate resulted in significantly improved viability and almost doubled the density of live cells. Lastly, the shape of the cellulose substrate can easily be customized to a wide range of culture vessels, making the platform versatile for different applications that will further enable research in bottom-up neuroscience and drug development. PMID:29535595

  3. Barreloid Borders and Neuronal Activity Shape Panglial Gap Junction-Coupled Networks in the Mouse Thalamus.

    PubMed

    Claus, Lena; Philippot, Camille; Griemsmann, Stephanie; Timmermann, Aline; Jabs, Ronald; Henneberger, Christian; Kettenmann, Helmut; Steinhäuser, Christian

    2018-01-01

    The ventral posterior nucleus of the thalamus plays an important role in somatosensory information processing. It contains elongated cellular domains called barreloids, which are the structural basis for the somatotopic organization of vibrissae representation. So far, the organization of glial networks in these barreloid structures and its modulation by neuronal activity has not been studied. We have developed a method to visualize thalamic barreloid fields in acute slices. Combining electrophysiology, immunohistochemistry, and electroporation in transgenic mice with cell type-specific fluorescence labeling, we provide the first structure-function analyses of barreloidal glial gap junction networks. We observed coupled networks, which comprised both astrocytes and oligodendrocytes. The spread of tracers or a fluorescent glucose derivative through these networks was dependent on neuronal activity and limited by the barreloid borders, which were formed by uncoupled or weakly coupled oligodendrocytes. Neuronal somata were distributed homogeneously across barreloid fields with their processes running in parallel to the barreloid borders. Many astrocytes and oligodendrocytes were not part of the panglial networks. Thus, oligodendrocytes are the cellular elements limiting the communicating panglial network to a single barreloid, which might be important to ensure proper metabolic support to active neurons located within a particular vibrissae signaling pathway. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Linking Resting-State Networks in the Prefrontal Cortex to Executive Function: A Functional Near Infrared Spectroscopy Study.

    PubMed

    Zhao, Jia; Liu, Jiangang; Jiang, Xin; Zhou, Guifei; Chen, Guowei; Ding, Xiao P; Fu, Genyue; Lee, Kang

    2016-01-01

    Executive function (EF) plays vital roles in our everyday adaptation to the ever-changing environment. However, limited existing studies have linked EF to the resting-state brain activity. The functional connectivity in the resting state between the sub-regions of the brain can reveal the intrinsic neural mechanisms involved in cognitive processing of EF without disturbance from external stimuli. The present study investigated the relations between the behavioral executive function (EF) scores and the resting-state functional network topological properties in the Prefrontal Cortex (PFC). We constructed complex brain functional networks in the PFC from 90 healthy young adults using functional near infrared spectroscopy (fNIRS). We calculated the correlations between the typical network topological properties (regional topological properties and global topological properties) and the scores of both the Total EF and components of EF measured by computer-based Cambridge Neuropsychological Test Automated Battery (CANTAB). We found that the Total EF scores were positively correlated with regional properties in the right dorsal superior frontal gyrus (SFG), whereas the opposite pattern was found in the right triangular inferior frontal gyrus (IFG). Different EF components were related to different regional properties in various PFC areas, such as planning in the right middle frontal gyrus (MFG), working memory mainly in the right MFG and triangular IFG, short-term memory in the left dorsal SFG, and task switch in the right MFG. In contrast, there were no significant findings for global topological properties. Our findings suggested that the PFC plays an important role in individuals' behavioral performance in the executive function tasks. Further, the resting-state functional network can reveal the intrinsic neural mechanisms involved in behavioral EF abilities.
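    Regional topological properties of the kind correlated with EF scores here (e.g., nodal degree and local clustering coefficient) can be computed from a thresholded channel-correlation matrix. The sketch below uses an arbitrary threshold and a tiny toy matrix; it illustrates the measures, not the study's pipeline.

```python
import numpy as np

def regional_properties(corr, threshold=0.3):
    """Regional network properties from a channel-by-channel correlation matrix.

    Thresholds the absolute correlations into a binary undirected graph, then
    returns each node's degree and local clustering coefficient -- two common
    regional topological measures in resting-state network analysis.
    """
    a = np.abs(corr) > threshold
    np.fill_diagonal(a, False)
    n = a.shape[0]
    degree = a.sum(axis=1)
    clustering = np.zeros(n)
    for v in range(n):
        nbrs = np.flatnonzero(a[v])
        k = len(nbrs)
        if k < 2:
            continue
        links = a[np.ix_(nbrs, nbrs)].sum() / 2      # edges among neighbors
        clustering[v] = 2 * links / (k * (k - 1))
    return degree, clustering

# Toy matrix: three mutually correlated channels plus one isolated channel.
corr = np.array([
    [1.0, 0.8, 0.7, 0.0],
    [0.8, 1.0, 0.9, 0.0],
    [0.7, 0.9, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
degree, clustering = regional_properties(corr)
```

    Global properties (characteristic path length, global efficiency) are computed from the same binary graph, which is why the two families of measures can diverge in their behavioral correlations.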

  5. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    PubMed

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2018-06-01

This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network, where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general, in this paper we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of ΣΔ (sigma-delta) modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encode its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  6. Macroscopic phase-resetting curves for spiking neural networks

    NASA Astrophysics Data System (ADS)

    Dumont, Grégory; Ermentrout, G. Bard; Gutkin, Boris

    2017-10-01

    The study of brain rhythms is an open-ended, and challenging, subject of interest in neuroscience. One of the best tools for the understanding of oscillations at the single neuron level is the phase-resetting curve (PRC). Synchronization in networks of neurons, effects of noise on the rhythms, effects of transient stimuli on the ongoing rhythmic activity, and many other features can be understood by the PRC. However, most macroscopic brain rhythms are generated by large populations of neurons, and so far it has been unclear how the PRC formulation can be extended to these more common rhythms. In this paper, we describe a framework to determine a macroscopic PRC (mPRC) for a network of spiking excitatory and inhibitory neurons that generate a macroscopic rhythm. We take advantage of a thermodynamic approach combined with a reduction method to simplify the network description to a small number of ordinary differential equations. From this simplified but exact reduction, we can compute the mPRC via the standard adjoint method. Our theoretical findings are illustrated with and supported by numerical simulations of the full spiking network. Notably our mPRC framework allows us to predict the difference between effects of transient inputs to the excitatory versus the inhibitory neurons in the network.
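The adjoint method used above requires access to the reduced model's equations; for a single model neuron the same object can be estimated by the direct method, perturbing the cell at different phases and measuring the resulting spike-time shift. A minimal sketch for a theta neuron (a standard reduced spiking model; the constants and function names here are illustrative, not from the paper):

```python
import math

def spike_time(I=0.25, kick=0.0, kick_time=None, dt=1e-4):
    """Integrate the theta neuron dtheta/dt = (1 - cos theta) + (1 + cos theta)*I
    from theta = -pi until the phase crosses +pi (a spike). An optional
    instantaneous current kick is applied once at kick_time."""
    theta, t = -math.pi, 0.0
    kicked = kick_time is None
    while theta < math.pi:
        if not kicked and t >= kick_time:
            theta += kick * (1.0 + math.cos(theta))  # a pulse acts via (1 + cos)
            kicked = True
        theta += dt * ((1.0 - math.cos(theta)) + (1.0 + math.cos(theta)) * I)
        t += dt
    return t

T0 = spike_time()  # unperturbed period; analytically pi / sqrt(I)

def prc(phase_fraction, kick=0.05):
    """Phase advance (in units of the period) caused by a small excitatory kick."""
    T1 = spike_time(kick=kick, kick_time=phase_fraction * T0)
    return (T0 - T1) / T0
```

For an excitatory kick the theta neuron's advance is non-negative everywhere and largest in mid-cycle, where the (1 + cos theta) sensitivity peaks; the mPRC framework generalizes exactly this kind of curve from a single cell to a population rhythm.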

  7. Back to Pupillometry: How Cortical Network State Fluctuations Tracked by Pupil Dynamics Could Explain Neural Signal Variability in Human Cognitive Neuroscience

    PubMed Central

    2017-01-01

    Abstract The mammalian thalamocortical system generates intrinsic activity reflecting different states of excitability, arising from changes in the membrane potentials of underlying neuronal networks. Fluctuations between these states occur spontaneously, regularly, and frequently throughout awake periods and influence stimulus encoding, information processing, and neuronal and behavioral responses. Changes of pupil size have recently been identified as a reliable marker of underlying neuronal membrane potential and thus can encode associated network state changes in rodent cortex. This suggests that pupillometry, a ubiquitous measure of pupil dilation in cognitive neuroscience, could be used as an index for network state fluctuations also for human brain signals. Considering this variable may explain task-independent variance in neuronal and behavioral signals that were previously disregarded as noise. PMID:29379876

  8. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons.

    PubMed

    Zerlaut, Yann; Chemla, Sandrine; Chavane, Frederic; Destexhe, Alain

    2018-02-01

Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at macroscopic scales. Since for each pixel VSDi signals report the average membrane potential over hundreds of neurons, it seems natural to use a mean-field formalism to model such signals. Here, we present a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based synaptic interactions. We study a network of regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism, together with a semi-analytic approach to the transfer function of AdEx neurons, to describe the average dynamics of the coupled populations. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the analytical description. Second, we investigate the response of the network to time-varying external input, and show that the mean-field model predicts the response time course of the population. Finally, to model VSDi signals, we consider a one-dimensional ring model made of interconnected RS-FS mean-field units. We find that this model can reproduce the spatio-temporal patterns seen in VSDi of awake monkey visual cortex in response to local and transient visual stimuli. Conversely, we show that the model allows one to infer physiological parameters from the experimentally recorded spatio-temporal patterns.
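The AdEx model underlying this mean-field description combines an exponential spike-initiation term with a slow adaptation current. A minimal single-neuron Euler integration, using standard textbook parameter values rather than the paper's fitted RS/FS ones, can be sketched as:

```python
import math

def simulate_adex(I=800e-12, t_max=1.0, dt=1e-5):
    """Adaptive exponential integrate-and-fire neuron (Brette-Gerstner form):
    C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I
    tau_w dw/dt = a(V-EL) - w;  on spike: V -> Vr, w += b.
    Returns the list of spike times (s)."""
    C, gL, EL = 281e-12, 30e-9, -70.6e-3
    VT, DT, Vpeak, Vr = -50.4e-3, 2e-3, -30e-3, -70.6e-3
    a, b, tau_w = 4e-9, 80.5e-12, 144e-3
    V, w, spikes = EL, 0.0, []
    for step in range(int(t_max / dt)):
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:          # spike: reset voltage, increment adaptation
            V = Vr
            w += b
            spikes.append(step * dt)
    return spikes
```

Because b > 0, each spike increments the adaptation current w, so successive inter-spike intervals lengthen; this spike-frequency adaptation of the RS population is one of the features the semi-analytic transfer function has to capture.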

  9. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an ANN results in overfitting or underfitting problems, and this paper aims to avoid both. The number of hidden neurons is selected here using 102 criteria; these evolved criteria are verified against various computed error values. The proposed criteria for fixing hidden neurons are validated using the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature.
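The ensemble step itself is just an average of the member forecasts. A toy sketch with synthetic data and stand-in forecasters (not the paper's MLP/Madaline/BPN/PNN members) shows how averaging cancels partially independent errors:

```python
import math

# Synthetic "true" wind-speed series (m/s), purely illustrative.
true_speed = [8.0 + 2.0 * math.sin(0.1 * i) for i in range(200)]

# Three stand-in forecasters whose errors are out of phase with one another,
# mimicking member models that err in partially independent ways.
model_preds = [
    [v + 0.8 * math.sin(i)                 for i, v in enumerate(true_speed)],
    [v + 0.8 * math.sin(i + 2 * math.pi/3) for i, v in enumerate(true_speed)],
    [v + 0.8 * math.sin(i + 4 * math.pi/3) for i, v in enumerate(true_speed)],
]

def mae(pred, truth):
    """Mean absolute error of a forecast against the truth."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)

# The ensemble forecast is the plain average of the member forecasts.
ensemble = [sum(ps) / len(ps) for ps in zip(*model_preds)]
errors = [mae(p, true_speed) for p in model_preds]
```

Here the three error patterns sum to zero by construction, so the ensemble MAE collapses to (numerically) zero while every individual model errs by about 0.5 m/s; real member models cancel only partially, but the mechanism is the same.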

  10. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an ANN results in overfitting or underfitting problems, and this paper aims to avoid both. The number of hidden neurons is selected here using 102 criteria; these evolved criteria are verified against various computed error values. The proposed criteria for fixing hidden neurons are validated using the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature. PMID:27034973

  11. Associative memory in phasing neuron networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nair, Niketh S; Bochove, Erik J.; Braiman, Yehuda

    2014-01-01

We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose neural networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule where spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.
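A minimal version of such a Kuramoto-based memory stores a binary pattern in Hebbian couplings and retrieves it as a phase-locked state. Everything below (network size, step size, the overlap measure, the deterministic "noise") is an illustrative sketch, not the authors' implementation:

```python
import cmath, math

N = 24
pattern = [1 if (i * 7) % 3 else -1 for i in range(N)]   # stored binary pattern
# Hebbian couplings: K_ij = xi_i * xi_j / N
K = [[pattern[i] * pattern[j] / N for j in range(N)] for i in range(N)]

def overlap(theta):
    """Retrieval quality |m|; approaches 1 when phases split into the
    0-vs-pi groups that encode the stored pattern."""
    return abs(sum(pattern[i] * cmath.exp(1j * theta[i]) for i in range(N)) / N)

# Start near the pattern but corrupted by deterministic phase perturbations.
theta = [(0.0 if pattern[i] > 0 else math.pi) + 1.2 * math.sin(3.0 * i)
         for i in range(N)]
m0 = overlap(theta)
for _ in range(400):                                     # Euler steps, dt = 0.1
    dtheta = [sum(K[i][j] * math.sin(theta[j] - theta[i]) for j in range(N))
              for i in range(N)]
    theta = [t + 0.1 * d for t, d in zip(theta, dtheta)]
m1 = overlap(theta)
```

With a single stored pattern the dynamics reduce, in shifted coordinates, to ordinary Kuramoto synchronization, so the corrupted initial state relaxes back to the memory and the overlap climbs toward 1.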

  12. Estimating the Information Extracted by a Single Spiking Neuron from a Continuous Input Time Series.

    PubMed

    Zeldenrust, Fleur; de Knecht, Sicco; Wadman, Wytse J; Denève, Sophie; Gutkin, Boris

    2017-01-01

Understanding the relation between (sensory) stimuli and the activity of neurons (i.e., "the neural code") lies at the heart of understanding the computational properties of the brain. However, quantifying the information between a stimulus and a spike train has proven to be challenging. We propose a new (in vitro) method to measure how much information a single neuron transfers from the input it receives to its output spike train. The input is generated by an artificial neural network that responds to a randomly appearing and disappearing "sensory stimulus": the hidden state. The sum of this network activity is injected as current input into the neuron under investigation. The mutual information between the hidden state on the one hand and spike trains of the artificial network or the recorded spike train on the other hand can easily be estimated due to the binary shape of the hidden state. The characteristics of the input current, such as the time constant resulting from the (dis)appearance rate of the hidden state or the amplitude of the input current (the firing frequency of the neurons in the artificial network), can be varied independently. As an example, we apply this method to pyramidal neurons in the CA1 of mouse hippocampi and compare the recorded spike trains to the optimal response of the "Bayesian neuron" (BN). We conclude that, as in the BN, information transfer in hippocampal pyramidal cells is non-linear and amplifying: the information loss between the artificial input and the output spike train is high if the input to the neuron (the firing of the artificial network) is not very informative about the hidden state. If the input to the neuron does contain a lot of information about the hidden state, the information loss is low. Moreover, neurons increase their firing rates when the (dis)appearance rate is high, so that the (relative) amount of transferred information stays constant.
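Because the hidden state is binary, the plug-in mutual-information estimate reduces to a 2x2 joint histogram. A sketch with toy data (the helper name and the Bernoulli-noise stand-in for a recorded spike train are ours, not the paper's pipeline):

```python
import math, random

def mutual_information_bits(s, x):
    """Plug-in mutual information between two binary sequences, in bits."""
    n = len(s)
    joint = {}
    for a, b in zip(s, x):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    ps = {v: sum(c for (a, _), c in joint.items() if a == v) / n for v in (0, 1)}
    px = {v: sum(c for (_, b), c in joint.items() if b == v) / n for v in (0, 1)}
    mi = 0.0
    for (a, b), c in joint.items():
        p = c / n
        mi += p * math.log2(p / (ps[a] * px[b]))
    return mi

rng = random.Random(0)
hidden = [i // 50 % 2 for i in range(10_000)]                  # switching hidden state
noisy = [h if rng.random() > 0.1 else 1 - h for h in hidden]   # 10% of bits flipped
```

A perfectly informative readout recovers the full 1 bit of a balanced binary state, while a 10%-noisy readout retains roughly 1 - H2(0.1) = 0.53 bits, which is the kind of loss the study quantifies between input and output spike trains.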

  13. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons.

    PubMed

    Bernardi, Davide; Lindner, Benjamin

    2017-06-30

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  14. Cluster synchronization in networks of neurons with chemical synapses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juang, Jonq, E-mail: jjuang@math.nctu.edu.tw; Liang, Yu-Hao, E-mail: moonsea.am96g@g2.nctu.edu.tw

    2014-03-15

In this work, we study the cluster synchronization of chemically coupled and generally formulated networks which are allowed to be nonidentical. A sufficient condition for the existence of stably synchronous clusters is derived. Specifically, we only need to check the stability of the origins of m decoupled linear systems, where m is the number of subpopulations. Examples of nonidentical networks are provided, such as Hindmarsh-Rose (HR) neurons with various choices of parameters in different subpopulations, or HR neurons in one subpopulation and FitzHugh-Nagumo neurons in the other. An explicit threshold for the coupling strength that guarantees stable cluster synchronization can be obtained.

  15. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons

    NASA Astrophysics Data System (ADS)

    Bernardi, Davide; Lindner, Benjamin

    2017-06-01

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  16. Cascaded VLSI Chips Help Neural Network To Learn

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher; Thakoor, Anilkumar P.

    1993-01-01

Cascading provides 12-bit resolution needed for learning. Using conventional silicon chip fabrication technology of VLSI, fully connected architecture consisting of 32 wide-range, variable gain, sigmoidal neurons along one diagonal and 7-bit resolution, electrically programmable, synaptic 32 x 31 weight matrix implemented on neuron-synapse chip. To increase weight resolution nominally from 7 to 13 bits, synapses on chip individually cascaded with respective synapses on another 32 x 32 matrix chip with 7-bit resolution synapses only (without neurons). Cascade correlation algorithm varies number of layers effectively connected into network; adds hidden layers one at a time during learning process in such way as to optimize overall number of neurons and complexity and configuration of network.
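The resolution-extension idea can be sketched arithmetically: a coarse 7-bit weight scaled by 2^6, plus a fine word supplied by the cascaded chip, covers roughly 13 bits of combined range. This is an illustrative encoding of the principle, not a model of the chips' actual analog behavior:

```python
def split_weight(target):
    """Decompose a ~13-bit signed weight into a coarse 7-bit word (scaled
    by 64) and a fine word in [0, 63] supplied by the cascaded synapse."""
    coarse = target >> 6              # floor division by 64 (works for negatives)
    fine = target - (coarse << 6)
    assert -64 <= coarse <= 63 and 0 <= fine <= 63
    return coarse, fine

def combine(coarse, fine):
    """Effective weight seen by the neuron: coarse synapse scaled up, plus fine."""
    return (coarse << 6) + fine
```

Any target in [-4096, 4095] decomposes exactly, which is why pairing two 7-bit synapses raises the effective precision enough for gradient-based learning to converge.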

  17. Interfacing 3D Engineered Neuronal Cultures to Micro-Electrode Arrays: An Innovative In Vitro Experimental Model.

    PubMed

    Tedesco, Mariateresa; Frega, Monica; Martinoia, Sergio; Pesce, Mattia; Massobrio, Paolo

    2015-10-18

Currently, large-scale networks derived from dissociated neurons growing and developing in vitro on extracellular micro-transducer devices are the gold-standard experimental model to study basic neurophysiological mechanisms involved in the formation and maintenance of neuronal cell assemblies. However, in vitro studies have been limited to the recording of the electrophysiological activity generated by bi-dimensional (2D) neural networks. Nonetheless, given the intricate relationship between structure and dynamics, a significant improvement is necessary to investigate the formation and the developing dynamics of three-dimensional (3D) networks. In this work, a novel experimental platform in which 3D hippocampal or cortical networks are coupled to planar Micro-Electrode Arrays (MEAs) is presented. 3D networks are realized by seeding neurons in a scaffold constituted of glass microbeads (30-40 µm in diameter) on which neurons are able to grow and form complex interconnected 3D assemblies. In this way, it is possible to design engineered 3D networks made up of 5-8 layers with an expected final cell density. The increasing complexity in the morphological organization of the 3D assembly induces an enhancement of the electrophysiological patterns displayed by this type of network. Compared with standard 2D networks, where highly stereotyped bursting activity emerges, the 3D structure alters the bursting activity in terms of duration and frequency, and allows observation of more random spiking activity. In this sense, the developed 3D model more closely resembles in vivo neural networks.

  18. Interfacing 3D Engineered Neuronal Cultures to Micro-Electrode Arrays: An Innovative In Vitro Experimental Model

    PubMed Central

    Tedesco, Mariateresa; Frega, Monica; Martinoia, Sergio; Pesce, Mattia; Massobrio, Paolo

    2015-01-01

Currently, large-scale networks derived from dissociated neurons growing and developing in vitro on extracellular micro-transducer devices are the gold-standard experimental model to study basic neurophysiological mechanisms involved in the formation and maintenance of neuronal cell assemblies. However, in vitro studies have been limited to the recording of the electrophysiological activity generated by bi-dimensional (2D) neural networks. Nonetheless, given the intricate relationship between structure and dynamics, a significant improvement is necessary to investigate the formation and the developing dynamics of three-dimensional (3D) networks. In this work, a novel experimental platform in which 3D hippocampal or cortical networks are coupled to planar Micro-Electrode Arrays (MEAs) is presented. 3D networks are realized by seeding neurons in a scaffold constituted of glass microbeads (30-40 µm in diameter) on which neurons are able to grow and form complex interconnected 3D assemblies. In this way, it is possible to design engineered 3D networks made up of 5-8 layers with an expected final cell density. The increasing complexity in the morphological organization of the 3D assembly induces an enhancement of the electrophysiological patterns displayed by this type of network. Compared with standard 2D networks, where highly stereotyped bursting activity emerges, the 3D structure alters the bursting activity in terms of duration and frequency, and allows observation of more random spiking activity. In this sense, the developed 3D model more closely resembles in vivo neural networks. PMID:26554533

  19. YADCLAN: yet another digitally-controlled linear artificial neuron.

    PubMed

    Frenger, Paul

    2003-01-01

This paper updates the author's 1999 RMBS presentation on digitally controlled linear artificial neuron design. Each neuron is based on a standard operational amplifier having excitatory and inhibitory inputs, variable gain, an amplified linear analog output, and an adjustable threshold comparator for digital output. This design employs a 1-wire serial network of digitally controlled potentiometers and resistors whose resistance values are set and read back under microprocessor supervision. This system embodies several unique and useful features, including enhanced neuronal stability, dynamic reconfigurability, and network extensibility. This artificial neuron is being employed for feature extraction and pattern recognition in an advanced robotic application.

  20. Effect of Electrothermal Treatment on Nerve Tissue Within the Triangular Fibrocartilage Complex, Scapholunate, and Lunotriquetral Interosseous Ligaments.

    PubMed

    Pirolo, Joseph M; Le, Wei; Yao, Jeffrey

    2016-05-01

To evaluate the effect of thermal treatment on neural tissue in the triangular fibrocartilage complex (TFCC), scapholunate interosseous ligament (SLIL), and lunotriquetral interosseous ligament (LTIL). The intact TFCC, SLIL, and LTIL were harvested from cadaveric specimens and treated with a radiofrequency probe as would be performed intraoperatively. Slides were stained using a triple-stain technique for neurotrophin receptor p75, pan-neuronal marker protein gene product 9.5 (PGP 9.5), and 4',6-diamidino-2-phenylindole for neural identification. Five TFCC, 5 SLIL, and 4 LTIL specimens were imaged with fluorescence microscopy. Imaging software was used to measure fluorescence signals and compare thermally treated areas with adjacent untreated areas. A paired t test was used to compare treated versus untreated areas. P < .05 was considered significant. For the TFCC, a mean of 94.9% ± 2.7% of PGP 9.5-positive neural tissue was ablated within a mean area of 11.7 ± 2.5 mm² (P = .02). For the SLIL treated from the radiocarpal surface, 97.4% ± 1.0% was ablated to a mean depth of 2.4 ± 0.3 mm from the surface and a mean horizontal spread of 3.4 ± 0.5 mm (P = .01). For the LTIL, 96.0% ± 1.5% was ablated to a mean depth of 1.7 ± 0.7 mm and a mean horizontal spread of 2.6 ± 1.0 mm (P = .02). Differences in the presence of neural tissue between treated areas and adjacent untreated areas were statistically significant for all specimens. Our study confirms elimination of neuronal markers after thermal treatment of the TFCC, SLIL, and LTIL in cadaveric specimens. This effect penetrates below the surface to innervated collagen tissue that is left structurally intact after treatment. Electrothermal treatment as commonly performed to treat symptomatic SLIL, LTIL, and TFCC tears eliminates neuronal tissue in treated areas and may function to relieve pain through a denervation effect. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  1. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulted mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
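One common way to quantify the element quality being compared is the radius ratio q = 2·r_in/r_circ, which equals 1 for an equilateral triangle and approaches 0 for slivers. The metric itself is standard; its use here as a stand-in for the paper's aspect-ratio comparison is our illustration:

```python
import math

def radius_ratio(p0, p1, p2):
    """Triangle quality q = 2*inradius/circumradius: 1 = equilateral, ~0 = sliver."""
    a = math.dist(p1, p2)
    b = math.dist(p0, p2)
    c = math.dist(p0, p1)
    s = (a + b + c) / 2                                    # semi-perimeter
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
    if area == 0.0:
        return 0.0                                         # degenerate element
    r_in = area / s
    r_circ = a * b * c / (4 * area)
    return 2 * r_in / r_circ
```

Ranking elements by such a quality measure is how one verifies claims like "the present method generates a better distribution of element aspect ratios" over a whole mesh.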

  2. Introduction of functionality, selection of topology, and enhancement of gas adsorption in multivariate metal-organic framework-177.

    PubMed

    Zhang, Yue-Biao; Furukawa, Hiroyasu; Ko, Nakeun; Nie, Weixuan; Park, Hye Jeong; Okajima, Satoshi; Cordova, Kyle E; Deng, Hexiang; Kim, Jaheon; Yaghi, Omar M

    2015-02-25

    Metal-organic framework-177 (MOF-177) is one of the most porous materials whose structure is composed of octahedral Zn4O(-COO)6 and triangular 1,3,5-benzenetribenzoate (BTB) units to make a three-dimensional extended network based on the qom topology. This topology violates a long-standing thesis where highly symmetric building units are expected to yield highly symmetric networks. In the case of octahedron and triangle combinations, MOFs based on pyrite (pyr) and rutile (rtl) nets were expected instead of qom. In this study, we have made 24 MOF-177 structures with different functional groups on the triangular BTB linker, having one or more functionalities. We find that the position of the functional groups on the BTB unit allows the selection for a specific net (qom, pyr, and rtl), and that mixing of functionalities (-H, -NH2, and -C4H4) is an important strategy for the incorporation of a specific functionality (-NO2) into MOF-177 where otherwise incorporation of such functionality would be difficult. Such mixing of functionalities to make multivariate MOF-177 structures leads to enhancement of hydrogen uptake by 25%.

  3. Introduction of Functionality, Selection of Topology, and Enhancement of Gas Adsorption in Multivariate Metal–Organic Framework-177

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yue-Biao; Furukawa, Hiroyasu; Ko, Nakeun

    2015-02-25

Metal–organic framework-177 (MOF-177) is one of the most porous materials whose structure is composed of octahedral Zn4O(-COO)6 and triangular 1,3,5-benzenetribenzoate (BTB) units to make a three-dimensional extended network based on the qom topology. This topology violates a long-standing thesis where highly symmetric building units are expected to yield highly symmetric networks. In the case of octahedron and triangle combinations, MOFs based on pyrite (pyr) and rutile (rtl) nets were expected instead of qom. In this study, we have made 24 MOF-177 structures with different functional groups on the triangular BTB linker, having one or more functionalities. We find that the position of the functional groups on the BTB unit allows the selection for a specific net (qom, pyr, and rtl), and that mixing of functionalities (-H, -NH2, and -C4H4) is an important strategy for the incorporation of a specific functionality (-NO2) into MOF-177 where otherwise incorporation of such functionality would be difficult. Such mixing of functionalities to make multivariate MOF-177 structures leads to enhancement of hydrogen uptake by 25%.

  4. High-Degree Neurons Feed Cortical Computations

    PubMed Central

    Timme, Nicholas M.; Ito, Shinya; Shimono, Masanori; Yeh, Fang-Chin; Litke, Alan M.; Beggs, John M.

    2016-01-01

Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This combination of preparation and recording method produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network. PMID:27159884
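Transfer entropy on binarized spike trains reduces to counting triplets (y_{t+1}, y_t, x_t). A plug-in sketch with toy data (the helper name and the lag-1 history are ours; the study's estimator additionally handled multiple delays and bias):

```python
import math, random

def transfer_entropy_bits(x, y):
    """Plug-in transfer entropy TE(X -> Y) with history length 1, in bits:
    sum over triplets of p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
    triples, pairs_yx, pairs_yy, singles = {}, {}, {}, {}
    n = len(y) - 1
    for t in range(n):
        k3 = (y[t + 1], y[t], x[t])
        triples[k3] = triples.get(k3, 0) + 1
        pairs_yx[(y[t], x[t])] = pairs_yx.get((y[t], x[t]), 0) + 1
        pairs_yy[(y[t + 1], y[t])] = pairs_yy.get((y[t + 1], y[t]), 0) + 1
        singles[y[t]] = singles.get(y[t], 0) + 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        p_cond_hist = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * math.log2(p_cond_full / p_cond_hist)
    return te

rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(5000)]   # "driver" spike train
y = [0] + x[:-1]                               # "target" copies x with a one-step lag
```

Because y is entirely driven by x, TE(X→Y) recovers nearly the full 1 bit per step, while TE(Y→X) stays near zero; this asymmetry is what makes transfer entropy usable as a directed functional-connectivity measure.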

  5. Nonlinear multiplicative dendritic integration in neuron and network models

    PubMed Central

    Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si

    2013-01-01

Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear and can depend heavily on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still only describe the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. We here suggest a new biophysically motivated derivation of a single compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or “shunts” the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
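The multiplicative rule can be sketched as a single-compartment response in which an on-path inhibitory input scales down, rather than subtracts from, the excitatory drive. The coefficient below is illustrative; the paper derives the experimentally motivated form:

```python
def linear_sum(excitation, inhibition):
    """Conventional point-neuron integration: simple subtraction."""
    return excitation - inhibition

def shunting_sum(excitation, inhibition, k=0.4):
    """Multiplicative shunting: inhibition divisively gates excitation,
    so its effect grows with the excitatory drive itself."""
    return excitation * (1.0 - k * inhibition)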

  6. Electronic neural network for dynamic resource allocation

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Eberhardt, S. P.; Daud, T.

    1991-01-01

A VLSI implementable neural network architecture for dynamic assignment is presented. The resource allocation problems involve assigning members of one set (e.g. resources) to those of another (e.g. consumers) such that the global 'cost' of the associations is minimized. The network consists of a matrix of sigmoidal processing elements (neurons), where the rows of the matrix represent resources and columns represent consumers. Unlike previous neural implementations, however, association costs are applied directly to the neurons, reducing connectivity of the network to VLSI-compatible O(number of neurons). Each row (and column) has an additional neuron associated with it to independently oversee activations of all the neurons in each row (and each column), providing a programmable 'k-winner-take-all' function. This function simultaneously enforces blocking (excitatory/inhibitory) constraints during convergence to control the number of active elements in each row and column within desired boundary conditions. Simulations show that the network, when implemented in fully parallel VLSI hardware, offers optimal (or near-optimal) solutions within only a fraction of a millisecond, for problems up to 128 resources and 128 consumers, orders of magnitude faster than conventional computing or heuristic search methods.
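The row/column oversight neurons implement a k-winner-take-all constraint; in software the same selection can be sketched directly (a functional stand-in for the analog dynamics, not a model of the chip):

```python
def k_winner_take_all(activations, k):
    """Return a 0/1 mask keeping only the k largest activations
    (ties broken by lower index, mirroring a fixed scan order)."""
    order = sorted(range(len(activations)),
                   key=lambda i: (-activations[i], i))
    winners = set(order[:k])
    return [1 if i in winners else 0 for i in range(len(activations))]
```

On the chip one such constraint runs per row and per column simultaneously; applying this mask alternately along rows and columns of the neuron matrix and iterating is a simple (if not convergence-guaranteed) software analogue of that parallel enforcement.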

  7. Heterogeneity induces rhythms of weakly coupled circadian neurons

    NASA Astrophysics Data System (ADS)

    Gu, Changgui; Liang, Xiaoming; Yang, Huijie; Rohling, Jos H. T.

    2016-02-01

    The main clock located in the suprachiasmatic nucleus (SCN) regulates circadian rhythms in mammals. The SCN is composed of approximately twenty thousand heterogeneous self-oscillating neurons that have intrinsic periods varying from 22 h to 28 h. They are coupled through neurotransmitters and neuropeptides to form a network and output a uniform periodic rhythm. Previous studies found that the heterogeneity of the neurons leads to attenuation of the circadian rhythm with strong cellular coupling. In the present study, we investigate the heterogeneity of the neurons and of the network in the condition of constant darkness. Interestingly, we found that the heterogeneity of weakly coupled neurons enables them to oscillate and strengthen the circadian rhythm. In addition, we found that the period of the SCN network increases with the degree of heterogeneity. As the network heterogeneity does not change the dynamics of the rhythm, our study shows that the heterogeneity of the neurons is vitally important for rhythm generation in weakly coupled systems, such as the SCN, and it provides a new method to strengthen the circadian rhythm, as well as an alternative explanation for differences in free running periods between species in the absence of the daily cycle.
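The weak- vs strong-coupling regimes can be illustrated with Kuramoto phase oscillators standing in for the paper's self-oscillating neuron models (periods drawn from the 22–28 h range; all parameters here are illustrative):

```python
import numpy as np

def simulate_scn(periods_h, K, T=2000.0, dt=0.05, seed=0):
    """Mean-field Kuramoto stand-in for coupled SCN neurons. periods_h are
    intrinsic periods in hours, K is the coupling strength. Returns the final
    order parameter r (~1 synchronised, ~0 incoherent)."""
    rng = np.random.default_rng(seed)
    w = 2 * np.pi / np.asarray(periods_h, float)   # intrinsic frequencies
    theta = rng.uniform(0, 2 * np.pi, w.size)
    for _ in range(int(T / dt)):
        mean_field = np.exp(1j * theta).mean()
        r, psi = np.abs(mean_field), np.angle(mean_field)
        theta += dt * (w + K * r * np.sin(psi - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

periods = np.linspace(22.0, 28.0, 50)              # heterogeneous periods
r_weak = simulate_scn(periods, K=0.001)            # weak coupling: incoherent
r_strong = simulate_scn(periods, K=1.0)            # strong coupling: locked
```

The frequency spread from the 22–28 h heterogeneity is small compared with K = 1, so the strongly coupled ensemble phase-locks, while at K = 0.001 each oscillator keeps drifting at its own intrinsic period.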

  8. Ensembles of Spiking Neurons with Noise Support Optimal Probabilistic Inference in a Dynamically Changing Environment

    PubMed Central

    Legenstein, Robert; Maass, Wolfgang

    2014-01-01

    It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information. PMID:25340749
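The ensemble idea, many stochastic neurons jointly encoding one rapidly varying probability, can be illustrated with a toy rate code (Bernoulli spiking read out over a short window; all parameters are illustrative and this is not the paper's sampling model):

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt, rate_max = 200, 0.001, 50.0            # ensemble size, 1 ms bins, Hz
t = np.arange(0.0, 2.0, dt)
p_true = 0.5 + 0.4 * np.sin(2 * np.pi * t)    # rapidly varying probability

# each neuron spikes independently with rate proportional to p(t)
spikes = rng.random((t.size, N)) < p_true[:, None] * rate_max * dt

# decode: the ensemble firing rate in a 20 ms window tracks p(t)
window = np.ones(20) / 20
p_hat = np.convolve(spikes.mean(axis=1), window, mode="same") / (rate_max * dt)
err = float(np.sqrt(np.mean((p_hat - p_true) ** 2)))   # RMS tracking error
```

Because N neurons contribute per bin, the read-out window can stay short, so quickly changing probabilities are tracked — the limitation a single sampling neuron per variable would face.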

  9. Data-driven inference of network connectivity for modeling the dynamics of neural codes in the insect antennal lobe

    PubMed Central

    Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan

    2014-01-01

    The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recordings from the AL, which discriminates between distinct odorant trajectories; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answer a key biological question in identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
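Step (1), projecting population recordings into a low-dimensional "odor space" that separates odorants, is essentially a principal-component projection. A generic sketch on synthetic trial data (the shapes and noise levels are made up, not the AL recordings):

```python
import numpy as np

def odor_space(responses, k=2):
    """Project trial-by-neuron firing rates onto their top k principal
    components -- a generic stand-in for the paper's odor-space projection."""
    X = responses - responses.mean(axis=0)
    # principal directions from the SVD of the centred data matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

rng = np.random.default_rng(1)
base_a = rng.normal(0, 1, 30)        # mean response pattern, odor A (toy)
base_b = base_a + 3.0                # shifted pattern, odor B (toy)
trials = np.vstack([base_a + 0.1 * rng.normal(size=(20, 30)),
                    base_b + 0.1 * rng.normal(size=(20, 30))])
proj = odor_space(trials)
# the two odors separate along the first component
gap = float(proj[:20, 0].mean() - proj[20:, 0].mean())
```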

  10. Statistical identification of stimulus-activated network nodes in multi-neuron voltage-sensitive dye optical recordings.

    PubMed

    Fathiazar, Elham; Anemuller, Jorn; Kretzberg, Jutta

    2016-08-01

    Voltage-Sensitive Dye (VSD) imaging is an optical imaging method that allows measuring the graded voltage changes of multiple neurons simultaneously. In neuroscience, this method is used to reveal networks of neurons involved in certain tasks. However, the recorded relative dye fluorescence changes are usually low and signals are superimposed by noise and artifacts. Therefore, establishing a reliable method to identify which cells are activated by specific stimulus conditions is the first step in identifying functional networks. In this paper, we present a statistical method to identify stimulus-activated network nodes as those cells whose activities during sensory network stimulation differ significantly from the unstimulated control condition. This method is demonstrated based on voltage-sensitive dye recordings from up to 100 neurons in a ganglion of the medicinal leech responding to tactile skin stimulation. Without relying on any prior physiological knowledge, the network nodes identified by our statistical analysis were found to match well with published cell types involved in tactile stimulus processing and to be consistent across stimulus conditions and preparations.
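The core statistical step, deciding per cell whether stimulated activity differs from the unstimulated control, can be sketched as a two-sample permutation test on the mean response (a generic sketch of the idea, not the paper's exact pipeline):

```python
import numpy as np

def permutation_pvalue(stim, ctrl, n_perm=2000, seed=0):
    """p-value for the null hypothesis that a cell's stimulated and control
    responses come from one distribution, via label permutation on the
    absolute difference of means."""
    rng = np.random.default_rng(seed)
    stim, ctrl = np.asarray(stim, float), np.asarray(ctrl, float)
    observed = abs(stim.mean() - ctrl.mean())
    pooled = np.concatenate([stim, ctrl])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)   # reassign trial labels at random
        diff = abs(pooled[:stim.size].mean() - pooled[stim.size:].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)   # add-one correction

rng = np.random.default_rng(2)
# toy fluorescence responses: one clearly activated cell, one silent cell
activated = permutation_pvalue(rng.normal(1.0, 0.3, 25), rng.normal(0.0, 0.3, 25))
silent = permutation_pvalue(rng.normal(0.0, 0.3, 25), rng.normal(0.0, 0.3, 25))
```

Cells with p below a chosen threshold (after multiple-comparison correction across the ~100 recorded neurons) would be labelled stimulus-activated network nodes.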

  11. Development of pacemaker properties and rhythmogenic mechanisms in the mouse embryonic respiratory network

    PubMed Central

    Chevalier, Marc; Toporikova, Natalia; Simmers, John; Thoby-Brisson, Muriel

    2016-01-01

    Breathing is a vital rhythmic behavior generated by hindbrain neuronal circuitry, including the preBötzinger complex network (preBötC) that controls inspiration. The emergence of preBötC network activity during prenatal development has been described, but little is known regarding inspiratory neurons expressing pacemaker properties at embryonic stages. Here, we combined calcium imaging and electrophysiological recordings in mouse embryo brainstem slices together with computational modeling to reveal the existence of heterogeneous pacemaker oscillatory properties relying on distinct combinations of burst-generating INaP and ICAN conductances. The respective proportion of the different inspiratory pacemaker subtypes changes during prenatal development. Concomitantly, network rhythmogenesis switches from a purely INaP/ICAN-dependent mechanism at E16.5 to a combined pacemaker/network-driven process at E18.5. Our results provide the first description of pacemaker bursting properties in embryonic preBötC neurons and indicate that network rhythmogenesis undergoes important changes during prenatal development through alterations in both circuit properties and the biophysical characteristics of pacemaker neurons. DOI: http://dx.doi.org/10.7554/eLife.16125.001 PMID:27434668

  12. Irregular behavior in an excitatory-inhibitory neuronal network

    NASA Astrophysics Data System (ADS)

    Park, Choongseok; Terman, David

    2010-06-01

    Excitatory-inhibitory networks arise in many regions throughout the central nervous system and display complex spatiotemporal firing patterns. These neuronal activity patterns (of individual neurons and/or the whole network) are closely related to the functional status of the system and differ between normal and pathological states. For example, neurons within the basal ganglia, a group of subcortical nuclei that are responsible for the generation of movement, display a variety of dynamic behaviors such as correlated oscillatory activity and irregular, uncorrelated spiking. Neither the origins of these firing patterns nor the mechanisms that underlie the patterns are well understood. We consider a biophysical model of an excitatory-inhibitory network in the basal ganglia and explore how specific biophysical properties of the network contribute to the generation of irregular spiking. We use geometric dynamical systems and singular perturbation methods to systematically reduce the model to a simpler set of equations, which is suitable for analysis. The results specify the dependence on the strengths of synaptic connections and the intrinsic firing properties of the cells in the irregular regime when applied to the subthalamopallidal network of the basal ganglia.

  13. Echo state networks with filter neurons and a delay&sum readout.

    PubMed

    Holzmann, Georg; Hauser, Helmut

    2010-03-01

    Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation. Here we suggest two enhancements of this network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series. Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks. It is shown, on commonly used benchmark tasks and real-world examples, that this new structure is able to significantly outperform standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling. Copyright 2009 Elsevier Ltd. All rights reserved.
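The baseline these enhancements build on is compact: a fixed random reservoir plus a least-squares linear readout. A minimal plain-ESN sketch (the paper's IIR filter neurons and delay&sum readout are extensions not shown here; reservoir size and scalings are illustrative):

```python
import numpy as np

def esn_fit(u, y, n_res=100, rho=0.9, washout=100, seed=0):
    """Minimal echo state network: drive a fixed random tanh reservoir with
    input u, then fit a linear readout to target y by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 1, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))      # set spectral radius < 1
    w_in = rng.uniform(-0.5, 0.5, n_res)
    x, states = np.zeros(n_res), []
    for ut in u:
        x = np.tanh(W @ x + w_in * ut)             # reservoir update
        states.append(x.copy())
    S = np.array(states)[washout:]                 # discard transient
    w_out, *_ = np.linalg.lstsq(S, y[washout:], rcond=None)
    return S @ w_out, y[washout:]

t = np.arange(1200)
u = np.sin(0.2 * t)
y = np.sin(0.2 * (t - 5))                          # target: delayed input
pred, target = esn_fit(u, y)
nrmse = float(np.sqrt(np.mean((pred - target) ** 2)) / np.std(target))
```

Recalling a delayed signal like this draws on the reservoir's fading memory; the delay&sum readout proposed in the paper attacks exactly this memory bottleneck by making output delays trainable.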

  14. Activity-dependent switch of GABAergic inhibition into glutamatergic excitation in astrocyte-neuron networks.

    PubMed

    Perea, Gertrudis; Gómez, Ricardo; Mederos, Sara; Covelo, Ana; Ballesteros, Jesús J; Schlosser, Laura; Hernández-Vivanco, Alicia; Martín-Fernández, Mario; Quintana, Ruth; Rayan, Abdelrahman; Díez, Adolfo; Fuenzalida, Marco; Agarwal, Amit; Bergles, Dwight E; Bettler, Bernhard; Manahan-Vaughan, Denise; Martín, Eduardo D; Kirchhoff, Frank; Araque, Alfonso

    2016-12-24

    Interneurons are critical for proper neural network function and can activate Ca2+ signaling in astrocytes. However, the impact of interneuron-astrocyte signaling on neuronal network operation remains unknown. Using the simplest hippocampal astrocyte-neuron network, i.e., GABAergic interneuron, pyramidal neuron, single CA3-CA1 glutamatergic synapse, and astrocytes, we found that interneuron-astrocyte signaling dynamically affected excitatory neurotransmission in an activity- and time-dependent manner, and determined the sign (inhibition vs potentiation) of the GABA-mediated effects. While synaptic inhibition was mediated by GABAA receptors, potentiation involved astrocyte GABAB receptors, astrocytic glutamate release, and presynaptic metabotropic glutamate receptors. Using conditional astrocyte-specific GABAB receptor (Gabbr1) knockout mice, we confirmed the glial source of the interneuron-induced potentiation, and demonstrated the involvement of astrocytes in hippocampal theta and gamma oscillations in vivo. Therefore, astrocytes decode interneuron activity and transform inhibitory into excitatory signals, contributing to the emergence of novel network properties resulting from the interneuron-astrocyte interplay.

  15. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Mechanism for propagation of rate signals through a 10-layer feedforward neuronal network

    NASA Astrophysics Data System (ADS)

    Li, Jie; Yu, Wan-Qing; Xu, Ding; Liu, Feng; Wang, Wei

    2009-12-01

    Using numerical simulations, we explore the mechanism for propagation of rate signals through a 10-layer feedforward network composed of Hodgkin-Huxley (HH) neurons with sparse connectivity. When white noise is afferent to the input layer, neuronal firing becomes progressively more synchronous in successive layers and synchrony is well developed in deeper layers owing to the feedforward connections between neighboring layers. The synchrony ensures the successful propagation of rate signals through the network when the synaptic conductance is weak. As the synaptic time constant τsyn varies, coherence resonance is observed in the network activity due to the intrinsic property of HH neurons. This makes the output firing rate single-peaked as a function of τsyn, suggesting that the signal propagation can be modulated by the synaptic time constant. These results are consistent with experimental results and advance our understanding of how information is processed in feedforward networks.

  16. FPGA implementation of motifs-based neuronal network and synchronization analysis

    NASA Astrophysics Data System (ADS)

    Deng, Bin; Zhu, Zechen; Yang, Shuangming; Wei, Xile; Wang, Jiang; Yu, Haitao

    2016-06-01

    Motifs in complex networks play a crucial role in determining brain functions. In this paper, 13 kinds of motifs are implemented with Field Programmable Gate Array (FPGA) to investigate the relationships between network properties and motif properties. We use a discretization method and a pipelined architecture to construct the various motifs with the Hindmarsh-Rose (HR) neuron as the node model. We also build a small-world network based on these motifs and conduct synchronization analysis of the motifs as well as the constructed network. We find that the synchronization properties of a motif determine those of the motif-based small-world network, which demonstrates the effectiveness of our proposed hardware simulation platform. By imitating vital nuclei in the brain to generate normal discharges, our proposed FPGA-based artificial neuronal networks have the potential to replace injured nuclei and restore brain function in the treatment of Parkinson's disease and epilepsy.

  17. Neuronal network models of epileptogenesis

    PubMed Central

    Abdullahi, Aminu T.; Adamu, Lawan H.

    2017-01-01

    Epilepsy is a chronic neurological condition in which, following some trigger, a normal brain is transformed into one that produces recurrent unprovoked seizures. In the search for the mechanisms that best explain the epileptogenic process, there is a growing body of evidence suggesting that the epilepsies are network level disorders. In this review, we briefly describe the concept of neuronal networks and highlight two methods used to analyse such networks. The first method, graph theory, is used to describe general characteristics of a network to facilitate comparison between normal and abnormal networks. The second, dynamic causal modelling, is useful in the analysis of the pathways of seizure spread. We concluded that the end results of the epileptogenic process are best understood as abnormalities of neuronal circuitry and not simply as molecular or cellular abnormalities. The network approach promises to generate new understanding and more targeted treatment of epilepsy. PMID:28416779
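The graph-theoretic characterisation mentioned above typically compares metrics such as average shortest-path length between a regular lattice and a rewired "small-world" network. A pure-Python sketch on synthetic graphs (real studies would compute these metrics on networks derived from recordings, not toy rings):

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours per side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    return adj

def rewire(adj, p, seed=0):
    """Watts-Strogatz-style rewiring: redirect each edge with probability p."""
    rng = random.Random(seed)
    n = len(adj)
    for v in range(n):
        for u in sorted(adj[v]):          # snapshot; adj[v] changes below
            if u > v and rng.random() < p:
                w = rng.randrange(n)
                if w != v and w not in adj[v]:
                    adj[v].discard(u); adj[u].discard(v)
                    adj[v].add(w); adj[w].add(v)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length via breadth-first search from every node."""
    n, total = len(adj), 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(dist.values())
    return total / (n * (n - 1))

l_regular = avg_path_length(ring_lattice(120, 3))
l_small_world = avg_path_length(rewire(ring_lattice(120, 3), p=0.1))
```

A few rewired shortcuts sharply reduce path length while local structure is largely preserved; shifts in such metrics are what distinguish normal from epileptic networks in this framework.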

  18. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    PubMed

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on some concepts and principles in the quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing unit in a single-hidden-layer feed-forward neural network. The quantum neurons make a wide range of nonlinear functions serve as the activation functions in the hidden layer of the network, and the Grover searching algorithm outstands the optimal parameter setting iteratively and thus makes very efficient neural network learning possible. The quantum neuron and weights, along with a Grover searching algorithm based learning, result in a novel and efficient neural network characteristic of reduced network, high efficient training and prospect application in future. Some simulations are taken to investigate the performance of the proposed quantum network and the result show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Towards Reproducible Descriptions of Neuronal Network Models

    PubMed Central

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  20. Dimer covering and percolation frustration.

    PubMed

    Haji-Akbari, Amir; Haji-Akbari, Nasim; Ziff, Robert M

    2015-09-01

    Covering a graph or a lattice with nonoverlapping dimers is a problem that has received considerable interest in areas such as discrete mathematics, statistical physics, chemistry, and materials science. Yet, the problem of percolation on dimer-covered lattices has received little attention. In particular, percolation on lattices that are fully covered by nonoverlapping dimers has evidently not been considered. Here, we propose a procedure for generating random dimer coverings of a given lattice. We then compute the bond percolation threshold on random and ordered coverings of the square and the triangular lattices on the remaining bonds connecting the dimers. We obtain p_{c}=0.367713(2) and p_{c}=0.235340(1) for random coverings of the square and the triangular lattices, respectively. We observe that the percolation frustration induced as a result of dimer covering is larger in the low-coordination-number square lattice. There is also no relationship between the existence of long-range order in a covering of the square lattice and its percolation threshold. In particular, an ordered covering of the square lattice, denoted by shifted covering in this paper, has an unusually low percolation threshold and is topologically identical to the triangular lattice. This is in contrast to the other ordered dimer coverings considered in this paper, which have higher percolation thresholds than the random covering. In the case of the triangular lattice, the percolation thresholds of the ordered and random coverings are very close, suggesting the lack of sensitivity of the percolation threshold to microscopic details of the covering in highly coordinated networks.
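Threshold estimates of this kind rest on standard bond-percolation machinery: open each bond with probability p and test for a spanning cluster with union-find. A sketch on the plain square lattice (no dimer covering, where p_c = 1/2 is known exactly; lattice size and sample counts are illustrative):

```python
import random

class DSU:
    """Minimal union-find (disjoint set union) with path halving."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def spans(L, p, seed):
    """Does bond percolation at occupation probability p span an L x L
    square lattice top to bottom? Two virtual nodes tie together the top
    and bottom rows so spanning reduces to one connectivity query."""
    rng = random.Random(seed)
    d = DSU(L * L + 2)
    top, bot = L * L, L * L + 1
    for c in range(L):
        d.union(top, c)
        d.union(bot, (L - 1) * L + c)
    for r in range(L):
        for c in range(L):
            v = r * L + c
            if c + 1 < L and rng.random() < p:
                d.union(v, v + 1)       # open bond to the right
            if r + 1 < L and rng.random() < p:
                d.union(v, v + L)       # open bond downward
    return d.find(top) == d.find(bot)

low = sum(spans(40, 0.3, s) for s in range(50))    # well below p_c = 1/2
high = sum(spans(40, 0.7, s) for s in range(50))   # well above p_c
```

Sweeping p and locating where the spanning probability jumps gives the threshold; the paper applies the same logic to the bonds left between dimers, which is what shifts p_c away from 1/2.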

  1. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
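The core idea of voltage stepping, fixed steps in V with locally adapted steps in t, can be sketched for a quadratic integrate-and-fire neuron. This first-order sketch (dt = dv / f(V)) only illustrates the principle; the published scheme uses higher-order local approximations and handles synaptic events:

```python
def qif_voltage_step(I, dv=0.01, v_reset=-1.0, v_spike=10.0, t_end=20.0):
    """Voltage stepping for the QIF neuron dV/dt = V^2 + I (I > 0):
    advance V by a fixed increment dv, taking the locally exact-to-first-order
    time step dt = dv / f(V). Returns the spike times."""
    t, v, spikes = 0.0, v_reset, []
    while t < t_end:
        f = v * v + I                 # instantaneous slope dV/dt
        t += dv / f                   # time to climb dv at this slope
        v += dv
        if v >= v_spike:
            spikes.append(t)
            v = v_reset
    return spikes

spikes = qif_voltage_step(I=1.0)
# analytic inter-spike interval for I=1: atan(v_spike) - atan(v_reset) ~ 2.2565
```

Note how the scheme automatically takes tiny time steps where the membrane moves fast (near threshold) and large ones where it moves slowly, which is exactly what makes voltage stepping attractive for stiff nonlinear neuron models.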

  2. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, exercising the MPI_Allgather collective to exchange spikes after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates an improved communication methodology to decrease spike exchange time over distributed memory systems. This work improves on MPI_Allgather using Remote Memory Access (RMA), moving from two-sided to one-sided communication, while a recursive doubling mechanism achieves efficient communication between the processors in precise steps. The approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models.
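The recursive doubling pattern referred to here can be modelled without MPI: with P processes (P a power of two), XOR partner pairing completes an allgather in log2(P) exchange rounds, with each process doubling its held data per round. A pure-Python schedule sketch, not mpi4py/RMA code:

```python
def recursive_doubling_allgather(blocks):
    """Simulate recursive-doubling allgather: each of P processes starts
    with one block; in round r, rank pairs (rank, rank XOR 2^r) exchange
    everything they hold. Since pairs within a round are disjoint, the
    sequential loop below matches the simultaneous exchanges."""
    P = len(blocks)
    data = [{i: blocks[i]} for i in range(P)]   # per-process local stores
    step, rounds = 1, 0
    while step < P:
        for rank in range(P):
            partner = rank ^ step               # XOR partner pairing
            if partner > rank:                  # handle each pair once
                merged = {**data[rank], **data[partner]}
                data[rank] = data[partner] = dict(merged)
        step <<= 1
        rounds += 1
    return data, rounds

data, rounds = recursive_doubling_allgather(
    ["s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7"])
```

Eight processes finish in three rounds instead of seven pairwise steps, which is the logarithmic scaling the paper exploits (with one-sided RMA replacing the two-sided exchanges).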

  3. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

    The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, exercising the MPI_Allgather collective to exchange spikes after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates an improved communication methodology to decrease spike exchange time over distributed memory systems. This work improves on MPI_Allgather using Remote Memory Access (RMA), moving from two-sided to one-sided communication, while a recursive doubling mechanism achieves efficient communication between the processors in precise steps. The approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models. PMID:27413363

  4. Modeling of synchronization behavior of bursting neurons at nonlinearly coupled dynamical networks.

    PubMed

    Çakir, Yüksel

    2016-01-01

    Synchronization behaviors of bursting neurons coupled through electrical and dynamic chemical synapses are investigated. The Izhikevich model is used with random and small world networks of bursting neurons. Various currents, consisting of diffusive electrical and time-delayed dynamic chemical synapses, are used in the simulations to investigate the influences of synaptic currents and couplings on the synchronization behavior of bursting neurons. The effects of parameters such as time delay, inhibitory synaptic strength, and decay time on synchronization behavior are investigated. It is observed that in random networks with no delay, bursting synchrony is established with the electrical synapse alone, while single-spike synchrony is observed with hybrid coupling. In small world networks with no delay, periodic bursting behavior with multiple spikes is observed when only chemical or only electrical synapses exist. Single-spike and multiple-spike bursting are established with hybrid couplings. A decrease in the synchronization measure is observed with zero time delay as the decay time is increased in the random network. For synaptic delays above the active phase period, the synchronization measure increases with synaptic strength and time delay in the small world network. However, in the random network, it increases only with synaptic strength.
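The Izhikevich model named above is compact enough to reproduce. A single uncoupled neuron in a bursting-capable parameter regime, integrated with forward Euler as in the original 2003 paper (the network couplings, delays, and synapse models of this study are omitted):

```python
def izhikevich(a=0.02, b=0.2, c=-50.0, d=2.0, I=10.0, T=500.0, dt=0.25):
    """Izhikevich neuron: dv/dt = 0.04 v^2 + 5 v + 140 - u + I,
    du/dt = a (b v - u); on v >= 30 mV, reset v <- c and u <- u + d.
    Parameters c = -50, d = 2 give chattering (burst) firing.
    Returns spike times in ms."""
    v, u, spikes = -65.0, b * -65.0, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record and apply the reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich()
```

In the full study, each network node runs these two equations, with the injected current I replaced by sums of electrical and (delayed) chemical synaptic currents.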

  5. Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models

    PubMed Central

    Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam

    2016-01-01

    Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936
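The two outputs tracked in the paper, shared dimensionality and percent shared variance, can be illustrated on synthetic low-rank population activity. The crude eigenvalue-threshold estimate below is a stand-in for the factor-analysis pipeline the authors actually use, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_latent = 500, 40, 3

# shared low-rank activity plus private noise, a toy stand-in for trial-wise
# spike counts whose covariance splits into shared and private parts
L = rng.normal(0.0, 1.0, (n_neurons, n_latent))    # shared-factor loadings
Z = rng.normal(0.0, 1.0, (n_trials, n_latent))     # latent factor activity
noise_sd = 0.5
X = Z @ L.T + noise_sd * rng.normal(0.0, 1.0, (n_trials, n_neurons))

C = np.cov(X.T)
evals = np.sort(np.linalg.eigvalsh(C))[::-1]
# shared dimensionality: eigenvalues clearly above the private-noise floor
shared_dim = int(np.sum(evals > 2 * noise_sd ** 2))
pct_shared = float(evals[:shared_dim].sum() / evals.sum())
```

With enough trials and neurons the three planted latent dimensions dominate the spectrum; the paper's scaling question is how such estimates behave as n_trials and n_neurons shrink toward experimental reality.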

  6. The preBötzinger complex as a hub for network activity along the ventral respiratory column in the neonate rat.

    PubMed

    Gourévitch, Boris; Mellen, Nicholas

    2014-09-01

    In vertebrates, respiratory control is ascribed to heterogeneous respiration-modulated neurons along the Ventral Respiratory Column (VRC) in the medulla, which includes the preBötzinger Complex (preBötC), the putative respiratory rhythm generator. Here, the functional anatomy of the VRC was characterized via optical recordings in the sagittally sectioned neonate rat hindbrain, at sampling rates permitting coupling estimation between neuron pairs, so that each neuron was described using unitary, neuron-system, and coupling attributes. Structured coupling relations in local networks, significantly oriented coupling in the peri-inspiratory interval detected in pooled data, and significant correlations between firing rate and expiratory duration in subsets of neurons revealed network regulation at multiple timescales. Spatially averaged neuronal attributes, including coupling vectors, revealed a sharp boundary at the rostral margin of the preBötC, as well as other functional anatomical features congruent with identified structures, including the parafacial respiratory group and the nucleus ambiguus. Cluster analysis of attributes identified two spatially compact, homogeneous groups: the first overlapped with the preBötC, and was characterized by strong respiratory modulation and dense bidirectional coupling with itself and other groups, consistent with a central role for the preBötC in respiratory control; the second lay between the preBötC and the facial nucleus, and was characterized by weak respiratory modulation and weak coupling with other respiratory neurons, which is congruent with the cardiovascular regulatory networks found in this region. Other groups identified using cluster analysis suggested that networks along the VRC regulate expiratory duration and the transition to and from inspiration, but these groups were heterogeneous and anatomically dispersed. Thus, by recording local networks in parallel, this study found evidence for respiratory regulation at multiple timescales along the VRC, as well as a role for the preBötC in the integration of functionally disparate respiratory neurons. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. A stochastic-field description of finite-size spiking neural networks

    PubMed Central

    Longtin, André

    2017-01-01

    Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity—the density of active neurons per unit time—is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics. PMID:28787447

  8. Drifting States and Synchronization Induced Chaos in Autonomous Networks of Excitable Neurons.

    PubMed

    Echeveste, Rodrigo; Gros, Claudius

    2016-01-01

    The study of balanced networks of excitatory and inhibitory neurons has led to several open questions. On the one hand, it is as yet unclear whether the asynchronous state observed in the brain is autonomously generated, or whether it results from the interplay between external drive and internal dynamics. It is also not known which kinds of network variability lead to irregular spiking and which to synchronous firing states. Here we show how isolated networks of purely excitatory neurons generically show asynchronous firing whenever a minimal level of structural variability is present together with a refractory period. Our autonomous networks are composed of excitable units, in the form of leaky integrators spiking only in response to driving currents and remaining otherwise quiet. For a non-uniform network, composed exclusively of excitatory neurons, we find a rich repertoire of self-induced dynamical states. We show in particular that asynchronous drifting states may be stabilized in purely excitatory networks whenever a refractory period is present. Other states found are either fully synchronized or mixed, containing both drifting and synchronized components. The individual neurons considered are excitable and hence do not possess intrinsic natural firing frequencies. An effective network-wide distribution of natural frequencies is, however, generated autonomously through self-consistent feedback loops. The asynchronous drifting state is, additionally, amenable to an analytic solution. We find two types of asynchronous activity, with the individual neurons spiking regularly in the pure drifting state, albeit with a continuous distribution of firing frequencies. The activity of the drifting component, however, becomes irregular in the mixed state, due to the periodic driving of the synchronized component. 
We propose a new tool for the study of chaos in spiking neural networks, consisting of an analysis of the time series of pairs of consecutive interspike intervals. In this space, we show that a strange attractor with a fractal dimension of about 1.8 is formed in the mixed state described above.
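The proposed diagnostic is straightforward to reproduce: delay-embed the spike train as pairs of consecutive interspike intervals and estimate the dimension of the resulting point set. The sketch below (function names, box sizes and the crude box-counting estimator are our own illustrative choices, not the authors' code) shows the construction:

```python
import numpy as np

def isi_pairs(spike_times):
    """Delay-embed a spike train as pairs (ISI_n, ISI_{n+1})."""
    isi = np.diff(np.asarray(spike_times, dtype=float))
    return np.column_stack((isi[:-1], isi[1:]))

def box_counting_dimension(points, epsilons):
    """Crude box-counting estimate of the fractal dimension of a 2D point set."""
    pts = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
    counts = [np.unique(np.floor(pts / eps), axis=0).shape[0] for eps in epsilons]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

# A regular spike train collapses onto a single point of the map (dimension ~0),
# whereas the chaotic mixed state is reported to fill a set of dimension ~1.8.
regular = isi_pairs(np.arange(0.0, 10.0, 0.1))
print(box_counting_dimension(regular, [0.5, 0.25, 0.125]))
```

Applied to spike times from the mixed state, the same embedding should fill a structured region of the plane, the putative strange attractor.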

  9. Drifting States and Synchronization Induced Chaos in Autonomous Networks of Excitable Neurons

    PubMed Central

    Echeveste, Rodrigo; Gros, Claudius

    2016-01-01

    The study of balanced networks of excitatory and inhibitory neurons has led to several open questions. On the one hand, it is as yet unclear whether the asynchronous state observed in the brain is autonomously generated, or whether it results from the interplay between external drive and internal dynamics. It is also not known which kinds of network variability lead to irregular spiking and which to synchronous firing states. Here we show how isolated networks of purely excitatory neurons generically show asynchronous firing whenever a minimal level of structural variability is present together with a refractory period. Our autonomous networks are composed of excitable units, in the form of leaky integrators spiking only in response to driving currents and remaining otherwise quiet. For a non-uniform network, composed exclusively of excitatory neurons, we find a rich repertoire of self-induced dynamical states. We show in particular that asynchronous drifting states may be stabilized in purely excitatory networks whenever a refractory period is present. Other states found are either fully synchronized or mixed, containing both drifting and synchronized components. The individual neurons considered are excitable and hence do not possess intrinsic natural firing frequencies. An effective network-wide distribution of natural frequencies is, however, generated autonomously through self-consistent feedback loops. The asynchronous drifting state is, additionally, amenable to an analytic solution. We find two types of asynchronous activity, with the individual neurons spiking regularly in the pure drifting state, albeit with a continuous distribution of firing frequencies. The activity of the drifting component, however, becomes irregular in the mixed state, due to the periodic driving of the synchronized component. 
We propose a new tool for the study of chaos in spiking neural networks, consisting of an analysis of the time series of pairs of consecutive interspike intervals. In this space, we show that a strange attractor with a fractal dimension of about 1.8 is formed in the mixed state described above. PMID:27708572

  10. Autonomous Optimization of Targeted Stimulation of Neuronal Networks

    PubMed Central

    Kumar, Sreedhar S.; Wülfing, Jan; Okujeni, Samora; Boedecker, Joschka; Riedmiller, Martin

    2016-01-01

    Driven by clinical needs and progress in neurotechnology, targeted interaction with neuronal networks is of increasing importance. Yet, the dynamics of interaction between intrinsic ongoing activity in neuronal networks and their response to stimulation is unknown. Nonetheless, electrical stimulation of the brain is increasingly explored as a therapeutic strategy and as a means to artificially inject information into neural circuits. Strategies using regular or event-triggered fixed stimuli discount the influence of ongoing neuronal activity on the stimulation outcome and are therefore not optimal to induce specific responses reliably. Yet, without suitable mechanistic models, it is hardly possible to optimize such interactions, in particular when desired response features are network-dependent and are initially unknown. In this proof-of-principle study, we present an experimental paradigm using reinforcement-learning (RL) to optimize stimulus settings autonomously and evaluate the learned control strategy using phenomenological models. We asked how to (1) capture the interaction of ongoing network activity, electrical stimulation and evoked responses in a quantifiable ‘state’ to formulate a well-posed control problem, (2) find the optimal state for stimulation, and (3) evaluate the quality of the solution found. Electrical stimulation of generic neuronal networks grown from rat cortical tissue in vitro evoked bursts of action potentials (responses). We show that the dynamic interplay of their magnitudes and the probability to be intercepted by spontaneous events defines a trade-off scenario with a network-specific unique optimal latency maximizing stimulus efficacy. An RL controller was set to find this optimum autonomously. Across networks, stimulation efficacy increased in 90% of the sessions after learning and learned latencies strongly agreed with those predicted from open-loop experiments. 
Our results show that autonomous techniques can exploit quantitative relationships underlying activity-response interaction in biological neuronal networks to choose optimal actions. Simple phenomenological models can be useful to validate the quality of the resulting controllers. PMID:27509295
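The closed-loop search the study describes, an RL agent looking for the stimulation latency that maximizes efficacy, can be caricatured as a multi-armed bandit. The efficacy curve, its optimum at 0.8 s, and all parameters below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
latencies = np.linspace(0.1, 2.0, 20)        # candidate stimulus latencies (s)

def efficacy(lat):
    """Hypothetical unimodal efficacy curve with a unique optimum at 0.8 s."""
    return np.exp(-((lat - 0.8) ** 2) / 0.1) + 0.05 * rng.standard_normal()

Q = np.zeros(len(latencies))                 # running mean reward per latency
counts = np.zeros(len(latencies))
for trial in range(2000):
    if rng.random() < 0.1:                   # epsilon-greedy exploration
        a = int(rng.integers(len(latencies)))
    else:
        a = int(np.argmax(Q))
    r = efficacy(latencies[a])
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a]           # incremental mean update
best = latencies[int(np.argmax(Q))]
print(best)   # converges near the built-in optimum of 0.8 s
```

The actual study formulates a richer state (ongoing activity plus stimulation history) than this stateless bandit, but the exploration-exploitation trade-off is the same.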

  11. Autonomous Optimization of Targeted Stimulation of Neuronal Networks.

    PubMed

    Kumar, Sreedhar S; Wülfing, Jan; Okujeni, Samora; Boedecker, Joschka; Riedmiller, Martin; Egert, Ulrich

    2016-08-01

    Driven by clinical needs and progress in neurotechnology, targeted interaction with neuronal networks is of increasing importance. Yet, the dynamics of interaction between intrinsic ongoing activity in neuronal networks and their response to stimulation is unknown. Nonetheless, electrical stimulation of the brain is increasingly explored as a therapeutic strategy and as a means to artificially inject information into neural circuits. Strategies using regular or event-triggered fixed stimuli discount the influence of ongoing neuronal activity on the stimulation outcome and are therefore not optimal to induce specific responses reliably. Yet, without suitable mechanistic models, it is hardly possible to optimize such interactions, in particular when desired response features are network-dependent and are initially unknown. In this proof-of-principle study, we present an experimental paradigm using reinforcement-learning (RL) to optimize stimulus settings autonomously and evaluate the learned control strategy using phenomenological models. We asked how to (1) capture the interaction of ongoing network activity, electrical stimulation and evoked responses in a quantifiable 'state' to formulate a well-posed control problem, (2) find the optimal state for stimulation, and (3) evaluate the quality of the solution found. Electrical stimulation of generic neuronal networks grown from rat cortical tissue in vitro evoked bursts of action potentials (responses). We show that the dynamic interplay of their magnitudes and the probability to be intercepted by spontaneous events defines a trade-off scenario with a network-specific unique optimal latency maximizing stimulus efficacy. An RL controller was set to find this optimum autonomously. Across networks, stimulation efficacy increased in 90% of the sessions after learning and learned latencies strongly agreed with those predicted from open-loop experiments. 
Our results show that autonomous techniques can exploit quantitative relationships underlying activity-response interaction in biological neuronal networks to choose optimal actions. Simple phenomenological models can be useful to validate the quality of the resulting controllers.

  12. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics

    PubMed Central

    Sinapayen, Lana; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows the dynamics of a biologically inspired neural network to be steered. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions, without a separate reward system. PMID:28158309

  13. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    PubMed

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows the dynamics of a biologically inspired neural network to be steered. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions, without a separate reward system.

  14. The Topographical Mapping in Drosophila Central Complex Network and Its Signal Routing

    PubMed Central

    Chang, Po-Yen; Su, Ta-Shun; Shih, Chi-Tin; Lo, Chung-Chuan

    2017-01-01

    Neural networks regulate brain functions by routing signals. Therefore, investigating the detailed organization of a neural circuit at the cellular level is a crucial step toward understanding the neural mechanisms of brain functions. To study how a complicated neural circuit is organized, we analyzed recently published data on the neural circuit of the Drosophila central complex, a brain structure associated with a variety of functions including sensory integration and coordination of locomotion. We discovered that, except for a small number of “atypical” neuron types, the network structure formed by the 194 identified neuron types can be described by only a few simple mathematical rules. Specifically, the topological mapping formed by these neurons can be reconstructed by applying a generation matrix to a small set of initial neurons. By analyzing how information flows propagate with or without the atypical neurons, we found that while the general pattern of signal propagation in the central complex follows the simple topological mapping formed by the “typical” neurons, some atypical neurons can substantially re-route the signal pathways, implying specific roles for these neurons in sensory signal integration. The present study provides insights into the organizational principles and signal integration of the central complex. PMID:28443014

  15. Carbon nanotubes might improve neuronal performance by favouring electrical shortcuts.

    PubMed

    Cellot, Giada; Cilia, Emanuele; Cipollone, Sara; Rancic, Vladimir; Sucapane, Antonella; Giordani, Silvia; Gambazzi, Luca; Markram, Henry; Grandolfo, Micaela; Scaini, Denis; Gelain, Fabrizio; Casalis, Loredana; Prato, Maurizio; Giugliano, Michele; Ballerini, Laura

    2009-02-01

    Carbon nanotubes have been applied in several areas of nerve tissue engineering to probe and augment cell behaviour, to label and track subcellular components, and to study the growth and organization of neural networks. Recent reports show that nanotubes can sustain and promote neuronal electrical activity in networks of cultured cells, but the ways in which they affect cellular function are still poorly understood. Here, we show, using single-cell electrophysiology techniques, electron microscopy analysis and theoretical modelling, that nanotubes improve the responsiveness of neurons by forming tight contacts with the cell membranes that might favour electrical shortcuts between the proximal and distal compartments of the neuron. We propose the 'electrotonic hypothesis' to explain the physical interactions between the cell and nanotube, and the mechanisms of how carbon nanotubes might affect the collective electrical activity of cultured neuronal networks. These considerations offer a perspective that would allow us to predict or engineer interactions between neurons and carbon nanotubes.

  16. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord

    PubMed Central

    Fratini, Michela; Bukreeva, Inna; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spanò, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2015-01-01

    Faults in the vascular (VN) and neuronal networks of the spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, the lack of knowledge of the complete fine structure of the VN and neuronal system represents a crucial problem. Conventional 2D imaging yields incomplete spatial coverage, leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that X-ray high-resolution phase-contrast tomography allows the simultaneous visualization of the three-dimensional VN and neuronal systems of the ex-vivo mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with no contrast agent, no sectioning, and no destructive sample preparation. We image both the 3D distribution of the micro-capillary network and the micrometric nerve fibers, axon bundles and neuron somata. Our approach is well suited for pre-clinical investigation of neurodegenerative pathologies and spinal cord injuries, in particular for resolving the entangled relationship between the VN and the neuronal system. PMID:25686728

  17. Emergence of small-world structure in networks of spiking neurons through STDP plasticity.

    PubMed

    Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas

    2011-01-01

    In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
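The topological characterization used here (clustering coefficient, characteristic path length and small-world index of the thresholded connection matrix) can be sketched in plain NumPy. The Erdős-Rényi baselines C_rand = p and L_rand = ln(n)/ln(pn) are standard analytic approximations, and the whole sketch is illustrative rather than the authors' implementation.

```python
import numpy as np
from collections import deque

def small_world_metrics(A):
    """Clustering coefficient C, characteristic path length L and small-world
    index S = (C / C_rand) / (L / L_rand) of a binary adjacency matrix, using
    Erdos-Renyi baselines C_rand = p and L_rand = ln(n) / ln(p * n)."""
    U = ((A + A.T) > 0).astype(int)         # undirected view of the graph
    np.fill_diagonal(U, 0)
    n = U.shape[0]
    deg = U.sum(axis=1)
    tri = np.diag(U @ U @ U) / 2            # triangles through each node
    C = float(np.mean(np.where(deg > 1, tri / (deg * (deg - 1) / 2 + 1e-12), 0.0)))

    def bfs(s):                             # hop distances from node s
        d = np.full(n, np.inf)
        d[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.nonzero(U[u])[0]:
                if np.isinf(d[v]):
                    d[v] = d[u] + 1
                    queue.append(v)
        return d

    D = np.array([bfs(s) for s in range(n)])
    L = float(D[np.isfinite(D) & (D > 0)].mean())
    p = U.sum() / (n * (n - 1))
    S = (C / p) / (L / (np.log(n) / np.log(p * n)))
    return C, L, S

# A ring lattice (two neighbours per side) is highly clustered, so its
# small-world index exceeds that of an equivalent random graph.
n = 50
ring = np.zeros((n, n), dtype=int)
for i in range(n):
    for step in (1, 2):
        ring[i, (i + step) % n] = ring[i, (i - step) % n] = 1
C, L, S = small_world_metrics(ring)
print(C, L, S)
```

Applying the same metrics to the thresholded excitatory weight matrix at each simulation step would trace the emergence of small-world structure under STDP as the abstract describes.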

  18. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.

    PubMed

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been linked to it. The network under study here comprises three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in the case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (the bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed "winnerless competition", which here also has a random component. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. 
A similar model for a larger number of populations might suggest a general approach to study the dynamics of interacting populations of spiking networks.
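The noise-driven switching between attractors can be reproduced qualitatively with an Euler-Maruyama integration of two competing Lotka-Volterra rates. Note that the additive noise term and all parameter values below are illustrative choices (the paper's inherent noise is multiplicative), made so that switching is visible on a short run.

```python
import numpy as np

rng = np.random.default_rng(1)
a, sig, dt, steps = 2.0, 0.2, 1e-3, 200_000     # competition, noise, step, length
x, y = 0.6, 0.4                                  # subnetwork activation rates
dom = np.empty(steps, dtype=bool)
for k in range(steps):
    dWx, dWy = rng.normal(0.0, np.sqrt(dt), 2)   # Wiener increments
    x += x * (1 - x - a * y) * dt + sig * dWx
    y += y * (1 - y - a * x) * dt + sig * dWy
    x = float(np.clip(x, 1e-6, 3.0))             # keep rates in a sane range
    y = float(np.clip(y, 1e-6, 3.0))
    dom[k] = x > y
switches = int(np.abs(np.diff(dom.astype(int))).sum())
print(switches)   # noise-driven transitions between the two attractors
```

With a > 1 the deterministic system is bistable (one subnetwork wins); the noise kicks the state over the saddle, producing the exponentially distributed dwell times reported in the abstract.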

  19. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State

    PubMed Central

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been linked to it. The network under study here comprises three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in the case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (the bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed “winnerless competition”, which here also has a random component. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. 
A similar model for a larger number of populations might suggest a general approach to study the dynamics of interacting populations of spiking networks. PMID:26407178

  20. Statistical Neurodynamics.

    NASA Astrophysics Data System (ADS)

    Paine, Gregory Harold

    1982-03-01

    The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. 
Discussions are given as to how statistical neurodynamics can be used to gain a better understanding of the behavior of these systems.
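The Maximum-Entropy construction invoked in the abstract has a standard form; as a sketch, with E(x) denoting the single constant of motion obtained from the Lagrange function and λ its Lagrange multiplier:

```latex
% Maximize the entropy  S[p] = -\int p(x)\,\ln p(x)\,dx
% subject to normalization and a fixed mean of the constant of motion E(x):
%   \int p(x)\,dx = 1, \qquad \int p(x)\,E(x)\,dx = \langle E \rangle .
% The stationary distribution is the Gibbs-like form
p(x) \;=\; \frac{1}{Z(\lambda)}\, e^{-\lambda E(x)},
\qquad Z(\lambda) \;=\; \int e^{-\lambda E(x)}\,dx .
```

The thesis's numerical estimation of the Lagrange multiplier for the "Free Thinker" network corresponds to fixing λ from the observed value of ⟨E⟩.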

  1. Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy

    PubMed Central

    Li, Zhaohui; Li, Xiaoli

    2013-01-01

    Estimating the causal interaction between neurons is very important for better understanding the functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains, which quantifies the fraction of ordinal information in one neuron that is also present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons regardless of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on simulated biophysically realistic synapses, an Izhikevich cortical network based on the neuronal model is employed. It is found that the NPTE method is able to characterize mutual interactions and exactly identify spurious causality in a network of three neurons. We conclude that the proposed method yields a more reliable comparison of interactions between different pairs of neurons and is a promising tool for uncovering more details of neural coding. PMID:23940662
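As a simplified stand-in for NPTE, plain transfer entropy on binarized spike trains with history length 1 (without the ordinal-pattern symbolization or the normalization of the paper) already captures the directional idea:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Transfer entropy TE(X -> Y) in bits between two binary sequences,
    with history length 1 (a simplified, un-normalized stand-in for NPTE)."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles = Counter(y[:-1].tolist())
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_yx[(y0, x0)]            # p(y_{t+1} | y_t, x_t)
        p_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y_{t+1} | y_t)
        te += (c / n) * np.log2(p_full / p_self)
    return te

# x drives y with a one-step delay, so TE(x -> y) should be large (~1 bit)
# while TE(y -> x) should be near zero.
rng = np.random.default_rng(0)
x = (rng.random(10_000) < 0.5).astype(int)
y = np.roll(x, 1)                                  # y copies x with lag 1
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

NPTE replaces the raw symbols by ordinal patterns of the interspike-interval series and normalizes the result, which is what makes it robust against firing rate in the paper's comparison.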

  2. DNA-carbon nano onion aggregate: triangle, hexagon, six-petal flower to dead-end network

    NASA Astrophysics Data System (ADS)

    Babar, Dipak Gorakh; Pakhira, Bholanath; Sarkar, Sabyasachi

    2017-08-01

    The interaction between calf-thymus (CT) dsDNA and water-soluble carbon nano onions (wsCNO) in water leads to the denaturation of dsDNA (double-stranded) to ssDNA (single-stranded), as monitored by optical spectroscopy. The ssDNA concomitantly wraps the spiky surface of the wsCNO to create a triangular aggregate as the building block, as observed in time-dependent SEM images. These triangles aggregate further, leading via hexagons to a six-petal flower arrangement, and finally reach a dead-end network, as imaged by SEM and optical fluorescence microscopy. The dead-end network aggregate loses the intrinsic optical properties of DNA, suggesting complete loss of its activity.

  3. Neural signal registration and analysis of axons grown in microchannels

    NASA Astrophysics Data System (ADS)

    Pigareva, Y.; Malishev, E.; Gladkov, A.; Kolpakov, V.; Bukatin, A.; Mukhina, I.; Kazantsev, V.; Pimashkin, A.

    2016-08-01

    Registration of neuronal bioelectrical signals remains one of the main physical tools for studying the fundamental mechanisms of signal processing in the brain. Neurons generate spiking patterns which propagate through a complex map of neural network connectivity. Extracellular recording of isolated axons grown in microchannels provides amplification of the signal for detailed study of spike propagation. In this study we used neuronal hippocampal cultures grown in microfluidic devices combined with microelectrode arrays to investigate changes in electrical activity during neural network development. We found that, 5 days in vitro after culture plating, spiking activity appears first in the microchannels and over the next 2-3 days appears on the electrodes of the overall neural network. We conclude that this approach provides a convenient method for studying neural signal processing and the development of functional structure at the single-cell and network levels of the neuronal culture.

  4. Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function.

    PubMed

    Reimann, Michael W; Nolte, Max; Scolamiero, Martina; Turner, Katharine; Perin, Rodrigo; Chindemi, Giuseppe; Dłotko, Paweł; Levi, Ran; Hess, Kathryn; Markram, Henry

    2017-01-01

    The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. We have now come closer to describing such a link by taking the direction of synaptic transmission into account, constructing graphs of a network that reflect the direction of information flow, and analyzing these directed graphs using algebraic topology. Applying this approach to a local network of neurons in the neocortex revealed a remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.
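The central objects of the analysis, directed cliques (the simplices of a directed flag complex), can be counted naively for small matrices. The brute-force sketch below, exponential in clique size and purely illustrative, counts ordered tuples in which every connection runs from an earlier to a later neuron:

```python
import numpy as np
from itertools import combinations, permutations

def count_directed_simplices(A, k):
    """Count directed (k-1)-simplices of a binary connectivity matrix A:
    ordered k-tuples of neurons in which every synapse runs from an earlier
    to a later neuron in the ordering (an all-to-all feed-forward motif)."""
    n = A.shape[0]
    total = 0
    for nodes in combinations(range(n), k):
        for order in permutations(nodes):
            if all(A[order[i], order[j]]
                   for i in range(k) for j in range(i + 1, k)):
                total += 1
    return total

# A feed-forward triangle is exactly one directed 2-simplex,
# whereas a cyclic triangle contains none.
ff = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
cyc = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
print(count_directed_simplices(ff, 3), count_directed_simplices(cyc, 3))  # → 1 0
```

For connectomes of realistic size this enumeration is infeasible; dedicated tools based on the directed flag complex are used in practice.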

  5. Improved system identification using artificial neural networks and analysis of individual differences in responses of an identified neuron.

    PubMed

    Costalago Meruelo, Alicia; Simpson, David M; Veres, Sandor M; Newland, Philip L

    2016-03-01

    Mathematical modelling is used routinely to understand the coding properties and dynamics of responses of neurons and neural networks. Here we analyse the effectiveness of Artificial Neural Networks (ANNs) as a modelling tool for motor neuron responses. We used ANNs to model the synaptic responses of an identified motor neuron, the fast extensor motor neuron, of the desert locust in response to displacement of a sensory organ, the femoral chordotonal organ, which monitors movements of the tibia relative to the femur of the leg. The aim of the study was threefold: first, to determine the potential value of ANNs as tools to model and investigate neural networks; second, to understand the generalisation properties of ANNs across individuals and with different input signals; and third, to understand individual differences in responses of an identified neuron. A metaheuristic algorithm was developed to design the ANN architectures. The performance of the models generated by the ANNs was compared with those generated by previous mathematical models of the same neuron. The results suggest that ANNs are significantly better than LNL and Wiener models in predicting specific neural responses to Gaussian white noise, but not significantly different when tested with sinusoidal inputs. They are also able to predict responses of the same neuron in different individuals irrespective of which animal was used to develop the model, although notable differences between some individuals were evident. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
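As a toy stand-in for this kind of system identification, the sketch below fits a single-hidden-layer network by batch gradient descent to a synthetic stimulus-response pair. The surrogate "neuron", the architecture and all hyperparameters are our own illustrative choices, not the metaheuristically designed networks of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
T, win, hidden, lr = 2000, 5, 8, 0.1
u = rng.standard_normal(T)                            # surrogate sensory input
X = np.array([u[t - win:t] for t in range(win, T)])   # sliding input windows
w_true = np.array([0.1, 0.2, 0.3, 0.4, 0.5])          # hypothetical "neuron" filter
Y = np.tanh(X @ w_true)                               # synthetic response to fit

W1 = rng.standard_normal((win, hidden)) * 0.3         # input-to-hidden weights
W2 = rng.standard_normal(hidden) * 0.3                # hidden-to-output weights
for _ in range(2000):                                 # plain batch gradient descent
    H = np.tanh(X @ W1)
    err = H @ W2 - Y
    W2 -= lr * H.T @ err / len(Y)
    W1 -= lr * X.T @ ((err[:, None] * W2) * (1.0 - H ** 2)) / len(Y)
mse = float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))
print(mse, float(np.var(Y)))                          # fit error vs. output variance
```

In the study itself, the architecture search was performed by a metaheuristic and the inputs were recorded chordotonal-organ displacements rather than white noise generated in place.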

  6. A simplified protocol for differentiation of electrophysiologically mature neuronal networks from human induced pluripotent stem cells.

    PubMed

    Gunhanlar, N; Shpak, G; van der Kroeg, M; Gouty-Colomer, L A; Munshi, S T; Lendemeijer, B; Ghazvini, M; Dupont, C; Hoogendijk, W J G; Gribnau, J; de Vrij, F M S; Kushner, S A

    2018-05-01

    Progress in elucidating the molecular and cellular pathophysiology of neuropsychiatric disorders has been hindered by the limited availability of living human brain tissue. The emergence of induced pluripotent stem cells (iPSCs) has offered a unique alternative strategy using patient-derived functional neuronal networks. However, methods for reliably generating iPSC-derived neurons with mature electrophysiological characteristics have been difficult to develop. Here, we report a simplified differentiation protocol that yields electrophysiologically mature iPSC-derived cortical lineage neuronal networks without the need for astrocyte co-culture or specialized media. This protocol generates a consistent 60:40 ratio of neurons and astrocytes that arise from a common forebrain neural progenitor. Whole-cell patch-clamp recordings of 114 neurons derived from three independent iPSC lines confirmed their electrophysiological maturity, including resting membrane potential (-58.2±1.0 mV), capacitance (49.1±2.9 pF), action potential (AP) threshold (-50.9±0.5 mV) and AP amplitude (66.5±1.3 mV). Nearly 100% of neurons were capable of firing APs, of which 79% had sustained trains of mature APs with minimal accommodation (peak AP frequency: 11.9±0.5 Hz) and 74% exhibited spontaneous synaptic activity (amplitude, 16.03±0.82 pA; frequency, 1.09±0.17 Hz). We expect this protocol to be of broad applicability for implementing iPSC-based neuronal network models of neuropsychiatric disorders.

  7. Cortical network modeling: analytical methods for firing rates and some properties of networks of LIF neurons.

    PubMed

    Tuckwell, Henry C

    2006-01-01

    The circuitry of cortical networks involves interacting populations of excitatory (E) and inhibitory (I) neurons whose relationships are now known to a large extent. Inputs to E- and I-cells may have their origins in remote or local cortical areas. We consider a rudimentary model involving E- and I-cells. One of our goals is to test an analytic approach to finding firing rates in neural networks without using a diffusion approximation, and to this end we consider in detail networks of excitatory neurons with leaky integrate-and-fire (LIF) dynamics. A simple measure of synchronization, denoted by S(q), where q is between 0 and 100, is introduced. Fully connected E-networks have a strong tendency to become dominated by synchronously firing groups of cells, except when inputs are relatively weak. We observed random or asynchronous firing in such networks with diverse sets of parameter values. When such firing patterns were found, the analytical approach was often able to accurately predict average neuronal firing rates. We also considered several properties of E-E networks, distinguishing several kinds of firing pattern, including those with silences before or after periods of intense activity and those with periodic synchronization. We investigated the occurrence of synchronized firing with respect to changes in the internal excitatory postsynaptic potential (EPSP) magnitude in a network of 100 neurons with fixed values of the remaining parameters. When the internal EPSP size was less than a certain value, synchronization was absent. The amount of synchronization then increased slowly as the EPSP amplitude increased until, at a particular EPSP size, the amount of synchronization abruptly increased, with S(5) attaining the maximum value of 100%. We also found network frequency transfer characteristics for various network sizes and found a linear dependence of firing frequency over wide ranges of the external afferent frequency, with non-linear effects at lower input frequencies. The theory may also be applied to sparsely connected networks, whose firing behaviour was found to change abruptly as the probability of a connection passed through a critical value. The analytical method was also found to be useful for a feed-forward excitatory network and a network of excitatory and inhibitory neurons.
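    The excitatory LIF dynamics and the binned synchronization index described above can be illustrated with a toy simulation. This is a minimal sketch, not the paper's model: the network size, EPSP size, noisy drive statistics, and the exact binned definition of the S(q)-style index are all assumptions made for illustration.

```python
import random

random.seed(1)

# Toy all-to-all excitatory LIF network with noisy external drive.
N = 50            # network size
DT = 0.1          # time step (ms)
TAU = 20.0        # membrane time constant (ms)
V_TH, V_RESET = 1.0, 0.0
EPSP = 0.02       # internal EPSP size (threshold units); larger -> more synchrony
DRIVE = 0.06      # mean external input per step

v = [random.uniform(0.0, V_TH) for _ in range(N)]
spike_times = [[] for _ in range(N)]

STEPS = 5000
for t in range(STEPS):
    fired = []
    for i in range(N):
        # leaky integration plus noisy external drive
        v[i] += DT * (-v[i] / TAU) + random.gauss(DRIVE, 0.02)
        if v[i] >= V_TH:
            fired.append(i)
    for i in fired:
        v[i] = V_RESET
        spike_times[i].append(t)
    # broadcast this step's spikes as EPSPs to all other neurons
    if fired:
        kick = EPSP * len(fired)
        for i in range(N):
            if i not in fired:
                v[i] += kick

def synchrony(trains, q, bin_steps=10):
    """Crude S(q)-style index: percentage of all spikes that fall in time
    bins where at least q% of the network fires together."""
    counts, total = {}, 0
    for train in trains:
        for t in train:
            counts[t // bin_steps] = counts.get(t // bin_steps, 0) + 1
            total += 1
    if total == 0:
        return 0.0
    thresh = q / 100.0 * N
    return 100.0 * sum(c for c in counts.values() if c >= thresh) / total

s5 = synchrony(spike_times, q=5)
print(f"S(5)-like synchrony: {s5:.1f}%")
```

Raising EPSP in this sketch pushes the index toward 100%, loosely mirroring the abrupt onset of synchronization the abstract reports.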

  8. Intrinsic and Extrinsic Neuromodulation of Olfactory Processing.

    PubMed

    Lizbinski, Kristyn M; Dacks, Andrew M

    2017-01-01

    Neuromodulation is a ubiquitous feature of neural systems, allowing flexible, context specific control over network dynamics. Neuromodulation was first described in invertebrate motor systems and early work established a basic dichotomy for neuromodulation as having either an intrinsic origin (i.e., neurons that participate in network coding) or an extrinsic origin (i.e., neurons from independent networks). In this conceptual dichotomy, intrinsic sources of neuromodulation provide a "memory" by adjusting network dynamics based upon previous and ongoing activation of the network itself, while extrinsic neuromodulators provide the context of ongoing activity of other neural networks. Although this dichotomy has been thoroughly considered in motor systems, it has received far less attention in sensory systems. In this review, we discuss intrinsic and extrinsic modulation in the context of olfactory processing in invertebrate and vertebrate model systems. We begin by discussing presynaptic modulation of olfactory sensory neurons by local interneurons (LNs) as a mechanism for gain control based on ongoing network activation. We then discuss the cell-class specific effects of serotonergic centrifugal neurons on olfactory processing. Finally, we briefly discuss the integration of intrinsic and extrinsic neuromodulation (metamodulation) as an effective mechanism for exerting global control over olfactory network dynamics. The heterogeneous nature of neuromodulation is a recurring theme throughout this review as the effects of both intrinsic and extrinsic modulation are generally non-uniform.

  9. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    PubMed

    Dong, Qiulei; Wang, Hong; Hu, Zhanyi

    2018-02-01

    Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1.3 million training images of 1000 categories. We explore whether the DNN neurons (units in DNNs) possess image object representational statistics similar to monkey IT neurons, particularly when the network becomes deeper and the number of image categories becomes larger, using VGG19, a typical and widely used deep network of 19 layers in the computer vision field. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the response statistics of 674 IT neurons to 806 image stimuli are analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate the three issues in this letter using the same three measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) the variation trends of the response statistics of VGG19 neurons at different layers from low to high, and (3) the variation trends of the response statistics of VGG19 neurons when the numbers of stimuli and neurons increase. We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons are fundamentally different from those of IT neurons in most cases; by increasing the number of neurons in different layers and the number of stimuli, the response statistics of neurons at different layers from low to high do not substantially change; and the estimated intrinsic dimensionality values at the low convolutional layers of VGG19 are considerably larger than the value of approximately 100 reported for IT neurons in Lehky et al. (2014), whereas those at the high fully connected layers are close to or lower than 100. To the best of our knowledge, this work is the first attempt to analyze the response statistics of DNN neurons with respect to primate IT neurons in image object representation.
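    Two of the three measures named above (kurtosis and the Pareto tail index) are straightforward to compute. The sketch below uses synthetic lognormal "responses" in place of VGG19 or IT data, and the Hill estimator is an assumed choice of tail-index estimator; the papers' exact estimation procedure may differ.

```python
import math
import random

random.seed(0)

def kurtosis(xs):
    """Non-excess kurtosis (equals 3 for a Gaussian)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 ** 2)

def hill_tail_index(xs, k=50):
    """Hill estimator of the Pareto tail index from the k largest values."""
    top = sorted(xs, reverse=True)[: k + 1]
    return k / sum(math.log(x / top[k]) for x in top[:k])

# 806 synthetic positive "responses" standing in for one unit's stimulus set.
responses = [random.lognormvariate(0.0, 1.0) for _ in range(806)]
print("kurtosis:", round(kurtosis(responses), 2))
print("tail index:", round(hill_tail_index(responses), 2))
```

A heavy-tailed response distribution shows up here as kurtosis well above 3 and a small tail index; applying the same two functions to each unit's response vector gives the per-neuron selectivity statistics the letter compares across layers.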

  10. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a manner similar to the change of weights in the synapse matrix elements. In this manner, training time is decreased by as much as three orders of magnitude.
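    The idea of learning a per-neuron gain alongside the synaptic weights can be shown in a few lines. This is a sketch of the concept only, not the patented circuit: a single tanh neuron fits a target curve while gradient descent updates the weight, bias, and a gain g inside the activation tanh(g * net).

```python
import math

# One gradient step for a single neuron y = tanh(g * (w*x + b)).
# g is the neuron's adaptable gain, trained co-equally with w and b.
def step(w, b, g, x, target, lr):
    net = w * x + b
    y = math.tanh(g * net)
    common = (y - target) * (1.0 - y * y)   # error times activation slope
    return (w - lr * common * g * x,        # dL/dw = common * g * x
            b - lr * common * g,            # dL/db = common * g
            g - lr * common * net)          # dL/dg = common * net

# Target: a steeper tanh than the initial neuron can produce with g = 1.
samples = [(x / 10.0, math.tanh(3.0 * x / 10.0)) for x in range(-10, 11)]
w, b, g = 0.3, 0.0, 1.0
for _ in range(3000):
    for x, t in samples:
        w, b, g = step(w, b, g, x, t, lr=0.1)

mse = sum((math.tanh(g * (w * x + b)) - t) ** 2 for x, t in samples) / len(samples)
print(f"learned gain g = {g:.2f}, mse = {mse:.5f}")
```

Because the gain is a free parameter, the neuron can steepen its own transfer function instead of forcing all of the adaptation into the weights, which is the intuition behind the claimed speed-up.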

  11. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging

    PubMed Central

    Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.

    2017-01-01

    Background: Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s–1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. New method: Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results: We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule-associated protein tau and wild-type tau. Comparison with existing methods: We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions: We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
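    The template-based detection idea mentioned above can be illustrated with a toy trace. This Python sketch shows the general technique, not FluoroSNNAP's MATLAB implementation: a canonical calcium-transient waveform (assumed rise/decay time constants) is correlated against a synthetic dF/F trace, and local maxima of the correlation above a threshold are flagged as transient onsets.

```python
import math
import random

random.seed(2)

def transient(length=30, tau_rise=2.0, tau_decay=10.0):
    """Canonical transient: difference of exponentials (fast rise, slow decay)."""
    return [math.exp(-t / tau_decay) - math.exp(-t / tau_rise) for t in range(length)]

def corr(a, b):
    """Pearson correlation of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

# Synthetic dF/F trace: noise plus transients at known onsets.
T, onsets = 600, [100, 250, 400]
tpl = transient()
trace = [random.gauss(0.0, 0.02) for _ in range(T)]
for onset in onsets:
    for i, amp in enumerate(tpl):
        trace[onset + i] += amp

# Slide the template and keep local correlation maxima above threshold.
scores = [corr(trace[t : t + len(tpl)], tpl) for t in range(T - len(tpl))]
detected = [t for t in range(1, len(scores) - 1)
            if scores[t] > 0.85
            and scores[t] >= scores[t - 1] and scores[t] >= scores[t + 1]]
print("detected onsets:", detected)
```

Matching the whole waveform shape, rather than a single amplitude threshold, is what makes this style of detector more robust to noise than peak-based detection.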

  12. Deep Brain Stimulation

    PubMed Central

    Lyketsos, Constantine G.; Pendergrass, Jo Cara; Lozano, Andres M.

    2012-01-01

    Recent studies have identified an association between memory deficits and defects of the integrated neuronal cortical areas known collectively as the default mode network. It is conceivable that the amyloid deposition or other molecular abnormalities seen in patients with Alzheimer’s disease may interfere with this network and disrupt neuronal circuits beyond the localized brain areas. Therefore, Alzheimer’s disease may be both a degenerative disease and a broader system-level disorder affecting integrated neuronal pathways involved in memory. In this paper, we describe the rationale and provide some evidence to support the study of deep brain stimulation of the hippocampal fornix as a novel treatment to improve neuronal circuitry within these integrated networks and thereby sustain memory function in early Alzheimer’s disease. PMID:23346514

  13. Low Dose Isoflurane Exerts Opposing Effects on Neuronal Network Excitability in Neocortex and Hippocampus

    PubMed Central

    Ranft, Andreas; von Meyer, Ludwig; Zieglgänsberger, Walter; Kochs, Eberhard; Dodt, Hans-Ulrich

    2012-01-01

    The anesthetic excitement phase occurring during induction of anesthesia with volatile anesthetics is a well-known phenomenon in clinical practice. However, the physiological mechanisms underlying anesthetic-induced excitation are still unclear. Here we provide evidence from in vitro experiments performed on rat brain slices that the general anesthetic isoflurane at a concentration of about 0.1 mM can enhance neuronal network excitability in the hippocampus, while simultaneously reducing it in the neocortex. In contrast, isoflurane tissue concentrations above 0.3 mM caused, as expected, a pronounced reduction of excitability in both brain regions. Neuronal network excitability was assessed by combining simultaneous multisite stimulation via a multielectrode array with recording intrinsic optical signals as a measure of neuronal population activity. PMID:22723999

  14. Universal Critical Dynamics in High Resolution Neuronal Avalanche Data

    NASA Astrophysics Data System (ADS)

    Friedman, Nir; Ito, Shinya; Brinkman, Braden A. W.; Shimono, Masanori; DeVille, R. E. Lee; Dahmen, Karin A.; Beggs, John M.; Butler, Thomas C.

    2012-05-01

    The tasks of neural computation are remarkably diverse. To function optimally, neuronal networks have been hypothesized to operate near a nonequilibrium critical point. However, experimental evidence for critical dynamics has been inconclusive. Here, we show that the dynamics of cultured cortical networks are critical. We analyze neuronal network data collected at the individual neuron level using the framework of nonequilibrium phase transitions. Among the most striking predictions confirmed is that the mean temporal profiles of avalanches of widely varying durations are quantitatively described by a single universal scaling function. We also show that the data have three additional features predicted by critical phenomena: approximate power law distributions of avalanche sizes and durations, samples in subcritical and supercritical phases, and scaling laws between anomalous exponents.
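    One step in avalanche analyses of this kind is fitting a power-law exponent to the distribution of avalanche sizes. The sketch below uses the standard maximum-likelihood estimator in its continuous approximation on synthetic data; the exponent value 1.5 is the mean-field prediction, and the data-generation step is purely illustrative, not the paper's recordings.

```python
import math
import random

random.seed(3)

def sample_power_law(alpha, xmin, n):
    """Inverse-CDF sampling of a continuous power law p(x) ~ x^(-alpha), x >= xmin."""
    return [xmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(sizes, xmin):
    """Continuous MLE: alpha_hat = 1 + n / sum(ln(x / xmin)) over the tail x >= xmin."""
    tail = [s for s in sizes if s >= xmin]
    return 1.0 + len(tail) / sum(math.log(s / xmin) for s in tail)

# Synthetic avalanche sizes drawn from the mean-field exponent 1.5.
sizes = sample_power_law(alpha=1.5, xmin=1.0, n=20000)
alpha_hat = mle_exponent(sizes, xmin=1.0)
print(f"estimated exponent: {alpha_hat:.3f}")
```

The MLE avoids the well-known biases of fitting a straight line to a log-log histogram, which matters when deciding whether recorded avalanche data are consistent with critical exponents.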

  15. From in silico astrocyte cell models to neuron-astrocyte network models: A review.

    PubMed

    Oschmann, Franziska; Berry, Hugues; Obermayer, Klaus; Lenk, Kerstin

    2018-01-01

    The idea that astrocytes may be active partners in synaptic information processing has recently emerged from abundant experimental reports. Because of their spatial proximity to neurons and their bidirectional communication with them, astrocytes are now considered as an important third element of the synapse. Astrocytes integrate and process synaptic information and by doing so generate cytosolic calcium signals that are believed to reflect neuronal transmitter release. Moreover, they regulate neuronal information transmission by releasing gliotransmitters into the synaptic cleft affecting both pre- and postsynaptic receptors. Concurrent with the first experimental reports of the astrocytic impact on neural network dynamics, computational models describing astrocytic functions have been developed. In this review, we give an overview of the published computational models of astrocytic functions, from single-cell dynamics to the tripartite synapse level and network models of astrocytes and neurons. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Towards deep learning with segregated dendrites

    PubMed Central

    Guerguiev, Jordan; Lillicrap, Timothy P

    2017-01-01

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations—the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons. PMID:29205151

  17. Towards deep learning with segregated dendrites.

    PubMed

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

  18. Flow-Based Network Analysis of the Caenorhabditis elegans Connectome

    PubMed Central

    Bacik, Karol A.; Schaub, Michael T.; Billeh, Yazan N.; Barahona, Mauricio

    2016-01-01

    We exploit flow propagation on the directed neuronal network of the nematode C. elegans to reveal dynamically relevant features of its connectome. We find flow-based groupings of neurons at different levels of granularity, which we relate to functional and anatomical constituents of its nervous system. A systematic in silico evaluation of the full set of single and double neuron ablations is used to identify deletions that induce the most severe disruptions of the multi-resolution flow structure. Such ablations are linked to functionally relevant neurons, and suggest potential candidates for further in vivo investigation. In addition, we use the directional patterns of incoming and outgoing network flows at all scales to identify flow profiles for the neurons in the connectome, without pre-imposing a priori categories. The four flow roles identified are linked to signal propagation motivated by biological input-response scenarios. PMID:27494178

  19. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    PubMed

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3 mm × 3 mm microchip consumes 1.29 mW of power, making it promising for applications including neuromorphic modeling and neural prostheses.

  20. Fast reversible learning based on neurons functioning as anisotropic multiplex hubs

    NASA Astrophysics Data System (ADS)

    Vardi, Roni; Goldental, Amir; Sheinin, Anton; Sardi, Shira; Kanter, Ido

    2017-05-01

    Neural networks are composed of neurons and synapses, which are responsible for learning in a slow adaptive dynamical process. Here we experimentally show that neurons act like independent anisotropic multiplex hubs, which relay and mute incoming signals according to their input directions. Theoretically, the observed information routing enriches the computational capabilities of neurons by allowing, for instance, equalization among different information routes in the network, as well as high-frequency transmission of complex time-dependent signals constructed via several parallel routes. In addition, such hubs adaptively eliminate very noisy neurons from the dynamics of the network, preventing the masking of information transmission. The timescales for these features are several seconds at most, as opposed to the imprinting of information by synaptic plasticity, a process that takes minutes or longer. These results open the horizon to an understanding of fast and adaptive learning in the brain's higher cognitive functions.

  1. Novel transcriptional networks regulated by CLOCK in human neurons.

    PubMed

    Fontenot, Miles R; Berto, Stefano; Liu, Yuxiang; Werthmann, Gordon; Douglas, Connor; Usui, Noriyoshi; Gleason, Kelly; Tamminga, Carol A; Takahashi, Joseph S; Konopka, Genevieve

    2017-11-01

    The molecular mechanisms underlying human brain evolution are not fully understood; however, previous work suggested that expression of the transcription factor CLOCK in the human cortex might be relevant to human cognition and disease. In this study, we investigated this novel transcriptional role for CLOCK in human neurons by performing chromatin immunoprecipitation sequencing for endogenous CLOCK in adult neocortices and RNA sequencing following CLOCK knockdown in differentiated human neurons in vitro. These data suggested that CLOCK regulates the expression of genes involved in neuronal migration, and a functional assay showed that CLOCK knockdown increased neuronal migratory distance. Furthermore, dysregulation of CLOCK disrupts coexpressed networks of genes implicated in neuropsychiatric disorders, and the expression of these networks is driven by hub genes with human-specific patterns of expression. These data support a role for CLOCK-regulated transcriptional cascades involved in human brain evolution and function. © 2017 Fontenot et al.; Published by Cold Spring Harbor Laboratory Press.

  2. Parallel multipoint recording of aligned and cultured neurons on corresponding Micro Channel Array toward on-chip cell analysis.

    PubMed

    Tonomura, W; Moriguchi, H; Jimbo, Y; Konishi, S

    2008-01-01

    This paper describes an advanced Micro Channel Array (MCA) for simultaneous multipoint recording from neuronal networks. The developed MCA is designed for neuronal network analysis, which the co-authors have previously studied using MEA (Micro Electrode Array) systems. The MCA employs the principle of extracellular recording and offers the following advantages. First, the electrodes integrated around individual micro channels are electrically isolated, enabling parallel multipoint recording. Second, sucking and clamping cells through the micro channels is expected to improve cellular selectivity and the S/N ratio. In this study, hippocampal neurons were cultured on the developed MCA. As a result, spontaneous and evoked spike potentials could be recorded by sucking and clamping the cells at multiple points. Herein, we describe these successful experimental results together with the design and fabrication of the advanced MCA toward on-chip analysis of neuronal networks.

  3. Dopamine in motivational control: rewarding, aversive, and alerting

    PubMed Central

    Bromberg-Martin, Ethan S.; Matsumoto, Masayuki; Hikosaka, Okihide

    2010-01-01

    Midbrain dopamine neurons are well known for their strong responses to rewards and their critical role in positive motivation. It has become increasingly clear, however, that dopamine neurons also transmit signals related to salient but non-rewarding experiences such as aversive and alerting events. Here we review recent advances in understanding the reward and non-reward functions of dopamine. Based on these data, we propose that dopamine neurons come in multiple types that are connected with distinct brain networks and have distinct roles in motivational control. Some dopamine neurons encode motivational value, supporting brain networks for seeking, evaluation, and value learning. Others encode motivational salience, supporting brain networks for orienting, cognition, and general motivation. Both types of dopamine neurons are augmented by an alerting signal involved in rapid detection of potentially important sensory cues. We hypothesize that these dopaminergic pathways for value, salience, and alerting cooperate to support adaptive behavior. PMID:21144997

  4. GABA-A receptor antagonists increase firing, bursting and synchrony of spontaneous activity in neuronal networks grown on microelectrode arrays: a step towards chemical "fingerprinting"

    EPA Science Inventory

    Assessment of effects on spontaneous network activity in neurons grown on MEAs is a proposed method to screen chemicals for potential neurotoxicity. In addition, differential effects on network activity (chemical "fingerprints") could be used to classify chemical modes of action....

  5. Information-geometric measures as robust estimators of connection strengths and external inputs.

    PubMed

    Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi

    2009-08-01

    Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
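    The first- and second-order measures from the two-neuron log-linear model mentioned above have closed forms in terms of the joint firing probabilities: with p(x1, x2) ∝ exp(θ1·x1 + θ2·x2 + θ12·x1·x2), one gets θ1 = log(p10/p00), θ2 = log(p01/p00), and θ12 = log(p11·p00 / (p10·p01)). The sketch below estimates them from synthetic correlated binary spike trains; the data generator and the small additive regularizer are assumptions for illustration, not the paper's procedure.

```python
import math
import random

random.seed(4)

def ig_measures(x1, x2):
    """Estimate theta_1, theta_2, theta_12 of the two-neuron log-linear model
    from binned binary spike trains x1, x2 (lists of 0/1)."""
    n = len(x1)
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for a, b in zip(x1, x2):
        counts[(a, b)] += 1
    # Small additive regularizer so no cell probability is exactly zero.
    p = {k: (v + 0.5) / (n + 2.0) for k, v in counts.items()}
    theta1 = math.log(p[(1, 0)] / p[(0, 0)])
    theta2 = math.log(p[(0, 1)] / p[(0, 0)])
    theta12 = math.log(p[(1, 1)] * p[(0, 0)] / (p[(1, 0)] * p[(0, 1)]))
    return theta1, theta2, theta12

# Correlated binary spike trains: a shared input makes joint firing more likely.
n = 50000
shared = [1 if random.random() < 0.2 else 0 for _ in range(n)]
x1 = [s | (random.random() < 0.1) for s in shared]
x2 = [s | (random.random() < 0.1) for s in shared]
t1, t2, t12 = ig_measures(x1, x2)
print(f"theta1={t1:.2f} theta2={t2:.2f} theta12={t12:.2f}")
```

The positive θ12 produced by the shared input is the second-order interaction term that, per the abstract, approximates connection strength, while θ1 and θ2 track the single-neuron (external input) terms.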

  6. Predictive Coding of Dynamical Variables in Balanced Spiking Networks

    PubMed Central

    Boerlin, Martin; Machens, Christian K.; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitudes more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated. PMID:24244113
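    The second assumption above, that a neuron spikes only if doing so improves the representation, can be written as a greedy rule: with readout kick Γ per spike, fire when Γ·(x − x̂) > Γ²/2. The sketch below applies a heavily simplified scalar version (two one-directional pools, no leak, deterministic drive) to track an integrated command signal; these simplifications are assumptions for illustration, not the paper's full derivation.

```python
# Scalar predictive-coding integrator with a greedy spike rule.
DT = 0.001
GAMMA = 0.05               # readout kick contributed by one spike
x, xhat = 0.0, 0.0         # true integral and spiking-network estimate
spikes_pos = spikes_neg = 0
err_acc = 0.0

STEPS = 20000
for t in range(STEPS):
    c = 1.0 if t < STEPS // 2 else -1.0   # command signal to integrate
    x += DT * c                           # target dynamics: dx/dt = c
    # A neuron fires only if its kick reduces the squared tracking error,
    # i.e. Gamma * (x - xhat) > Gamma**2 / 2.
    if GAMMA * (x - xhat) > GAMMA * GAMMA / 2.0:     # "+" pool fires
        xhat += GAMMA
        spikes_pos += 1
    elif -GAMMA * (x - xhat) > GAMMA * GAMMA / 2.0:  # "-" pool fires
        xhat -= GAMMA
        spikes_neg += 1
    err_acc += abs(x - xhat)

mean_err = err_acc / STEPS
print(f"mean |x - xhat| = {mean_err:.4f} (rule bounds the error near Gamma/2)")
```

Because a spike is emitted exactly when the tracking error exceeds Γ/2, the readout error stays bounded by roughly half a kick regardless of the signal, which is the sense in which spikes here are informative corrections rather than noise.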

  7. Midbrain dopamine neurons in Parkinson's disease exhibit a dysregulated miRNA and target-gene network.

    PubMed

    Briggs, Christine E; Wang, Yulei; Kong, Benjamin; Woo, Tsung-Ung W; Iyer, Lakshmanan K; Sonntag, Kai C

    2015-08-27

    The degeneration of substantia nigra (SN) dopamine (DA) neurons in sporadic Parkinson's disease (PD) is characterized by disturbed gene expression networks. Micro(mi)RNAs are post-transcriptional regulators of gene expression and we recently provided evidence that these molecules may play a functional role in the pathogenesis of PD. Here, we document a comprehensive analysis of miRNAs in SN DA neurons and PD, including sex differences. Our data show that miRNAs are dysregulated in disease-affected neurons and differentially expressed between male and female samples with a trend of more up-regulated miRNAs in males and more down-regulated miRNAs in females. Unbiased Ingenuity Pathway Analysis (IPA) revealed a network of miRNA/target-gene associations that is consistent with dysfunctional gene and signaling pathways in PD pathology. Our study provides evidence for a general association of miRNAs with the cellular function and identity of SN DA neurons, and with deregulated gene expression networks and signaling pathways related to PD pathogenesis that may be sex-specific. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Effects of inhibitory neurons on the quorum percolation model and dynamical extension with the Brette-Gerstner model

    NASA Astrophysics Data System (ADS)

    Fardet, Tanguy; Bottani, Samuel; Métens, Stéphane; Monceau, Pascal

    2018-06-01

    The Quorum Percolation model (QP) has been designed in the context of neurobiology to describe the initiation of activity bursts occurring in neuronal cultures from the point of view of statistical physics rather than from a dynamical synchronization approach. This paper aims at investigating an extension of the original QP model by taking into account the presence of inhibitory neurons in the cultures (IQP model). The first part of this paper is focused on an equivalence between the presence of inhibitory neurons and a reduction of the network connectivity. By relying on a simple topological argument, we show that the mean activation behavior of networks containing a fraction η of inhibitory neurons can be mapped onto purely excitatory networks with an appropriately modified wiring, provided that η remains in the range usually observed in neuronal cultures, namely η ⪅ 20%. As a striking result, we show that such a mapping makes it possible to predict the evolution of the critical point of the IQP model with the fraction of inhibitory neurons. In the second part, we bridge the gap between the description of bursts in the framework of percolation and the temporal description of neural network activity by showing how dynamical simulations of bursts with an adaptive exponential integrate-and-fire model lead to a mean description of burst activation which is captured by Quorum Percolation.
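    The quorum idea is simple enough to simulate directly: a neuron activates once its net number of active inputs reaches a threshold m. The toy cascade below contrasts a purely excitatory network with one containing a fraction η of inhibitory neurons (counted as −1 inputs). All parameter values (n, k, m, the initial active fraction) are assumptions chosen for illustration; this is not the paper's IQP model or its mapping.

```python
import random

random.seed(5)

def cascade(n=2000, k=20, m=9, eta=0.0, f0=0.3):
    """Quorum-percolation-style cascade: an inactive neuron activates when
    (#active excitatory inputs) - (#active inhibitory inputs) >= m.
    Returns the final fraction of active neurons."""
    inhibitory = [random.random() < eta for _ in range(n)]
    inputs = [[random.randrange(n) for _ in range(k)] for _ in range(n)]
    active = [random.random() < f0 for _ in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if active[i]:
                continue
            drive = sum(-1 if inhibitory[j] else 1
                        for j in inputs[i] if active[j])
            if drive >= m:
                active[i] = True
                changed = True
    return sum(active) / n

frac_exc = cascade(eta=0.0)    # purely excitatory: full ignition
frac_inh = cascade(eta=0.20)   # 20% inhibitory: cascade stalls
print(f"final active fraction, eta=0.00: {frac_exc:.2f}")
print(f"final active fraction, eta=0.20: {frac_inh:.2f}")
```

With these parameters the excitatory network ignites almost completely while the η = 20% network stalls, illustrating qualitatively how inhibition acts like a reduction of effective connectivity and shifts the percolation transition.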

  9. Static and dynamic views of visual cortical organization.

    PubMed

    Casagrande, Vivien A; Xu, Xiangmin; Sáry, Gyula

    2002-01-01

    Without the aid of modern techniques, Cajal speculated that cells in the visual cortex were connected in circuits. From Cajal's time until fairly recently, the flow of information within the cells and circuits of visual cortex has been described as progressing from input to output, from sensation to action. In this chapter we argue that a paradigm shift in our concept of the visual cortical neuron is under way. The most important change in our view concerns the neuron's functional role. Visual cortical neurons do not have static functional signatures but instead function dynamically depending on the ongoing activity of the networks to which they belong. These networks are not merely top-down or bottom-up unidirectional transmission lines, but rather represent machinery that uses recurrent information and is dynamic and highly adaptable. With the advancement of technology for analyzing the conversations of multiple neurons at many levels of the visual system, and with higher-resolution imaging, we predict that the paradigm shift will progress to the point where neurons are no longer viewed as independent processing units but as members of subsets of networks where their role is mapped in space-time coordinates in relationship to the other neuronal members. This view moves us far from Cajal's original views of the neuron. Nevertheless, we believe that understanding the basic morphology and wiring of networks will continue to contribute to our overall understanding of the visual cortex.

  10. Constructive autoassociative neural network for facial recognition.

    PubMed

    Fernandes, Bruno J T; Cavalcanti, George D C; Ren, Tsang I

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.

  11. Functional network inference of the suprachiasmatic nucleus

    PubMed Central

    Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel; St. John, Peter C.; Wang, Thomas J.; Bales, Benjamin B.; Doyle, Francis J.; Herzog, Erik D.; Petzold, Linda R.

    2016-01-01

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure. PMID:27044085

  12. Cortical network architecture for context processing in primate brain

    PubMed Central

    Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka

    2015-01-01

    Context is information linked to a situation that can guide behavior. In the brain, context is encoded by sensory processing and can later be retrieved from memory. How context is communicated within the cortical network in sensory and mnemonic forms is unknown due to the lack of methods for high-resolution, brain-wide neuronal recording and analysis. Here, we report the comprehensive architecture of a cortical network for context processing. Using hemisphere-wide, high-density electrocorticography, we measured large-scale neuronal activity from monkeys observing videos of agents interacting in situations with different contexts. We extracted five context-related network structures including a bottom-up network during encoding and, seconds later, cue-dependent retrieval of the same network with the opposite top-down connectivity. These findings show that context is represented in the cortical network as distributed communication structures with dynamic information flows. This study provides a general methodology for recording and analyzing cortical network neuronal communication during cognition. DOI: http://dx.doi.org/10.7554/eLife.06121.001 PMID:26416139

  13. Impact of adaptation currents on synchronization of coupled exponential integrate-and-fire neurons.

    PubMed

    Ladenbauer, Josef; Augustin, Moritz; Shiau, LieJune; Obermayer, Klaus

    2012-01-01

    The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons as quantified by the phase response curve (PRC) and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs, synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike frequency adaptation as a mechanism for synchronizing low-frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
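    The aEIF model used in this study is standard; a forward-Euler sketch (parameter values illustrative, not those of the paper) shows the two adaptation mechanisms the abstract contrasts: the subthreshold coupling a and the spike-triggered increment b.

```python
import numpy as np

def simulate_aeif(I=500.0, T=500.0, dt=0.05,
                  C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DeltaT=2.0,
                  a=2.0, b=60.0, tau_w=120.0, Vr=-58.0, Vcut=0.0):
    """Forward-Euler integration of the adaptive exponential
    integrate-and-fire (aEIF) neuron. Units: pA, pF, nS, mV, ms.
    Parameter values are illustrative, not those of the paper.
    Returns the list of spike times (ms)."""
    V, w = EL, 0.0
    spikes = []
    for step in range(int(T / dt)):
        # membrane equation with exponential spike-initiation term
        dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT)
              - w + I) / C
        # adaptation current: subthreshold coupling a, decay tau_w
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vcut:           # spike: reset and spike-triggered adaptation
            spikes.append(step * dt)
            V = Vr
            w += b
    return spikes
```

    Raising b (spike-triggered adaptation) thins out spiking and skews the timing of spikes, which is the knob whose effect on the PRC the paper analyzes; input below rheobase produces no spikes at all.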

  14. Impact of Adaptation Currents on Synchronization of Coupled Exponential Integrate-and-Fire Neurons

    PubMed Central

    Ladenbauer, Josef; Augustin, Moritz; Shiau, LieJune; Obermayer, Klaus

    2012-01-01

    The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons as quantified by the phase response curve (PRC) and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs, synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike frequency adaptation as a mechanism for synchronizing low-frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies. PMID:22511861

  15. Extracting functionally feedforward networks from a population of spiking neurons

    PubMed Central

    Vincent, Kathleen; Tauskela, Joseph S.; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABAA receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. 
Conversely, FFCs decreased after application of APV/DNQX, consistent with fading feedforward activation. The observed alterations in FFCs provide experimental support for recent theoretical work linking power-law avalanches to the feedforward organization of functional connections in local neuronal circuits. PMID:23091458
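    The Schur-decomposition idea can be illustrated with a numpy-only proxy: the part of a connectivity matrix's Frobenius norm not accounted for by its eigenvalues lives in the strictly triangular (feedforward-like) part of its Schur form, so the Henrici departure from normality indexes feedforward structure. This is a simplified stand-in for the paper's statistic, not a reimplementation of it.

```python
import numpy as np

def ffc_strength(W):
    """Henrici-style departure from normality as a crude index of
    feedforward structure: ||W||_F**2 minus the sum of squared
    eigenvalue magnitudes equals the squared norm of the strictly
    triangular part of the Schur form of W. Returns a value in [0, 1]:
    ~0 for normal matrices (e.g. symmetric coupling), 1 for a strictly
    feedforward chain."""
    lam = np.linalg.eigvals(W)
    total = np.linalg.norm(W) ** 2
    ff_part = max(total - float(np.sum(np.abs(lam) ** 2)), 0.0)
    return float(np.sqrt(ff_part / total))

# A strictly feedforward 10-neuron chain versus symmetric (normal) coupling
chain = np.diag(np.ones(9), k=1)          # neuron i drives neuron i+1
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 10))
sym = A + A.T                             # symmetric -> normal -> no FF part
```

    As in the simulated-network control described above, randomly rewiring the chain would move its score monotonically away from 1.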

  16. Extracting functionally feedforward networks from a population of spiking neurons.

    PubMed

    Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. 
Conversely, FFCs decreased after application of APV/DNQX, consistent with fading feedforward activation. The observed alterations in FFCs provide experimental support for recent theoretical work linking power-law avalanches to the feedforward organization of functional connections in local neuronal circuits.

  17. Pulse propagation in discrete excitatory networks of integrate-and-fire neurons.

    PubMed

    Badel, Laurent; Tonnelier, Arnaud

    2004-07-01

    We study the propagation of solitary waves in a discrete excitatory network of integrate-and-fire neurons. We show the existence and the stability of a fast wave and a family of slow waves. Fast waves are similar to those already described in continuum networks. Stable slow waves have not been previously reported in purely excitatory networks and their propagation is particular to the discrete nature of the network. The robustness of our results is studied in the presence of noise.

  18. Hybrid discrete-time neural networks.

    PubMed

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
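    A typical low-dimensional map-based neuron of the kind reviewed here is the two-dimensional Rulkov map, with a fast voltage-like variable x and a slow variable y; a minimal iteration sketch follows (parameter values illustrative, and this is a single uncoupled map, without the FTM connection scheme).

```python
import numpy as np

def rulkov(alpha=4.5, sigma=-1.0, mu=0.001, n_steps=5000):
    """Iterate the (chaotic) Rulkov map, a standard two-dimensional
    map-based neuron model:
        x[n+1] = alpha / (1 + x[n]**2) + y[n]   # fast, voltage-like
        y[n+1] = y[n] - mu * (x[n] - sigma)     # slow modulation
    Returns the trajectory of x."""
    x, y = -1.0, -3.0
    xs = np.empty(n_steps)
    for n in range(n_steps):
        # simultaneous update: both equations use the old (x, y)
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
        xs[n] = x
    return xs
```

    The fast map produces spikes (upward excursions of x), while the slow drift of y groups them into bursts; fast threshold modulation would switch the effective input to the fast map according to the firing state of presynaptic neighbours.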

  19. Synaptic dynamics and neuronal network connectivity are reflected in the distribution of times in Up states.

    PubMed

    Dao Duc, Khanh; Parutto, Pierre; Chen, Xiaowei; Epsztein, Jérôme; Konnerth, Arthur; Holcman, David

    2015-01-01

    The dynamics of synaptically coupled neuronal networks can sustain long periods of depolarization, lasting hundreds of milliseconds, such as the Up states recorded during sleep or anesthesia. Yet the underlying mechanism driving these periods remains unclear. We show here, within a mean-field model, that the residence time of the neuronal membrane potential in cortical Up states does not follow a Poissonian law but presents several peaks. Furthermore, the present modeling approach allows some information about the neuronal network connectivity to be extracted from the time-distribution histogram. Based on a synaptic-depression model, we find that these peaks, which can be observed in histograms of patch-clamp recordings, are not artifacts of electrophysiological measurements but rather an inherent property of the network dynamics. Analysis of the equations reveals a stable focus located close to the unstable limit cycle, delimiting a region that defines the Up state. The model further shows that the peaks observed in the Up state time distribution are due to winding around the focus before escaping from the basin of attraction. Finally, using in vivo recordings of intracellular membrane potential, we recover from the peak distribution some information about the network connectivity. We conclude that it is possible to recover the network connectivity from the distribution of times that the neuronal membrane voltage spends in Up states.

  20. Efficient Transmission of Subthreshold Signals in Complex Networks of Spiking Neurons

    PubMed Central

    Torres, Joaquin J.; Elices, Irene; Marro, J.

    2015-01-01

    We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances (which naturally balances the network with excitatory and inhibitory synapses) and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system: a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus was applied to each neuron while the level of noise in the medium was increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We show that this intriguing phenomenon is quite robust, as it occurs in several situations, including different types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in realistic scenarios, including spiking neurons, short-term synaptic plasticity, and complex network topologies, makes it very likely that it also occurs in actual neural systems, as recent psychophysical experiments suggest. PMID:25799449
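    The noise-aided transmission of a subthreshold signal can be illustrated with a toy threshold unit (a deliberately crude stand-in for the spiking networks studied in this paper, with all parameters illustrative): a periodic input of amplitude below threshold alone never fires the unit, but added noise produces threshold crossings that carry the signal.

```python
import numpy as np

def threshold_crossings(noise_sigma, A=0.5, theta=1.0, T=10000, seed=0):
    """Toy illustration of noise-aided transmission of a subthreshold
    signal: a sinusoid of amplitude A < theta never crosses the
    threshold on its own; Gaussian noise of standard deviation
    noise_sigma makes crossings possible, preferentially near the
    signal peaks. Returns the number of upward threshold crossings."""
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    x = A * np.sin(2 * np.pi * t / 100.0) + noise_sigma * rng.standard_normal(T)
    above = x > theta
    # count transitions from below threshold to above threshold
    return int(np.sum(~above[:-1] & above[1:]))
```

    With zero noise there are no crossings at all; moderate noise yields many, clustered at the signal peaks, which is the qualitative mechanism behind the enhanced transmission at intermediate stochasticity described above.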

  1. A Markov model for the temporal dynamics of balanced random networks of finite size

    PubMed Central

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a previously unconsidered property of balanced random networks with fixed in-degree, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors for analyzing the joint dynamics of multiple interacting networks. PMID:25520644
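    A toy version of the two-state Markov picture can be simulated directly (rates and coupling here are illustrative, not the nonlinear dependencies fitted in the paper): each neuron is either active or refractory, a refractory neuron activates with a probability that grows with the current active fraction (recurrent excitation), and a spike corresponds to the active-to-refractory transition.

```python
import numpy as np

def two_state_population(N=1000, T=2000, w=0.2, r_base=0.05,
                         r_off=0.2, seed=0):
    """Toy two-state Markov population. A refractory neuron becomes
    active with probability p_on depending on the active fraction a
    (state-dependent transition rate); an active neuron 'spikes',
    i.e. returns to the refractory state, with probability r_off.
    Returns the active-fraction time series."""
    rng = np.random.default_rng(seed)
    active = np.zeros(N, dtype=bool)
    active[: N // 10] = True                     # small initial seed
    frac = np.empty(T)
    for t in range(T):
        a = active.mean()
        p_on = 1.0 - np.exp(-(r_base + w * a))   # input-dependent activation
        switch_on = (~active) & (rng.random(N) < p_on)
        switch_off = active & (rng.random(N) < r_off)
        active = (active | switch_on) & ~switch_off
        frac[t] = active.mean()
    return frac
```

    Because the activation probability depends on the population state, the fluctuations of the active fraction around its fixed point are themselves state-dependent, which is the feature that rules out additive noise of fixed amplitude in the mean-field description above.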

  2. Spatiotemporal intracellular dynamics of neurotrophin and its receptors. Implications for neurotrophin signaling and neuronal function.

    PubMed

    Bronfman, F C; Lazo, O M; Flores, C; Escudero, C A

    2014-01-01

    Neurons possess a polarized morphology specialized to contribute to neuronal networks, and this morphology imposes an important challenge for neuronal signaling and communication. The physiology of the network is regulated by neurotrophic factors that are secreted in an activity-dependent manner modulating neuronal connectivity. Neurotrophins are a well-known family of neurotrophic factors that, together with their cognate receptors, the Trks and the p75 neurotrophin receptor, regulate neuronal plasticity and survival and determine the neuronal phenotype in healthy and regenerating neurons. It is now becoming clear that neurotrophin signaling and vesicular transport are coordinated to modify neuronal function, because disturbances of vesicular transport mechanisms lead to disturbed neurotrophin signaling and to diseases of the nervous system. This chapter summarizes our current understanding of how the regulated secretion of neurotrophin, the distribution of neurotrophin receptors in different locations of neurons, and the intracellular transport of neurotrophin-induced signaling in distal processes are achieved to allow coordinated neurotrophin signaling in the cell body and axons.

  3. GABA and Gap Junctions in the Development of Synchronized Activity in Human Pluripotent Stem Cell-Derived Neural Networks

    PubMed Central

    Mäkinen, Meeri Eeva-Liisa; Ylä-Outinen, Laura; Narkilahti, Susanna

    2018-01-01

    The electrical activity of the brain arises from single neurons communicating with each other. However, how single neurons interact during early development to give rise to neural network activity remains poorly understood. We studied the emergence of synchronous neural activity in human pluripotent stem cell (hPSC)-derived neural networks simultaneously at the single-neuron and network levels. The contribution of gamma-aminobutyric acid (GABA) and gap junctions to the development of synchronous activity in hPSC-derived neural networks was studied with a GABA agonist and antagonist and by blocking gap junctional communication, respectively. We characterized the dynamics of network-wide synchrony in hPSC-derived neural networks with high spatial resolution (calcium imaging) and high temporal resolution (microelectrode array, MEA). We found that the emergence of synchrony correlates with a decrease in very strong GABA excitation. However, the synchronous network was found to consist of a heterogeneous mixture of synchronously active cells with variable responses to GABA, GABA agonists and gap junction blockers. Furthermore, we show how single-cell distributions give rise to the network effect of GABA, GABA agonists and gap junction blockers. Finally, based on our observations, we suggest that the earliest form of synchronous neuronal activity depends on gap junctions and a decrease in GABA-induced depolarization but not on GABAA-mediated signaling. PMID:29559893

  4. Neuronal synchrony: Peculiarity and generality

    PubMed Central

    Nowotny, Thomas; Huerta, Ramon; Rabinovich, Mikhail I.

    2008-01-01

    Synchronization in neuronal systems is a new and intriguing application of dynamical systems theory. Why are neuronal systems different as a subject for synchronization? (1) Neurons in themselves are multidimensional nonlinear systems that are able to exhibit a wide variety of different activity patterns. Their “dynamical repertoire” includes regular or chaotic spiking, regular or chaotic bursting, multistability, and complex transient regimes. (2) Usually, neuronal oscillations are the result of the cooperative activity of many synaptically connected neurons (a neuronal circuit). Thus, it is necessary to consider synchronization between different neuronal circuits as well. (3) The synapses that implement the coupling between neurons are also dynamical elements and their intrinsic dynamics influences the process of synchronization or entrainment significantly. In this review we will focus on four new problems: (i) the synchronization in minimal neuronal networks with plastic synapses (synchronization with activity dependent coupling), (ii) synchronization of bursts that are generated by a group of nonsymmetrically coupled inhibitory neurons (heteroclinic synchronization), (iii) the coordination of activities of two coupled neuronal networks (partial synchronization of small composite structures), and (iv) coarse grained synchronization in larger systems (synchronization on a mesoscopic scale). PMID:19045493

  5. Phase synchronization motion and neural coding in dynamic transmission of neural information.

    PubMed

    Wang, Rubin; Zhang, Zhikang; Qu, Jingyi; Cao, Jianting

    2011-07-01

    In order to explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, a neural network model consisting of three neuronal populations is proposed in this paper using the theory of stochastic phase dynamics. Based on the model established, the neural phase synchronization motion and neural coding under spontaneous activity and stimulation are examined, for the case of varying network structure. Our analysis shows that, under the condition of spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participating in neural firing within the neuronal populations. The result of numerical simulation supports the existence of sparse coding within the brain, and verifies the crucial importance of the magnitudes of the coupling coefficients in neural information processing as well as the completely different information processing capability of neural information transmission in serial and parallel couplings. The results also show that, under external stimulation, the larger the number of neurons in a neuronal population, the more the stimulation influences the phase synchronization motion and neural coding evolution in other neuronal populations. We verify numerically the experimental result in neurobiology that the reduction of the coupling coefficient between neuronal populations implies the enhancement of the lateral inhibition function in neural networks, with the enhancement equivalent to depressing the neuronal excitability threshold. Thus, the neuronal populations tend to have a stronger reaction under the same stimulation, and more neurons get excited, leading to more neurons participating in neural coding and phase synchronization motion.

  6. Lactate rescues neuronal sodium homeostasis during impaired energy metabolism.

    PubMed

    Karus, Claudia; Ziemens, Daniel; Rose, Christine R

    2015-01-01

    Recently, we established that recurrent activity evokes network sodium oscillations in neurons and astrocytes in hippocampal tissue slices. Interestingly, metabolic integrity of astrocytes was essential for the neurons' capacity to maintain low sodium and to recover from sodium loads, indicating an intimate metabolic coupling between the 2 cell types. Here, we studied if lactate can support neuronal sodium homeostasis during impaired energy metabolism by analyzing whether glucose removal, pharmacological inhibition of glycolysis and/or addition of lactate affect cellular sodium regulation. Furthermore, we studied the effect of lactate on sodium regulation during recurrent network activity and upon inhibition of the glial Krebs cycle by sodium-fluoroacetate. Our results indicate that lactate is preferentially used by neurons. They demonstrate that lactate supports neuronal sodium homeostasis and rescues the effects of glial poisoning by sodium-fluoroacetate. Altogether, they are in line with the proposed transfer of lactate from astrocytes to neurons, the so-called astrocyte-neuron-lactate shuttle.

  7. Lactate rescues neuronal sodium homeostasis during impaired energy metabolism

    PubMed Central

    Karus, Claudia; Ziemens, Daniel; Rose, Christine R

    2015-01-01

    Recently, we established that recurrent activity evokes network sodium oscillations in neurons and astrocytes in hippocampal tissue slices. Interestingly, metabolic integrity of astrocytes was essential for the neurons' capacity to maintain low sodium and to recover from sodium loads, indicating an intimate metabolic coupling between the 2 cell types. Here, we studied if lactate can support neuronal sodium homeostasis during impaired energy metabolism by analyzing whether glucose removal, pharmacological inhibition of glycolysis and/or addition of lactate affect cellular sodium regulation. Furthermore, we studied the effect of lactate on sodium regulation during recurrent network activity and upon inhibition of the glial Krebs cycle by sodium-fluoroacetate. Our results indicate that lactate is preferentially used by neurons. They demonstrate that lactate supports neuronal sodium homeostasis and rescues the effects of glial poisoning by sodium-fluoroacetate. Altogether, they are in line with the proposed transfer of lactate from astrocytes to neurons, the so-called astrocyte-neuron-lactate shuttle. PMID:26039160

  8. Clustering promotes switching dynamics in networks of noisy neurons

    NASA Astrophysics Data System (ADS)

    Franović, Igor; Klinshov, Vladimir

    2018-02-01

    Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.
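
    A caricature of the switching phenomenon can be captured with a one-dimensional double-well system driven by noise. The sketch below is a toy stand-in for the clustered mean-field model described above; the drift term, noise level, and seed are illustrative assumptions, not the paper's equations.

```python
# Toy sketch: overdamped particle in a double-well potential driven by noise,
# a minimal caricature of switching between elevated and quiescent episodes.
# The dynamics dx = (x - x^3) dt + sigma dW is an assumption for illustration,
# not the mean-field system derived in the paper.
import math
import random

def simulate_switching(T=2000.0, dt=0.01, sigma=0.5, seed=1):
    rng = random.Random(seed)
    x, xs = 1.0, []          # start in the right-hand well (x near +1)
    for _ in range(int(T / dt)):
        # Euler-Maruyama step: deterministic drift plus Gaussian noise
        x += dt * (x - x ** 3) + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = simulate_switching()
# With moderate noise the trajectory visits both wells (x near +1 and -1)
```

    The bistability of the drift plays the role that clustering plays in the paper: it creates two coexisting macroscopic states between which noise induces switching.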

  9. Effect of acute stretch injury on action potential and network activity of rat neocortical neurons in culture.

    PubMed

    Magou, George C; Pfister, Bryan J; Berlin, Joshua R

    2015-10-22

    The basis for acute seizures following traumatic brain injury (TBI) remains unclear. Animal models of TBI have revealed acute hyperexcitability in cortical neurons that could underlie seizure activity, but studying the initiating events that cause hyperexcitability is difficult in these models. In vitro models of stretch injury with cultured cortical neurons, a surrogate for TBI, allow facile investigation of cellular changes after injury, but they have only demonstrated post-injury hypoexcitability. The goal of this study was to determine if neuronal hyperexcitability could be triggered by in vitro stretch injury. Controlled uniaxial stretch injury was delivered to a spatially delimited region of a spontaneously active network of cultured rat cortical neurons, yielding a region of stretch-injured neurons and adjacent regions of non-stretched neurons that did not directly experience stretch injury. Spontaneous electrical activity was measured in non-stretched and stretch-injured neurons, and in control neuronal networks not subjected to stretch injury. Non-stretched neurons in stretch-injured cultures displayed a three-fold increase in action potential firing rate and bursting activity 30-60 min post-injury. Stretch-injured neurons, however, displayed dramatically lower rates of action potential firing and bursting. These results demonstrate that acute hyperexcitability can be observed in non-stretched neurons located in regions adjacent to the site of stretch injury, consistent with reports that seizure activity can arise from regions surrounding the site of localized brain injury. Thus, this in vitro procedure for localized neuronal stretch injury may provide a model to study the earliest cellular changes in neuronal function associated with acute post-traumatic seizures. Copyright © 2015. Published by Elsevier B.V.

  10. To Break or to Brake Neuronal Network Accelerated by Ammonium Ions?

    PubMed Central

    Dynnik, Vladimir V.; Kononov, Alexey V.; Sergeev, Alexander I.; Teplov, Iliya Y.; Tankanag, Arina V.; Zinchenko, Valery P.

    2015-01-01

    Purpose The aim of the present study was to investigate the effects of ammonium ions on in vitro neuronal network activity and to search for alternative methods of preventing acute ammonia neurotoxicity. Methods Rat hippocampal neuron and astrocyte co-cultures in vitro, fluorescent microscopy and perforated patch clamp were used to monitor the changes in intracellular Ca2+ and membrane potential produced by ammonium ions and various modulators in the cells implicated in neural networks. Results Low concentrations of NH4Cl (0.1–4 mM) produce short temporal effects on network activity. Application of 5–8 mM NH4Cl: invariably transforms diverse network firing regimens into identical burst patterns, characterized by substantial neuronal membrane depolarization at the plateau phase of the potential and high-amplitude Ca2+ oscillations; raises the frequency of oscillations and the period-averaged Ca2+ level in all cells implicated in the network; results in the appearance of a group of «run out» cells with high intracellular Ca2+ and steadily diminishing oscillation amplitudes; and increases astrocyte Ca2+ signalling, characterized by the appearance of groups of cells with an increased intracellular Ca2+ level and/or chaotic Ca2+ oscillations. The accelerated network activity may be suppressed by blockade of NMDA or AMPA/kainate receptors or by overactivation of AMPA/kainate receptors. Ammonia still activates neuronal firing in the presence of the GABA(A) receptor antagonist bicuculline, indicating that the «disinhibition phenomenon» is not implicated in the mechanisms of network acceleration. Network activity may also be slowed down by glycine, agonists of metabotropic inhibitory receptors, betaine, L-carnitine, L-arginine, etc. Conclusions The results obtained demonstrate that ammonium ions accelerate neuronal network firing via ionotropic glutamate receptors while the activities of a group of inhibitory ionotropic and metabotropic receptors are preserved. This suggests that ammonia neurotoxicity might be prevented by the activation of various inhibitory receptors (i.e., by the reinforcement of negative feedback control), instead of the application of various enzyme inhibitors and receptor antagonists (breaking of neural, metabolic and signaling systems). PMID:26217943

  11. NeuroCa: integrated framework for systematic analysis of spatiotemporal neuronal activity patterns from large-scale optical recording data

    PubMed Central

    Jang, Min Jee; Nam, Yoonkey

    2015-01-01

    Abstract. Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973
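
    As a rough illustration of the kind of per-cell processing such a pipeline performs, the sketch below detects calcium events in a single fluorescence trace by thresholding ΔF/F0 rising edges. The baseline estimate, threshold, and synthetic trace are illustrative assumptions, not NeuroCa's actual algorithm.

```python
# Hedged sketch of per-neuron calcium event detection: compute dF/F0 against a
# baseline and report the frames where the trace first rises above a threshold.
# The crude minimum-based baseline and the threshold are illustrative choices.
def detect_calcium_events(trace, threshold=0.5):
    f0 = min(trace)                       # crude baseline fluorescence estimate
    dff = [(f - f0) / f0 for f in trace]  # dF/F0 for every frame
    events = []
    above = False
    for i, x in enumerate(dff):
        if x >= threshold and not above:
            events.append(i)              # rising edge: onset of one event
        above = x >= threshold
    return events

# Synthetic trace: baseline near 100 with two transients
trace = [100, 101, 180, 160, 120, 100, 100, 175, 140, 100]
print(detect_calcium_events(trace))  # prints [2, 7]
```

    A full pipeline such as NeuroCa would additionally segment the image into cells before extracting one such trace per neuron.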

  12. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  13. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks

    PubMed Central

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become increasingly urgent. This is essentially due to the design of innovative neuroprostheses in which biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network counting up to 1,440 neurons, in real-time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments. PMID:28293163
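
    The Izhikevich model at the core of this design is compact enough to sketch in software. The following is a minimal floating-point reference implementation, not the paper's FPGA design; the parameters a, b, c, d are Izhikevich's standard regular-spiking values, and the input current and step size are illustrative assumptions.

```python
# Minimal Izhikevich neuron (regular-spiking parameters), forward Euler.
# v' = 0.04 v^2 + 5 v + 140 - u + I ;  u' = a (b v - u) ;
# on spike (v >= 30 mV): record the time, set v = c and u = u + d.
def izhikevich(T_ms=1000.0, dt=0.5, I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u = -65.0, b * -65.0   # membrane potential (mV) and recovery variable
    spikes = []
    t = 0.0
    while t < T_ms:
        if v >= 30.0:         # spike threshold reached: record and reset
            spikes.append(t)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        t += dt
    return spikes

spikes = izhikevich()  # tonic firing under constant input current
```

    The model's appeal for FPGA work is visible here: two multiply-accumulate updates and one comparison per neuron per time step.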

  14. Dynamic neural networking as a basis for plasticity in the control of heart rate.

    PubMed

    Kember, G; Armour, J A; Zamir, M

    2013-01-21

    A model is proposed in which the relationship between individual neurons within a neural network changes dynamically, providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for an interplay among the three populations of neurons that can alter the order of control of heart rate among them. This interplay among the three levels of control allows different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications, because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD) which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear, system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite size networks, actually show that the statistical measures as obtained from PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
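
    The FitzHugh-Nagumo model used in the numerical experiments has a standard two-variable form. The sketch below integrates a single deterministic neuron with Euler steps; the parameter values and input current are common textbook choices assumed for illustration, without the noise and coupling terms of the full network.

```python
# Single FitzHugh-Nagumo neuron, forward Euler:
# v' = v - v^3/3 - w + I ;  w' = eps * (v + a - b w).
# Textbook parameters; the paper's networks add noise and synaptic coupling
# on top of this kernel.
def fitzhugh_nagumo(T=200.0, dt=0.01, I=0.5, a=0.7, b=0.8, eps=0.08):
    v, w = -1.0, 1.0
    vs = []
    for _ in range(int(T / dt)):
        dv = v - v ** 3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs.append(v)
    return vs

vs = fitzhugh_nagumo()  # with I = 0.5 the neuron sits on a limit cycle
```

    At this input the fixed point is unstable, so the trajectory settles onto a relaxation oscillation, which is the regime the mean-field analysis averages over.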

  16. Adaptive output feedback NN control of a class of discrete-time MIMO nonlinear systems with unknown control directions.

    PubMed

    Li, Yanan; Yang, Chenguang; Ge, Shuzhi Sam; Lee, Tong Heng

    2011-04-01

    In this paper, adaptive neural network (NN) control is investigated for a class of block triangular multi-input multi-output nonlinear discrete-time systems with each subsystem in pure-feedback form with unknown control directions. These systems have couplings in every equation of each subsystem, and different subsystems may have different orders. To avoid the noncausal problem in the control design, the system is transformed into a predictor form by rigorous derivation. By exploring the properties of the block triangular form, implicit controls are developed for each subsystem such that the couplings of inputs and states among subsystems are completely decoupled. The radial basis function NN is employed to approximate the unknown control. Each subsystem achieves semiglobal uniformly ultimately bounded stability with the proposed control, and simulation results are presented to demonstrate its efficiency.

  17. Energy-efficient neural information processing in individual neurons and neuronal networks.

    PubMed

    Yu, Lianchun; Yu, Yuguo

    2017-11-01

    Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. The factors include ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.

  18. Time-dependent Increase in the Network Response to the Stimulation of Neuronal Cell Cultures on Micro-electrode Arrays.

    PubMed

    Gertz, Monica L; Baker, Zachary; Jose, Sharon; Peixoto, Nathalia

    2017-05-29

    Micro-electrode arrays (MEAs) can be used to investigate drug toxicity, design paradigms for next-generation personalized medicine, and study network dynamics in neuronal cultures. In contrast with more traditional methods, such as patch-clamping, which can only record activity from a single cell, MEAs can record simultaneously from multiple sites in a network, without requiring the arduous task of placing each electrode individually. Moreover, numerous control and stimulation configurations can be easily applied within the same experimental setup, allowing for a broad range of dynamics to be explored. One of the key dynamics of interest in these in vitro studies has been the extent to which cultured networks display properties indicative of learning. Mouse neuronal cells cultured on MEAs display an increase in response following training induced by electrical stimulation. This protocol demonstrates how to culture neuronal cells on MEAs; successfully record from over 95% of the plated dishes; establish a protocol to train the networks to respond to patterns of stimulation; and sort, plot, and interpret the results from such experiments. The use of a proprietary system for stimulating and recording neuronal cultures is demonstrated. Software packages are also used to sort neuronal units. A custom-designed graphical user interface is used to visualize post-stimulus time histograms, inter-burst intervals, and burst duration, as well as to compare the cellular response to stimulation before and after a training protocol. Finally, representative results and future directions of this research effort are discussed.
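
    A post-stimulus time histogram of the kind this protocol plots can be sketched simply: bin every spike by its latency relative to each stimulus. The window and bin width below are illustrative choices, not the protocol's settings.

```python
# Hedged sketch: build a post-stimulus time histogram (PSTH) from spike and
# stimulus timestamps (seconds). A spike is counted in the bin matching its
# latency after any stimulus that falls within the analysis window.
def psth(spike_times, stim_times, window=0.5, bin_width=0.1):
    n_bins = int(window / bin_width)
    counts = [0] * n_bins
    for s in spike_times:
        for t in stim_times:
            latency = s - t
            if 0.0 <= latency < window:
                counts[int(latency / bin_width)] += 1
    return counts

stims = [1.0, 2.0]                       # two stimulation times
spikes = [1.05, 1.12, 1.31, 2.04, 2.45]  # recorded spike times
print(psth(spikes, stims))               # prints [2, 1, 0, 1, 1]
```

    Comparing such histograms before and after a training protocol is one way to quantify the time-dependent increase in network response described above.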

  19. Spiking neuron network Helmholtz machine.

    PubMed

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.

  20. Spiking neuron network Helmholtz machine

    PubMed Central

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. PMID:25954191
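
    The wake-sleep algorithm's reliance on a purely local delta rule can be illustrated with a deliberately tiny Helmholtz machine: one binary latent unit and one binary visible unit, trained on data where the visible unit is always on. The architecture, learning rate, and step count below are illustrative assumptions, not the paper's spiking implementation.

```python
# Toy one-latent-unit Helmholtz machine trained with the wake-sleep algorithm.
# Every update in both phases is the local delta rule: lr * (target - prediction) * input.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_wake_sleep(data, lr=0.1, steps=2000, seed=0):
    rng = random.Random(seed)
    b_g = 0.0            # generative bias: p(h=1) = sigmoid(b_g)
    w_g, c_g = 0.0, 0.0  # generative weights: p(v=1|h) = sigmoid(w_g*h + c_g)
    w_r, c_r = 0.0, 0.0  # recognition weights: q(h=1|v) = sigmoid(w_r*v + c_r)
    for _ in range(steps):
        # Wake phase: recognize a data sample, train the generative weights
        v = data[rng.randrange(len(data))]
        h = 1.0 if rng.random() < sigmoid(w_r * v + c_r) else 0.0
        b_g += lr * (h - sigmoid(b_g))           # delta rule on p(h)
        p_v = sigmoid(w_g * h + c_g)
        w_g += lr * (v - p_v) * h                # delta rule on p(v|h)
        c_g += lr * (v - p_v)
        # Sleep phase: dream a sample, train the recognition weights
        h_d = 1.0 if rng.random() < sigmoid(b_g) else 0.0
        v_d = 1.0 if rng.random() < sigmoid(w_g * h_d + c_g) else 0.0
        q_h = sigmoid(w_r * v_d + c_r)
        w_r += lr * (h_d - q_h) * v_d            # delta rule on q(h|v)
        c_r += lr * (h_d - q_h)
    return b_g, w_g, c_g

b_g, w_g, c_g = train_wake_sleep([1.0] * 8)  # training data: v is always 1
ph1 = sigmoid(b_g)
pv1 = ph1 * sigmoid(w_g + c_g) + (1 - ph1) * sigmoid(c_g)  # generative p(v=1)
```

    After training, the generative model assigns high probability to the data it was shown, even though no update ever used more than locally available quantities, which is what makes the rule plausible for synaptic implementation.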

  1. Changes in neural network homeostasis trigger neuropsychiatric symptoms.

    PubMed

    Winkelmann, Aline; Maggio, Nicola; Eller, Joanna; Caliskan, Gürsel; Semtner, Marcus; Häussler, Ute; Jüttner, René; Dugladze, Tamar; Smolinsky, Birthe; Kowalczyk, Sarah; Chronowska, Ewa; Schwarz, Günter; Rathjen, Fritz G; Rechavi, Gideon; Haas, Carola A; Kulik, Akos; Gloveli, Tengis; Heinemann, Uwe; Meier, Jochen C

    2014-02-01

    The mechanisms that regulate the strength of synaptic transmission and intrinsic neuronal excitability are well characterized; however, the mechanisms that promote disease-causing neural network dysfunction are poorly defined. We generated mice with targeted neuron type-specific expression of a gain-of-function variant of the neurotransmitter receptor for glycine (GlyR) that is found in hippocampectomies from patients with temporal lobe epilepsy. In this mouse model, targeted expression of gain-of-function GlyR in terminals of glutamatergic cells or in parvalbumin-positive interneurons persistently altered neural network excitability. The increased network excitability associated with gain-of-function GlyR expression in glutamatergic neurons resulted in recurrent epileptiform discharge, which provoked cognitive dysfunction and memory deficits without affecting bidirectional synaptic plasticity. In contrast, decreased network excitability due to gain-of-function GlyR expression in parvalbumin-positive interneurons resulted in an anxiety phenotype, but did not affect cognitive performance or discriminative associative memory. Our animal model unveils neuron type-specific effects on cognition, formation of discriminative associative memory, and emotional behavior in vivo. Furthermore, our data identify a presynaptic disease-causing molecular mechanism that impairs homeostatic regulation of neural network excitability and triggers neuropsychiatric symptoms.

  2. Spatiotemporal alterations of cortical network activity by selective loss of NOS-expressing interneurons.

    PubMed

    Shlosberg, Dan; Buskila, Yossi; Abu-Ghanem, Yasmin; Amitai, Yael

    2012-01-01

    Deciphering the role of GABAergic neurons in large neuronal networks such as the neocortex forms a particularly complex task as they comprise a highly diverse population. The neuronal isoform of the enzyme nitric oxide synthase (nNOS) is expressed in the neocortex by specific subsets of GABAergic neurons. These neurons can be identified in live brain slices by the nitric oxide (NO) fluorescent indicator diaminofluorescein-2 diacetate (DAF-2DA). However, this indicator was found to be highly toxic to the stained neurons. We used this feature to induce acute phototoxic damage to NO-producing neurons in cortical slices, and measured subsequent alterations in parameters of cellular and network activity. Neocortical slices were briefly incubated in DAF-2DA and then illuminated through the 4× objective. Histochemistry for NADPH-diaphorase (NADPH-d), a marker for nNOS activity, revealed elimination of staining in the illuminated areas following treatment. Whole cell recordings from several neuronal types before, during, and after illumination confirmed the selective damage to non-fast-spiking (FS) interneurons. Treated slices displayed mild disinhibition. The reversal potential of compound synaptic events on pyramidal neurons became more positive, and their decay time constant was elongated, substantiating the removal of an inhibitory conductance. The horizontal decay of local field potentials (LFPs) was significantly reduced at distances of 300-400 μm from the stimulation, but not when inhibition was non-selectively weakened with the GABA(A) blocker picrotoxin. Finally, whereas the depression of LFPs along short trains of 40 Hz stimuli was linearly reduced with distance or initial amplitude in control slices, this ordered relationship was disrupted in DAF-treated slices. These results reveal that NO-producing interneurons in the neocortex convey lateral inhibition to neighboring columns, and shape the spatiotemporal dynamics of the network's activity.

  3. Four-Element Composite Triangular Dielectric Resonator Antenna Using Li2O-1.94MgO-0.02Al2O3-P2O5 Ceramic for Wideband Applications

    NASA Astrophysics Data System (ADS)

    Kumari, Preeti; Tripathi, Pankaj; Sahu, B.; Singh, S. P.; Kumar, Devendra

    2018-05-01

    A simulation and fabrication study of a coaxial probe-fed four-element composite triangular dielectric resonator antenna (TDRA) using low-loss Li2O-1.94MgO-0.02Al2O3-P2O5 (LMAP) ceramic and Teflon was carried out, and the ceramic was synthesized using a solid-state sintering route. The phase, microstructure and microwave dielectric properties of LMAP were investigated using an x-ray diffraction pattern, scanning electron microscopy and a network analyzer. A coaxial probe-fed four-element composite TDRA was designed and fabricated using LMAP as one section of each composite element of the proposed antenna. Each triangular element of the proposed dielectric resonator antenna (DRA) consists of two sections of different dielectric constant materials. The inner triangular section touching the coaxial probe at one of its corners is made of the LMAP ceramic (ɛr = 6.2), while the outer section is made of Teflon (ɛr = 2.1). The four triangular DRA elements are excited by a centrally located 50-Ω coaxial probe. A parametric study of the proposed antenna was performed through simulation using Ansys High Frequency Structure Simulator software by varying the dimensions and dielectric constants of both sections of each triangular element of the TDRA to optimize the results for a wideband antenna. A simulated resonant frequency of 9.30 GHz with a percentage bandwidth of 61.65% is obtained within the antenna's operating frequency range of 7.82-14.8 GHz. Monopole-like radiation patterns with low cross-polarization levels and a peak gain of 5.63 dB are obtained for the proposed antenna through simulation. An antenna prototype with the optimized dimensions has also been fabricated. An experimental resonant frequency of 9.10 GHz with a percentage bandwidth of 66.09% is obtained within its operating frequency range of 7.70-15.30 GHz. The simulation results for the proposed antenna are found to be in close agreement with the measured data. The proposed antenna can potentially be used in broadcast base stations, radar and satellite communications.

  4. Forecasting PM10 in Algiers: efficacy of multilayer perceptron networks.

    PubMed

    Abderrahim, Hamza; Chellali, Mohammed Reda; Hamou, Ahmed

    2016-01-01

    Air quality forecasting has acquired high importance in atmospheric pollution research due to pollution's negative impacts on the environment and human health. The artificial neural network is one of the most common soft computing methods that can be applied to such complex problems. In this paper, we used a multilayer perceptron neural network to forecast the daily averaged concentration of respirable suspended particulates with an aerodynamic diameter of not more than 10 μm (PM10) in Algiers, Algeria. The data for training and testing the network are based on samples collected from 2002 to 2006 by the SAMASAFIA network center at the El Hamma station. The meteorological data, air temperature, relative humidity, and wind speed, are used as the network's input parameters in forming the model. The training patterns used correspond to 41 days of data. The performance of the developed models was evaluated on the basis of the index of agreement and other statistical parameters. The overall performance of the model with 15 neurons was better than the ones with 5 and 10 neurons: a multilayer network with as few as one hidden layer and 15 neurons gave markedly more reasonable results than networks with 5 or 10 neurons. Finally, an error of around 9% was reached.
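
    As a rough sketch of the kind of model described above, the following trains a one-hidden-layer perceptron by plain gradient descent on synthetic data standing in for the three meteorological inputs. The network size, learning rate, and toy target function are assumptions for illustration, not the paper's configuration or the SAMASAFIA data.

```python
# Minimal one-hidden-layer perceptron for regression, trained with plain
# gradient descent (tanh hidden units, linear output, squared-error loss).
import math
import random

def train_mlp(data, hidden=5, lr=0.05, epochs=500, seed=0):
    rng = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        return h, sum(wi * hi for wi, hi in zip(w2, h)) + b2

    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, y in data:
            h, out = forward(x)
            err = out - y
            total += err * err
            # Backpropagation: output layer first, then hidden layer
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
            b2 -= lr * err
        losses.append(total / len(data))
    return losses

# Toy "meteorology -> concentration" data: y is a smooth function of 3 inputs
data = [([t / 10.0, h / 10.0, w / 10.0],
         0.3 * t / 10.0 + 0.2 * h / 10.0 - 0.1 * w / 10.0)
        for t in range(5) for h in range(5) for w in range(5)]
losses = train_mlp(data)  # mean squared error per epoch, decreasing
```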

  5. Temporal neural networks and transient analysis of complex engineering systems

    NASA Astrophysics Data System (ADS)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
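
    The gamma memory underlying the LOGF neuron can be sketched as a cascade of identical first-order leaky integrators. The discrete-time recursion below shows the idea; the depth K and memory parameter μ are illustrative choices, not the thesis's values.

```python
# Discrete-time gamma memory: K cascaded leaky integrators with parameter mu.
# Stage k at time n: g_k[n] = (1 - mu) * g_k[n-1] + mu * g_{k-1}[n-1],
# with g_0 the input signal. Deeper stages peak later, giving a tapped delay
# line whose effective depth is tunable through mu.
def gamma_memory(inputs, K=3, mu=0.3):
    g = [0.0] * (K + 1)          # g[0] holds the current input
    history = []
    for x in inputs:
        prev = g[:]              # stage values at time n-1
        g[0] = x
        for k in range(1, K + 1):
            g[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
        history.append(g[1:])    # record taps g_1 .. g_K
    return history

# Impulse response: each deeper tap peaks later (longer effective delay)
impulse = [1.0] + [0.0] * 29
hist = gamma_memory(impulse)
```

    This tunable short-term memory is what lets a feedforward network of such neurons model dynamic systems, as the abstract describes.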

  6. Cultured networks of excitatory projection neurons and inhibitory interneurons for studying human cortical neurotoxicity

    PubMed Central

    Xu, Jin-Chong; Fan, Jing; Wang, Xueqing; Eacker, Stephen M.; Kam, Tae-In; Chen, Li; Yin, Xiling; Zhu, Juehua; Chi, Zhikai; Jiang, Haisong; Chen, Rong; Dawson, Ted M.; Dawson, Valina L.

    2017-01-01

    Translating neuroprotective treatments from discovery in cell and animal models to the clinic has proven challenging. To reduce the gap between basic studies of neurotoxicity and neuroprotection and clinically relevant therapies, we developed a human cortical neuron culture system from human embryonic stem cells (ESCs) or induced pluripotent stem cells (iPSCs) that generated both excitatory and inhibitory neuronal networks resembling the composition of the human cortex. This methodology used timed administration of retinoic acid (RA) to FOXG1 neural precursor cells, leading to differentiation of neuronal populations representative of the six cortical layers with both excitatory and inhibitory neuronal networks that were functional and homeostatically stable. In human cortical neuron cultures, excitotoxicity or ischemia due to oxygen and glucose deprivation led to cell death that was dependent on N-methyl-D-aspartate (NMDA) receptors, nitric oxide (NO), and poly(ADP-ribose) polymerase (PARP), a cell death pathway designated parthanatos to distinguish it from apoptosis, necroptosis and other forms of cell death. Neuronal cell death was attenuated by PARP inhibitors that are currently in clinical trials for cancer treatment. This culture system provides a new platform for the study of human cortical neurotoxicity and suggests that PARP inhibitors may be useful for ameliorating excitotoxic and ischemic cell death in human neurons. PMID:27053772

  7. Robust spatial memory maps in flickering neuronal networks: a topological model

    NASA Astrophysics Data System (ADS)

    Dabaghian, Yuri; Babichev, Andrey; Memoli, Facundo; Chowdhury, Samir; Rice University Collaboration; Ohio State University Collaboration

    It is widely accepted that the hippocampal place cells provide a substrate of the neuronal representation of the environment--the ``cognitive map''. However, the hippocampal network, like any other network in the brain, is transient: thousands of hippocampal neurons die every day, and the connections formed by these cells constantly change due to various forms of synaptic plasticity. What then explains the remarkable reliability of our spatial memories? We propose a computational approach to answering this question based on two insights. First, we propose that the hippocampal cognitive map is fundamentally topological, and hence amenable to analysis by topological methods. We then apply several novel methods from homology theory to understand how dynamic connections between cells influence the speed and reliability of spatial learning. We simulate the rat's exploratory movements through different environments and study how topological invariants of these environments arise in a network of simulated neurons with ``flickering'' connectivity. We find that despite the transient connectivity, the network of place cells produces a stable representation of the topology of the environment.
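
    The idea that topological invariants can survive flickering connectivity can be illustrated with the simplest such invariant, Betti-0 (the number of connected components). The sketch below is not the authors' homology-theory pipeline; it is a toy model in which a ring of place fields with redundant links usually stays in one piece even when individual links drop out at random.

```python
import random

def betti0(n, edges):
    """Number of connected components (Betti-0) of a graph on n
    vertices, computed with a union-find structure."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(x) for x in range(n)})

def flickering_ring(n=50, p_off=0.2, trials=100, seed=1):
    """Toy model: a ring of n place fields whose links 'flicker'.
    Each trial, every nearest- and next-nearest-neighbor link is
    independently down with probability p_off; we record how often
    the map still forms a single connected component."""
    rng = random.Random(seed)
    intact = 0
    for _ in range(trials):
        edges = [(i, (i + 1) % n) for i in range(n) if rng.random() > p_off]
        edges += [(i, (i + 2) % n) for i in range(n) if rng.random() > p_off]
        if betti0(n, edges) == 1:
            intact += 1
    return intact / trials
```

Redundant links (here the next-nearest-neighbor shortcuts) are what keep the invariant stable, a rough analogue of the over-complete place-cell coverage the paper relies on.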

  8. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is particularly well suited for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
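
    The core of such an event-driven scheme is that, for a perfect (non-leaky) integrator, the next threshold crossing can be computed exactly, so the simulation jumps from event to event via a priority queue instead of stepping on a fixed grid. The sketch below is a simplified deterministic version (constant positive drift standing in for the paper's stochastic input, a uniform delay, Dirac-like synaptic kicks); stale spike predictions are invalidated with a version counter, which is one standard bookkeeping choice rather than the authors' exact algorithm.

```python
import heapq

def simulate_pif(drifts, weights, delay, t_end):
    """Event-driven simulation of perfect integrate-and-fire neurons
    (threshold 1, reset 0) with delayed Dirac (delta-pulse) couplings.
    drifts[i]     : constant input drift of neuron i (assumed > 0)
    weights[i][j] : synaptic kick delivered from neuron i onto j
    delay         : uniform axonal delay
    Returns the temporally ordered list of (time, neuron) spikes."""
    n = len(drifts)
    v = [0.0] * n          # membrane potentials
    last = [0.0] * n       # time of each neuron's last state update
    version = [0] * n      # bumping this invalidates stale predictions
    spikes, heap = [], []

    def predict(i, now):
        # perfect integrator: the next threshold crossing is exact
        t_sp = now + max(0.0, 1.0 - v[i]) / drifts[i]
        heapq.heappush(heap, (t_sp, 0, i, version[i], 0.0))

    for i in range(n):
        predict(i, 0.0)

    while heap:
        t, kind, i, ver, w = heapq.heappop(heap)
        if t > t_end:
            break
        if kind == 0 and ver != version[i]:
            continue                        # prediction obsoleted by an arrival
        v[i] += drifts[i] * (t - last[i])   # integrate drift up to this event
        last[i] = t
        if kind == 0:                       # threshold crossing: spike, reset
            spikes.append((t, i))
            v[i] = 0.0
            version[i] += 1
            predict(i, t)
            for j in range(n):              # schedule delayed arrivals
                if weights[i][j] != 0.0:
                    heapq.heappush(heap, (t + delay, 1, j, 0, weights[i][j]))
        else:                               # synaptic arrival: apply the kick
            v[i] += w
            version[i] += 1
            predict(i, t)
    return spikes
```

Because every interaction is processed in exact temporal order, the scheme sidesteps the discretization ambiguities the abstract describes; the stochastic case additionally requires sampling first-passage times in place of the deterministic `predict`.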

  9. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks.

    PubMed

    Honegger, Thibault; Thielen, Moritz I; Feizi, Soheil; Sanjana, Neville E; Voldman, Joel

    2016-06-22

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined, topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed, but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report, for the first time in vitro, basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated from their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  10. Field coupling-induced pattern formation in two-layer neuronal network

    NASA Astrophysics Data System (ADS)

    Qin, Huixin; Wang, Chunni; Cai, Ning; An, Xinlei; Alzahrani, Faris

    2018-07-01

    The exchange of charged ions across the membrane can generate fluctuations of the membrane potential and also complex electromagnetic induction effects. Diversity in the excitability of neurons induces different mode selection and dynamical responses to external stimuli. Based on a neuron model with electromagnetic induction, which is described by magnetic flux and a memristor, a two-layer network is proposed to discuss pattern control and wave propagation in the network. In each layer, gap junction coupling is applied to connect the neurons, while field coupling is considered between the two layers of the network. The field coupling is approximated by coupling of the magnetic flux, which is associated with the distribution of the electromagnetic field. It is found that an appropriate intensity of field coupling can enhance wave propagation from one layer to the other, and beautiful spatial patterns are formed. The target wave developed in the second layer differs from the target wave triggered in the first layer when the two layers have different excitabilities. The potential mechanism could be that pacemaker-like driving from the first layer is encoded by the second layer.

  11. Activity-dependent switch of GABAergic inhibition into glutamatergic excitation in astrocyte-neuron networks

    PubMed Central

    Perea, Gertrudis; Gómez, Ricardo; Mederos, Sara; Covelo, Ana; Ballesteros, Jesús J; Schlosser, Laura; Hernández-Vivanco, Alicia; Martín-Fernández, Mario; Quintana, Ruth; Rayan, Abdelrahman; Díez, Adolfo; Fuenzalida, Marco; Agarwal, Amit; Bergles, Dwight E; Bettler, Bernhard; Manahan-Vaughan, Denise; Martín, Eduardo D; Kirchhoff, Frank; Araque, Alfonso

    2016-01-01

    Interneurons are critical for proper neural network function and can activate Ca2+ signaling in astrocytes. However, the impact of interneuron-astrocyte signaling on neuronal network operation remains unknown. Using the simplest hippocampal Astrocyte-Neuron network, i.e., GABAergic interneuron, pyramidal neuron, single CA3-CA1 glutamatergic synapse, and astrocytes, we found that interneuron-astrocyte signaling dynamically affected excitatory neurotransmission in an activity- and time-dependent manner, and determined the sign (inhibition vs potentiation) of the GABA-mediated effects. While synaptic inhibition was mediated by GABAA receptors, potentiation involved astrocyte GABAB receptors, astrocytic glutamate release, and presynaptic metabotropic glutamate receptors. Using conditional astrocyte-specific GABAB receptor (Gabbr1) knockout mice, we confirmed the glial source of the interneuron-induced potentiation, and demonstrated the involvement of astrocytes in hippocampal theta and gamma oscillations in vivo. Therefore, astrocytes decode interneuron activity and transform inhibitory into excitatory signals, contributing to the emergence of novel network properties resulting from the interneuron-astrocyte interplay. DOI: http://dx.doi.org/10.7554/eLife.20362.001 PMID:28012274

  12. Generalized activity equations for spiking neural network dynamics.

    PubMed

    Buice, Michael A; Chow, Carson C

    2013-01-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate than rate neurons because of the inherent disparity in time scales: the spike duration is much shorter than the inter-spike interval, which in turn is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could self-consistently carry information about spiking rates and correlations between spikes. Here, we show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter, such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.

  13. Self-organized criticality occurs in non-conservative neuronal networks during Up states

    PubMed Central

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-01-01

    During sleep, under anesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between Up and Down states (UDS) characterized by distinct membrane potentials and spike rates [1, 2, 3, 4, 5]. Another phenomenon observed in preparations similar to those that exhibit UDS, such as anesthetized rats [6], brain slices and cultures devoid of sensory input [7], as well as awake monkey cortex [8], is self-organized criticality (SOC). This is characterized by activity “avalanches” whose size distributions obey a power law with a critical exponent of about −3/2 and a branching parameter near unity. Recent work has demonstrated SOC in conservative neuronal network models [9, 10]; however, critical behavior breaks down when biologically realistic non-conservatism is introduced [9]. Here we report robust SOC behavior in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels corresponding to Up and Down states, that the networks switch spontaneously between them, and that Up states are critical while Down states are subcritical. PMID:21804861
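
    The quoted signature of criticality (power-law avalanche sizes with exponent about −3/2 at branching parameter near 1) is exactly what a critical branching process produces. The sketch below is an illustrative Galton-Watson toy model, not the paper's integrate-and-fire network: each active neuron triggers a Poisson-distributed number of successors, and at branching = 1 the avalanche-size distribution follows the ~ s^(-3/2) law.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm for sampling a Poisson variate (small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def avalanche_size(branching=1.0, rng=random, cap=10_000):
    """Total size of one avalanche in a Galton-Watson branching process
    seeded by a single active neuron; branching = 1 is the critical
    point. `cap` truncates the rare very large critical avalanches."""
    size, active = 1, 1
    while active and size < cap:
        children = sum(poisson(branching, rng) for _ in range(active))
        size += children
        active = children
    return size
```

Subcritical branching (branching < 1, the paper's Down states) yields exponentially distributed small avalanches instead, which is the critical/subcritical contrast the abstract reports.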

  14. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks

    NASA Astrophysics Data System (ADS)

    Honegger, Thibault; Thielen, Moritz I.; Feizi, Soheil; Sanjana, Neville E.; Voldman, Joel

    2016-06-01

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined, topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed, but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report, for the first time in vitro, basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated from their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  15. Robust Adaptive Synchronization of Ring Configured Uncertain Chaotic FitzHugh–Nagumo Neurons under Direction-Dependent Coupling

    PubMed Central

    Iqbal, Muhammad; Rehan, Muhammad; Hong, Keum-Shik

    2018-01-01

    This paper addresses the dynamical modeling, behavior analysis, and synchronization of a network of four different FitzHugh–Nagumo (FHN) neurons with unknown parameters linked in a ring configuration under direction-dependent coupling. The main purpose is to investigate a robust adaptive control law for the synchronization of uncertain and perturbed neurons communicating in a medium of bidirectional coupling. The neurons are assumed to be different and interconnected in a ring structure. The strength of the gap junctions is taken to be different for each link in the network, owing to the properties of the inter-neuronal coupling medium. A robust adaptive control mechanism based on Lyapunov stability analysis is employed, and theoretical criteria are derived to realize the synchronization of the network of four FHN neurons in a ring form with unknown parameters under direction-dependent coupling and disturbances. To our knowledge, the proposed scheme, synchronization of dissimilar neurons under external electrical stimuli, coupled in a ring communication topology, with all parameters unknown, and subject to a directional coupling medium and perturbations, is addressed here for the first time. To demonstrate the efficacy of the proposed strategy, simulation results are provided. PMID:29535622
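
    The underlying setup, FitzHugh-Nagumo neurons coupled through gap junctions in a ring, can be sketched with plain diffusive coupling and Euler integration. This is only the uncontrolled synchronization baseline with identical neurons and illustrative parameter values; it does not implement the paper's robust adaptive control law or its direction-dependent, uncertain couplings.

```python
def fhn_ring(n=4, g=1.0, I=0.5, eps=0.08, a=0.7, b=0.8,
             dt=0.01, steps=4000):
    """Euler sketch of n identical FitzHugh-Nagumo neurons coupled
    diffusively (gap-junction-like) in a ring:
        v' = v - v^3/3 - w + I + g*(v_left + v_right - 2v)
        w' = eps*(v + a - b*w)
    Returns the final (v, w) states; with sufficient coupling g the
    membrane variables converge toward a common trajectory."""
    v = [0.1 * i for i in range(n)]   # distinct initial conditions
    w = [0.0] * n
    for _ in range(steps):
        dv, dw = [0.0] * n, [0.0] * n
        for i in range(n):
            coup = g * (v[(i - 1) % n] + v[(i + 1) % n] - 2 * v[i])
            dv[i] = v[i] - v[i] ** 3 / 3 - w[i] + I + coup
            dw[i] = eps * (v[i] + a - b * w[i])
        v = [v[i] + dt * dv[i] for i in range(n)]
        w = [w[i] + dt * dw[i] for i in range(n)]
    return v, w
```

The paper's contribution lies in making synchronization robust when the neurons differ, the gap-junction strengths are unknown and direction-dependent, and disturbances act on the network, which requires the adaptive control terms this baseline omits.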

  16. Functional neuroanatomy of the central noradrenergic system.

    PubMed

    Szabadi, Elemer

    2013-08-01

    The central noradrenergic neurone, like the peripheral sympathetic neurone, is characterized by a diffusely arborizing terminal axonal network. The central neurones aggregate in distinct brainstem nuclei, of which the locus coeruleus (LC) is the most prominent. LC neurones project widely to most areas of the neuraxis, where they mediate dual effects: neuronal excitation by α₁-adrenoceptors and inhibition by α₂-adrenoceptors. The LC plays an important role in physiological regulatory networks. In the sleep/arousal network the LC promotes wakefulness, via excitatory projections to the cerebral cortex and other wakefulness-promoting nuclei, and inhibitory projections to sleep-promoting nuclei. The LC, together with other pontine noradrenergic nuclei, modulates autonomic functions by excitatory projections to preganglionic sympathetic, and inhibitory projections to preganglionic parasympathetic neurones. The LC also modulates the acute effects of light on physiological functions ('photomodulation'): stimulation of arousal and sympathetic activity by light via the LC opposes the inhibitory effects of light mediated by the ventrolateral preoptic nucleus on arousal and by the paraventricular nucleus on sympathetic activity. Photostimulation of arousal by light via the LC may enable diurnal animals to function during daytime. LC neurones degenerate early and progressively in Parkinson's disease and Alzheimer's disease, leading to cognitive impairment, depression and sleep disturbance.

  17. The formation and distribution of hippocampal synapses on patterned neuronal networks

    NASA Astrophysics Data System (ADS)

    Dowell-Mesfin, Natalie M.

    Communication within the central nervous system is highly orchestrated, with neurons forming trillions of specialized junctions called synapses. In vivo, biochemical and topographical cues can regulate neuronal growth. Biochemical cues also influence synaptogenesis and synaptic plasticity. The effects of topography on the development of synapses have been less studied. In vitro, neuronal growth is unorganized and complex, making it difficult to study the development of networks. Patterned topographical cues guide and control the growth of neuronal processes (axons and dendrites) into organized networks. The aim of this dissertation was to determine if patterned topographical cues can influence synapse formation and distribution. Standard fabrication and compression molding procedures were used to produce silicon masters and polystyrene replicas with topographical cues presented as 1 μm high pillars with diameters of 0.5 and 2.0 μm and gaps of 1.0 to 5.0 μm. Embryonic rat hippocampal neurons were grown on the patterned surfaces. A developmental analysis with immunocytochemistry was used to assess the distribution of pre- and post-synaptic proteins. Activity-dependent pre-synaptic vesicle uptake using functional imaging dyes was also performed. Adaptive filtering computer algorithms identified synapses by segmenting juxtaposed pairs of pre- and post-synaptic labels. Synapse number and area were automatically extracted from each deconvolved data set. In addition, neuronal processes were traced automatically to assess changes in synapse distribution. The results of these experiments demonstrated that patterned topographic cues can induce organized and functional neuronal networks that can serve as models for the study of synapse formation and plasticity as well as for the development of neuroprosthetic devices.

  18. Neuronal avalanches and learning

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems of biology. While the physical and chemical mechanisms at the basis of the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity property of the network, namely the capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by a power law distribution for their size and duration, features found in other problems in the context of the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.

  19. The neural representation of the gender of faces in the primate visual system: A computer modeling study.

    PubMed

    Minot, Thomas; Dury, Hannah L; Eguchi, Akihiro; Humphreys, Glyn W; Stringer, Simon M

    2017-03-01

    We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections are modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons have learned to respond exclusively to either male or female faces. With the inclusion of short range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
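
    The trace learning rule mentioned in the abstract binds temporally adjacent inputs onto the same neuron by using a decaying memory trace of recent postsynaptic activity in the weight update. The sketch below is a single-neuron toy version with made-up binary "views" of one face; the rule's general form (trace update plus Hebbian step and weight normalization) is standard, but eta, alpha, and the specific vectors are illustrative choices, not the model's actual parameters.

```python
def train_trace(views, eta=0.5, alpha=0.1, epochs=20):
    """Toy trace-learning sketch: one output neuron repeatedly sees
    successive 'views' of the same face; the trace ybar carries
    activity across views, so features of all views are bound onto
    the same weight vector. Weights are L2-normalized to stay bounded."""
    n_in = len(views[0])
    w = [0.25] * n_in
    for _ in range(epochs):
        ybar = 0.0
        for x in views:
            y = sum(wi * xi for wi, xi in zip(w, x))   # feedforward response
            ybar = (1 - eta) * ybar + eta * y          # temporal memory trace
            w = [wi + alpha * ybar * xi for wi, xi in zip(w, x)]
            norm = sum(wi * wi for wi in w) ** 0.5
            w = [wi / norm for wi in w]
    return w
```

After training, the neuron responds more strongly to every trained view than to a vector of unseen features, a miniature of the view-invariant gender-selective responses the simulations report.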

  20. Spatio-temporal specialization of GABAergic septo-hippocampal neurons for rhythmic network activity.

    PubMed

    Unal, Gunes; Crump, Michael G; Viney, Tim J; Éltes, Tímea; Katona, Linda; Klausberger, Thomas; Somogyi, Peter

    2018-03-03

    Medial septal GABAergic neurons of the basal forebrain innervate the hippocampus and related cortical areas, contributing to the coordination of network activity, such as theta oscillations and sharp wave-ripple events, via a preferential innervation of GABAergic interneurons. Individual medial septal neurons display diverse activity patterns, which may be related to their termination in different cortical areas and/or to the different types of innervated interneurons. To test these hypotheses, we extracellularly recorded and juxtacellularly labeled single medial septal neurons in anesthetized rats in vivo during hippocampal theta and ripple oscillations, traced their axons to distant cortical target areas, and analyzed their postsynaptic interneurons. Medial septal GABAergic neurons exhibiting different hippocampal theta phase preferences and/or sharp wave-ripple related activity terminated in restricted hippocampal regions, and selectively targeted a limited number of interneuron types, as established on the basis of molecular markers. We demonstrate the preferential innervation of bistratified cells in CA1 and of basket cells in CA3 by individual axons. One group of septal neurons was suppressed during sharp wave-ripples, maintained their firing rate across theta and non-theta network states and mainly fired along the descending phase of CA1 theta oscillations. In contrast, neurons that were active during sharp wave-ripples increased their firing significantly during "theta" compared to "non-theta" states, with most firing during the ascending phase of theta oscillations. These results demonstrate that specialized septal GABAergic neurons contribute to the coordination of network activity through parallel, target area- and cell type-selective projections to the hippocampus.
