Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi
2013-01-01
Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
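As a hedged illustration of the kind of spectral analysis described above (not the authors' pipeline), the sketch below estimates the power spectrum of a population fluorescence trace with Welch's method; the synthetic signal, imaging rate and burst frequency are assumptions made for illustration.

```python
# Hedged sketch: estimating the power spectrum of a culture-wide fluorescence
# trace. The synthetic signal, sampling rate and burst frequency are assumed.
import numpy as np
from scipy.signal import welch

fs = 20.0                          # imaging rate (frames/s), assumed
t = np.arange(0, 600, 1.0 / fs)    # 10 min recording
# Synthetic culture-wide signal: slow bursting rhythm plus noise
trace = np.sin(2 * np.pi * 0.1 * t) ** 8 + 0.2 * np.random.randn(t.size)

freqs, psd = welch(trace, fs=fs, nperseg=2048)
peak = freqs[np.argmax(psd[1:]) + 1]  # dominant (non-DC) frequency
print(f"dominant network frequency ~ {peak:.3f} Hz")
```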
Arunachalam, Viswanathan; Akhavan-Tabatabaei, Raha; Lopez, Cristina
2013-01-01
Classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. Vidybida (2008), in a refreshing departure, proposed models of binding neurons in which the trace of an input is remembered only for a finite, fixed period of time, after which it is forgotten. Binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper explicitly develops several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which interconnected neurons receive excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
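A minimal sketch of the binding-neuron idea, assuming Poisson input, a fixed memory time tau and a threshold N0 (all illustrative values, not taken from the paper); it estimates the firing-time distribution by Monte Carlo rather than deriving it analytically.

```python
# Hedged sketch: Monte Carlo estimate of the first-firing-time distribution of a
# binding neuron with finite input memory. Parameters are illustrative only.
import numpy as np

def first_firing_time(rate=50.0, tau=0.02, n0=3, rng=None):
    rng = rng or np.random.default_rng()
    t, stored = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate)             # next Poisson input impulse
        stored = [s for s in stored if t - s < tau]  # forget impulses older than tau
        stored.append(t)
        if len(stored) >= n0:                        # threshold reached -> fire
            return t

rng = np.random.default_rng(0)
samples = np.array([first_firing_time(rng=rng) for _ in range(10000)])
print("mean firing time:", samples.mean(), "s, CV:", samples.std() / samples.mean())
```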
Developing neuronal networks: Self-organized criticality predicts the future
NASA Astrophysics Data System (ADS)
Pu, Jiangbo; Gong, Hui; Li, Xiangning; Luo, Qingming
2013-01-01
Self-organized criticality emerging in neural activity is one of the key concepts used to describe the formation and function of developing neuronal networks. The relationship between critical dynamics and neural development is both theoretically and experimentally appealing. However, whereas it is well known that cortical networks exhibit a rich repertoire of activity patterns at different stages of in vitro maturation, how activity patterns evolve over the entire course of neural development remains unclear. Here we show that a series of metastable network states emerged in the developing and ``aging'' process of hippocampal networks cultured from dissociated rat neurons. The unidirectional sequence of state transitions could only be observed in networks showing power-law scaling of distributed neuronal avalanches. Our data suggest that self-organized criticality may guide spontaneous activity into a sequential succession of homeostatically-regulated transient patterns during development, which may help to predict the future course of neural development at early ages.
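A hedged sketch of how neuronal avalanches can be extracted from binned population activity and checked for power-law-like scaling; the surrogate spike counts, bin width and fitting shortcut are assumptions, not the authors' procedure.

```python
# Hedged sketch: avalanche extraction and a crude scaling check on surrogate data.
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(0.4, size=200000)   # spikes per time bin, toy surrogate data

# An avalanche is a run of consecutive non-empty bins bounded by empty bins.
sizes, current = [], 0
for c in counts:
    if c > 0:
        current += c
    elif current > 0:
        sizes.append(current)
        current = 0
sizes = np.array(sizes)

# Crude check of scaling: log-log slope of the complementary CDF
s = np.sort(sizes)
ccdf = 1.0 - np.arange(1, s.size + 1) / s.size
mask = (s > 1) & (ccdf > 0)
slope = np.polyfit(np.log(s[mask]), np.log(ccdf[mask]), 1)[0]
print("approximate CCDF slope (exponent - 1):", slope)
```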
Fukushima, Kazuyuki; Miura, Yuji; Sawada, Kohei; Yamazaki, Kazuto; Ito, Masashi
2016-01-01
Using human cell models mimicking the central nervous system (CNS) provides a better understanding of the human CNS, and it is a key strategy to improve success rates in CNS drug development. In the CNS, neurons function as networks in which astrocytes play important roles. Thus, an assessment system of neuronal network functions in a co-culture of human neurons and astrocytes has potential to accelerate CNS drug development. We previously demonstrated that human hippocampus-derived neural stem/progenitor cells (HIP-009 cells) were a novel tool to obtain human neurons and astrocytes in the same culture. In this study, we applied HIP-009 cells to a multielectrode array (MEA) system to detect neuronal signals as neuronal network functions. We observed spontaneous firing of HIP-009 neurons, and validated the functional formation of neuronal networks pharmacologically. By using this assay system, we investigated the effects of several reference compounds, including agonists and antagonists of glutamate and γ-aminobutyric acid receptors, and of sodium, potassium, and calcium channels, on neuronal network functions, using firing and burst numbers, and synchrony, as readouts. These results indicate that the HIP-009/MEA assay system is applicable to the pharmacological assessment of drug candidates affecting synaptic functions for CNS drug development. © 2015 Society for Laboratory Automation and Screening.
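The sketch below illustrates, under stated assumptions, the kind of MEA readouts mentioned above (bursts and synchrony) computed from spike time stamps; the max-ISI burst criterion and all thresholds are common conventions rather than the parameters used in the study.

```python
# Hedged sketch: simple MEA-style readouts (burst count, synchrony) from spike times.
# All thresholds and the synthetic trains are illustrative assumptions.
import numpy as np

def burst_count(spike_times, max_isi=0.1, min_spikes=3):
    """Count bursts defined as >= min_spikes spikes with ISIs below max_isi (s)."""
    bursts, run = 0, 1
    for isi in np.diff(spike_times):
        run = run + 1 if isi <= max_isi else 1
        if run == min_spikes:
            bursts += 1
    return bursts

def synchrony(trains, t_max, bin_width=0.05):
    """Mean pairwise correlation of binned spike trains."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    binned = np.array([np.histogram(tr, bins=edges)[0] for tr in trains], float)
    corr = np.corrcoef(binned)
    iu = np.triu_indices(len(trains), k=1)
    return np.nanmean(corr[iu])

rng = np.random.default_rng(2)
trains = [np.sort(rng.uniform(0, 60, rng.integers(100, 300))) for _ in range(8)]
print("bursts on electrode 0:", burst_count(trains[0]))
print("network synchrony index:", synchrony(trains, t_max=60.0))
```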
Three-dimensional neural cultures produce networks that mimic native brain activity.
Bourke, Justin L; Quigley, Anita F; Duchi, Serena; O'Connell, Cathal D; Crook, Jeremy M; Wallace, Gordon G; Cook, Mark J; Kapsa, Robert M I
2018-02-01
Development of brain function is critically dependent on neuronal networks organized through three dimensions. Culture of central nervous system neurons has traditionally been limited to two dimensions, restricting growth patterns and network formation to a single plane. Here, with the use of multichannel extracellular microelectrode arrays, we demonstrate that neurons cultured in a true three-dimensional environment recapitulate native neuronal network formation and produce functional outcomes more akin to in vivo neuronal network activity. Copyright © 2017 John Wiley & Sons, Ltd.
Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?
Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R
2014-01-08
How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.
Nanostructured superhydrophobic substrates trigger the development of 3D neuronal networks.
Limongi, Tania; Cesca, Fabrizia; Gentile, Francesco; Marotta, Roberto; Ruffilli, Roberta; Barberis, Andrea; Dal Maschio, Marco; Petrini, Enrica Maria; Santoriello, Stefania; Benfenati, Fabio; Di Fabrizio, Enzo
2013-02-11
The generation of 3D networks of primary neurons is a major challenge in neuroscience. Here, a novel method is presented for 3D neuronal culture on superhydrophobic (SH) substrates. How nano-patterned SH devices stimulate neurons to build 3D networks is investigated. Scanning electron microscopy and confocal imaging show that, soon after plating, neurites adhere to the nanopatterned pillar sidewalls and are subsequently pulled between pillars into a suspended position. These neurons display an enhanced survival rate compared to standard cultures and develop mature networks with physiological excitability. These findings underline the importance of using nanostructured SH surfaces for directing 3D neuronal growth, as well as for the design of biomaterials for neuronal regeneration. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cullen, D Kacy; R Patel, Ankur; Doorish, John F; Smith, Douglas H; Pfister, Bryan J
2008-12-01
Neural-electrical interface platforms are being developed to extracellularly monitor neuronal population activity. Polyaniline-based electrically conducting polymer fibers are attractive substrates for sustained functional interfaces with neurons due to their flexibility, tailored geometry and controlled electro-conductive properties. In this study, we addressed the neurobiological considerations of utilizing small diameter (<400 microm) fibers consisting of a blend of electrically conductive polyaniline and polypropylene (PA-PP) as the backbone of encapsulated tissue-engineered neural-electrical relays. We devised new approaches to promote survival, adhesion and neurite outgrowth of primary dorsal root ganglion neurons on PA-PP fibers. We attained a greater than ten-fold increase in the density of viable neurons on fiber surfaces, to approximately 700 neurons mm(-2), by manipulating surrounding surface charges to bias settling neuronal suspensions toward fibers coated with cell-adhesive ligands. This stark increase in neuronal density resulted in robust neuritic extension and network formation directly along the fibers. Additionally, we encapsulated these neuronal networks on PA-PP fibers using agarose to form a protective barrier while potentially facilitating network stability. Following encapsulation, the neuronal networks maintained integrity, high viability (>85%) and intimate adhesion to PA-PP fibers. These efforts accomplished key prerequisites for the establishment of functional electrical interfaces with neuronal populations using small diameter PA-PP fibers: specifically, improved neurocompatibility, high-density neuronal adhesion and neuritic network development directly on fiber surfaces.
Kirwan, Peter; Turner-Bridger, Benita; Peter, Manuel; Momoh, Ayiba; Arambepola, Devika; Robinson, Hugh P. C.; Livesey, Frederick J.
2015-01-01
A key aspect of nervous system development, including that of the cerebral cortex, is the formation of higher-order neural networks. Developing neural networks undergo several phases with distinct activity patterns in vivo, which are thought to prune and fine-tune network connectivity. We report here that human pluripotent stem cell (hPSC)-derived cerebral cortex neurons form large-scale networks that reflect those found in the developing cerebral cortex in vivo. Synchronised oscillatory networks develop in a highly stereotyped pattern over several weeks in culture. An initial phase of increasing frequency of oscillations is followed by a phase of decreasing frequency, before giving rise to non-synchronous, ordered activity patterns. hPSC-derived cortical neural networks are excitatory, driven by activation of AMPA- and NMDA-type glutamate receptors, and can undergo NMDA-receptor-mediated plasticity. Investigating single neuron connectivity within PSC-derived cultures, using rabies-based trans-synaptic tracing, we found two broad classes of neuronal connectivity: most neurons have small numbers (<10) of presynaptic inputs, whereas a small set of hub-like neurons have large numbers of synaptic connections (>40). These data demonstrate that the formation of hPSC-derived cortical networks mimics in vivo cortical network development and function, demonstrating the utility of in vitro systems for mechanistic studies of human forebrain neural network biology. PMID:26395144
Autapse-Induced Spiral Wave in Network of Neurons under Noise
Qin, Huixin; Ma, Jun; Wang, Chunni; Wu, Ying
2014-01-01
An autapse plays an important role in regulating the electrical activity of a neuron by feeding back a time-delayed current onto the neuronal membrane. Autapses are considered in a local area of a regular network of neurons to investigate the development of spatiotemporal patterns; the emergence of a spiral wave is observed, although it fails to grow and occupy the network completely. It is found that the spiral wave can be induced to occupy a larger area of the network under optimized noise, with either a periodic or a no-flux boundary condition. The developed spiral wave, with its self-sustained property, can regulate the collective behavior of neurons as a pacemaker. To detect the collective behavior, a statistical factor of synchronization is calculated to investigate the emergence of an ordered state in the network. The network keeps an ordered state when a self-sustained spiral wave is formed under noise and an autapse in a local area of the network, and this is independent of the choice of periodic or no-flux boundary conditions. The developed stable spiral wave could be helpful for memory due to its distinct self-sustained property. PMID:24967577
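One common definition of such a statistical factor of synchronization, computed from membrane-potential time series, is sketched below; the exact formula used in the paper may differ in detail, and the data here are synthetic.

```python
# Hedged sketch: a mean-field synchronization factor for a population of neurons,
# computed from membrane potentials V[i, t]. A common definition, not necessarily
# the paper's exact one; the test data are synthetic.
import numpy as np

def synchronization_factor(V):
    """R = Var_t(mean field) / mean_i(Var_t(V_i)); R -> 1 for full synchrony."""
    F = V.mean(axis=0)                       # instantaneous mean field
    num = F.var()                            # temporal variance of the mean field
    den = V.var(axis=1).mean()               # average single-neuron variance
    return num / den if den > 0 else 0.0

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)
common = np.sin(2 * np.pi * 1.0 * t)                    # shared oscillation
V = common + 0.8 * rng.standard_normal((100, t.size))   # 100 noisy neurons
print("synchronization factor R =", synchronization_factor(V))
```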
Developmental time windows for axon growth influence neuronal network topology.
Lim, Sol; Kaiser, Marcus
2015-04-01
Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on the type of neuron and cortical layer. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately, one neuron after another (serial, i.e., no overlaps in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: neurons that started axon growth early on in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths, while neurons exhibited more homogeneous connectivity patterns for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks, opening up the possibility of estimating developmental mechanisms a posteriori from the network properties of a developed network.
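A toy model, under illustrative assumptions, of why serial versus parallel axon growth can produce different out-degree distributions when postsynaptic capacity is limited; it is a caricature of the comparison above, not the authors' growth model.

```python
# Hedged toy model: serial vs. parallel axon growth competing for a limited number
# of postsynaptic sites. Neuron count, geometry, capacity and reach are assumed.
import numpy as np

rng = np.random.default_rng(7)
n, capacity, reach = 100, 10, 0.3
pos = rng.random((n, 2))                      # neurons scattered in a unit square
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)

def grow(serial):
    in_deg = np.zeros(n, int)
    out_deg = np.zeros(n, int)
    # each neuron's potential targets, nearest first (axon reaches close cells first)
    targets = [[j for j in np.argsort(dist[i]) if j != i and dist[i, j] < reach]
               for i in range(n)]
    max_len = max(len(t) for t in targets)
    if serial:   # one neuron finishes growing before the next starts
        order = [(i, k) for i in range(n) for k in range(len(targets[i]))]
    else:        # all neurons extend their axons in interleaved rounds
        order = [(i, k) for k in range(max_len) for i in range(n) if k < len(targets[i])]
    for i, k in order:
        j = targets[i][k]
        if in_deg[j] < capacity:              # postsynaptic sites are a limited resource
            in_deg[j] += 1
            out_deg[i] += 1
    return out_deg

for serial, name in [(True, "serial"), (False, "parallel")]:
    print(f"{name:8s} growth: out-degree std = {grow(serial).std():.2f}")
```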
Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models
NASA Astrophysics Data System (ADS)
Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris
2015-11-01
Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. In particular, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations to exhibit transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and of the coupling connections between neurons in determining the firing patterns and the dynamical activities of the basal ganglia neuronal network, we propose the hypothesis that pathological behaviors under the Parkinsonian state may originate from the combined effects of the intrinsic properties of globus pallidus neurons and the synaptic conductances in the whole neuronal network. In order to establish a computationally efficient network model, the hybrid Izhikevich neuron model is used because of its capacity to capture the dynamical characteristics of biological neuronal activities. Detailed analysis of the individual Izhikevich neuron model can assist in understanding the roles of model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model and contributes to a further exploration of the underlying mechanisms of the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations of the firing rates and the emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuronal model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore both normal and abnormal basal ganglia functions. In particular, it provides an efficient way of emulating large-scale neuronal networks and potentially contributes to the development of improved therapies for neurological disorders such as Parkinson's disease.
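For reference, the sketch below integrates the standard two-dimensional Izhikevich neuron with Euler steps; the regular-spiking parameters and constant input current are textbook defaults, not the hybrid basal ganglia-thalamic parameters developed in the study.

```python
# Hedged sketch: the two-dimensional Izhikevich neuron with Euler integration.
# Regular-spiking parameters and the input current are textbook defaults.
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, dt=0.5, T=1000.0):
    steps = int(T / dt)
    v, u = -65.0, b * -65.0
    spikes, trace = [], np.empty(steps)
    for k in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike cutoff and reset
            spikes.append(k * dt)
            v, u = c, u + d
        trace[k] = v
    return np.array(spikes), trace

spikes, trace = izhikevich()
print("spike count in 1 s of simulated time:", len(spikes))
```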
Synchronization in a noise-driven developing neural network
NASA Astrophysics Data System (ADS)
Lin, I.-H.; Wu, R.-K.; Chen, C.-M.
2011-11-01
We use computer simulations to investigate the structural and dynamical properties of a developing neural network whose activity is driven by noise. Structurally, the constructed neural networks in our simulations exhibit the small-world properties that have been observed in several neural networks. The dynamical change of neuronal membrane potential is described by the Hodgkin-Huxley model, and two types of learning rules, namely spike-timing-dependent plasticity (STDP) and inverse STDP, are considered to restructure the synaptic strength between neurons. Clustered synchronized firing (SF) of the network is observed when the network connectivity (number of connections/maximal connections) is about 0.75, in which case the firing rate of neurons is only half of the network frequency. At a connectivity of 0.86, all neurons fire synchronously at the network frequency. The network SF frequency increases logarithmically with the culturing time of a growing network and decreases exponentially with the delay time in signal transmission. These conclusions are consistent with experimental observations. The phase diagrams of SF in a developing network are investigated for both learning rules.
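A minimal sketch of a pair-based STDP weight update, one common way to implement the plasticity rule mentioned above (inverse STDP would simply flip the sign of the update); the time constants and learning rates are illustrative assumptions.

```python
# Hedged sketch: pair-based STDP for a single synapse. Time constants and learning
# rates are illustrative; inverse STDP would reverse the sign of the update.
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return a_plus * np.exp(-dt / tau)
    else:         # post before pre -> depression
        return -a_minus * np.exp(dt / tau)

w = 0.5
pre_spikes = [10.0, 60.0, 110.0]
post_spikes = [15.0, 55.0, 130.0]
for t_pre, t_post in zip(pre_spikes, post_spikes):
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
print("synaptic weight after pairing:", w)
```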
Dong, Jing; Gao, Lingqi; Han, Junde; Zhang, Junjie; Zheng, Jijian
2017-07-01
Deprivation of spontaneous rhythmic electrical activity in early development by anesthesia administration, among other interventions, induces neuronal apoptosis. However, it is unclear whether enhancement of neuronal electrical activity attenuates neuronal apoptosis in either normal development or after anesthesia exposure. The present study investigated the effects of dopamine, an enhancer of spontaneous rhythmic electrical activity, on ketamine-induced neuronal apoptosis in the developing rat retina. TUNEL and immunohistochemical assays indicated that ketamine time- and dose-dependently aggravated physiological and ketamine-induced apoptosis and inhibited early-synchronized spontaneous network activity. Dopamine administration reversed ketamine-induced neuronal apoptosis, but did not reverse the inhibitory effects of ketamine on early synchronized spontaneous network activity despite enhancing it in controls. Blockade of D1, D2, and A2A receptors and inhibition of cAMP/PKA signaling partially antagonized the protective effect of dopamine against ketamine-induced apoptosis. Together, these data indicate that dopamine attenuates ketamine-induced neuronal apoptosis in the developing rat retina by activating the D1, D2, and A2A receptors, and upregulating cAMP/PKA signaling, rather than through modulation of early synchronized spontaneous network activity.
Synchronization properties of heterogeneous neuronal networks with mixed excitability type
NASA Astrophysics Data System (ADS)
Leone, Michael J.; Schurter, Brandon N.; Letson, Benjamin; Booth, Victoria; Zochowski, Michal; Fink, Christian G.
2015-03-01
We study the synchronization of neuronal networks with dynamical heterogeneity, showing that network structures with the same propensity for synchronization (as quantified by master stability function analysis) may develop dramatically different synchronization properties when heterogeneity is introduced with respect to neuronal excitability type. Specifically, we investigate networks composed of neurons with different types of phase response curves (PRCs), which characterize how oscillating neurons respond to excitatory perturbations. Neurons exhibiting type 1 PRC respond exclusively with phase advances, while neurons exhibiting type 2 PRC respond with either phase delays or phase advances, depending on when the perturbation occurs. We find that Watts-Strogatz small world networks transition to synchronization gradually as the proportion of type 2 neurons increases, whereas scale-free networks may transition gradually or rapidly, depending upon local correlations between node degree and excitability type. Random placement of type 2 neurons results in gradual transition to synchronization, whereas placement of type 2 neurons as hubs leads to a much more rapid transition, showing that type 2 hub cells easily "hijack" neuronal networks to synchronization. These results underscore the fact that the degree of synchronization observed in neuronal networks is determined by a complex interplay between network structure and the dynamical properties of individual neurons, indicating that efforts to recover structural connectivity from dynamical correlations must in general take both factors into account.
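The network classes discussed above can be generated with networkx, and "type 2" neurons placed either at random or on the highest-degree hubs, as in the sketch below; the network sizes and the fraction of type 2 cells are illustrative assumptions.

```python
# Hedged sketch: small-world and scale-free graphs with type 2 neurons placed
# randomly or on hubs. Sizes and the type 2 fraction are illustrative.
import networkx as nx
import numpy as np

n, frac_type2 = 200, 0.2
ws = nx.watts_strogatz_graph(n, k=8, p=0.1, seed=0)    # small-world
ba = nx.barabasi_albert_graph(n, m=4, seed=0)          # scale-free

def assign_type2(graph, fraction, on_hubs):
    k = int(fraction * graph.number_of_nodes())
    degrees = sorted(graph.degree, key=lambda x: x[1], reverse=True)
    if on_hubs:
        chosen = [node for node, _ in degrees[:k]]
    else:
        chosen = list(np.random.default_rng(1).choice(list(graph.nodes), k, replace=False))
    return set(chosen)

hub_set = assign_type2(ba, frac_type2, on_hubs=True)
rand_set = assign_type2(ba, frac_type2, on_hubs=False)
mean_deg = lambda s: np.mean([ba.degree(v) for v in s])
print("mean degree of type 2 cells  hubs:", mean_deg(hub_set), " random:", mean_deg(rand_set))
```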
Contestabile, Andrea; Moroni, Monica; Hallinan, Grace I.; Palazzolo, Gemma; Chad, John; Deinhardt, Katrin; Carugo, Dario
2018-01-01
Development of remote stimulation techniques for neuronal tissues represents a challenging goal. Among the potential methods, mechanical stimuli are the most promising vectors to convey information non-invasively into intact brain tissue. In this context, selective mechano-sensitization of neuronal circuits would pave the way to develop a new cell-type-specific stimulation approach. We report here, for the first time, the development and characterization of mechano-sensitized neuronal networks through the heterologous expression of an engineered bacterial large-conductance mechanosensitive ion channel (MscL). The neuronal functional expression of the MscL was validated through patch-clamp recordings upon application of calibrated suction pressures. Moreover, we verified the effective development of in-vitro neuronal networks expressing the engineered MscL in terms of cell survival, number of synaptic puncta and spontaneous network activity. The pure mechanosensitivity of the engineered MscL, with its wide genetic modification library, may represent a versatile tool to further develop a mechano-genetic approach. This article has an associated First Person interview with the first author of the paper. PMID:29361543
Bitzenhofer, Sebastian H; Ahlbeck, Joachim; Wolff, Amy; Wiegert, J. Simon; Gee, Christine E.; Oertner, Thomas G.; Hanganu-Opatz, Ileana L.
2017-01-01
Coordinated activity patterns in the developing brain may contribute to the wiring of neuronal circuits underlying future behavioural requirements. However, causal evidence for this hypothesis has been difficult to obtain owing to the absence of tools for selective manipulation of oscillations during early development. We established a protocol that combines optogenetics with electrophysiological recordings from neonatal mice in vivo to elucidate the substrate of early network oscillations in the prefrontal cortex. We show that light-induced activation of layer II/III pyramidal neurons that are transfected by in utero electroporation with a high-efficiency channelrhodopsin drives frequency-specific spiking and boosts network oscillations within the beta–gamma frequency range. By contrast, activation of layer V/VI pyramidal neurons causes nonspecific network activation. Thus, entrainment of neonatal prefrontal networks in fast rhythms relies on the activation of layer II/III pyramidal neurons. The approach used here may be useful for further interrogation of developing circuits and their behavioural readout. PMID:28216627
Analysis of neuronal cells of dissociated primary culture on high-density CMOS electrode array
Matsuda, Eiko; Mita, Takeshi; Hubert, Julien; Bakkum, Douglas; Frey, Urs; Hierlemann, Andreas; Takahashi, Hirokazu; Ikegami, Takashi
2017-01-01
Spontaneous development of neuronal cells was recorded from about 4–34 days in vitro (DIV) with a high-density CMOS array, which enables detailed study of the spatio-temporal activity of a neuronal culture. We used the CMOS array to characterize the evolution of the inter-spike interval (ISI) distribution of putative single neurons, and to estimate the network structure based on transfer entropy analysis, where each node corresponds to a single neuron. We observed that the ISI distributions gradually came to obey a power law as the network matured. The amount of information transferred between neurons increased at the early stage of development, but decreased as the network matured. These results suggest that both the ISI distribution and transfer entropy are very useful for characterizing the dynamic development of cultured neural cells over a few weeks. PMID:24109870
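A hedged sketch of a history-length-1 transfer entropy between two binarized spike trains, in the spirit of the analysis above; the binning, history length and surrogate data are simplifying assumptions rather than the authors' estimator.

```python
# Hedged sketch: transfer entropy (bits) between binary spike trains, one-bin history.
# Binning, history length and the surrogate data are simplifying assumptions.
import numpy as np

def transfer_entropy(x, y):
    """TE from binary train x to binary train y with one-bin history."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    yt1, yt, xt = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):            # y_{t+1}
        for b in (0, 1):        # y_t
            for c in (0, 1):    # x_t
                p_abc = np.mean((yt1 == a) & (yt == b) & (xt == c))
                p_bc = np.mean((yt == b) & (xt == c))
                p_ab = np.mean((yt1 == a) & (yt == b))
                p_b = np.mean(yt == b)
                if p_abc > 0 and p_bc > 0 and p_ab > 0 and p_b > 0:
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(4)
x = rng.random(20000) < 0.1
y = np.roll(x, 1) | (rng.random(20000) < 0.02)   # y is mostly a delayed copy of x
print("TE x->y:", transfer_entropy(x, y), "  TE y->x:", transfer_entropy(y, x))
```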
Millet, Larry J; Stewart, Matthew E; Nuzzo, Ralph G; Gillette, Martha U
2010-06-21
Wiring the nervous system relies on the interplay of intrinsic and extrinsic signaling molecules that control neurite extension, neuronal polarity, process maturation and experience-dependent refinement. Extrinsic signals establish and enrich neuron-neuron interactions during development. Understanding how such extrinsic cues direct neurons to establish neural connections in vitro will facilitate the development of organized neural networks for investigating the development and function of nervous system networks. Producing ordered networks of neurons with defined connectivity in vitro presents special technical challenges because the results must be compliant with the biological requirements of rewiring neural networks. Here we demonstrate the ability to form stable, instructive surface-bound gradients of laminin that guide postnatal hippocampal neuron development in vitro. Our work uses a three-channel, interconnected microfluidic device that permits the production of adlayers of planar substrates through the combination of laminar flow, diffusion and physisorption. Through simple flow modifications, a variety of patterns and gradients of laminin (LN) and fluorescein isothiocyanate-conjugated poly-l-lysine (FITC-PLL) were deposited to present neurons with an instructive substratum to guide neuronal development. We present three variations in substrate design that produce distinct growth regimens for postnatal neurons in dispersed cell cultures. In the first approach, diffusion-mediated gradients of LN were formed on cover slips to guide neurons toward increasing LN concentrations. In the second approach, a combined gradient of LN and FITC-PLL was produced using aspiration-driven laminar flow to restrict neuronal growth to a 15 microm wide growth zone at the center of the two superimposed gradients. The last approach demonstrates the capacity to combine binary lines of FITC-PLL in conjunction with surface gradients of LN and bovine serum albumin (BSA) to produce substrate adlayers that provide additional levels of control over growth. This work demonstrates the advantages of spatio-temporal fluid control for patterning surface-bound gradients using a simple microfluidics-based substrate deposition procedure. We anticipate that this microfluidics-based patterning approach will provide instructive patterns and surface-bound gradients to enable a new level of control in guiding neuron development and network formation.
Wyart, Claire; Ybert, Christophe; Bourdieu, Laurent; Herr, Catherine; Prinz, Christelle; Chatenay, Didier
2002-06-30
The use of ordered neuronal networks in vitro is a promising approach to study the development and the activity of small neuronal assemblies. However, in previous attempts, sufficient growth control and physiological maturation of neurons could not be achieved. Here we describe an original protocol in which polylysine patterns confine the adhesion of cell bodies to prescribed spots and neuritic growth to thin lines. Hippocampal neurons in these networks are kept healthy in serum-free medium for up to 5 weeks in vitro. Electrophysiology and immunochemistry show that the neurons exhibit mature excitatory and inhibitory synapses, and calcium imaging reveals spontaneous activity of neurons in isolated networks. We demonstrate that neurons in these geometrical networks form functional synapses preferentially with their first neighbors. We have, therefore, established a simple and robust protocol to constrain both the location of neuronal cell bodies and their pattern of connectivity. Moreover, the long-term maintenance of the geometry and the physiology of the networks raises the possibility of new applications for the systematic screening of pharmacological agents and for electronic-to-neuron devices.
GaAs Optoelectronic Integrated-Circuit Neurons
NASA Technical Reports Server (NTRS)
Lin, Steven H.; Kim, Jae H.; Psaltis, Demetri
1992-01-01
Monolithic GaAs optoelectronic integrated circuits developed for use as artificial neurons. Neural-network computer contains planar arrays of optoelectronic neurons, and variable synaptic connections between neurons effected by diffraction of light from volume hologram in photorefractive material. Basic principles of neural-network computers explained more fully in "Optoelectronic Integrated Circuits For Neural Networks" (NPO-17652). In present circuits, devices replaced by metal/semiconductor field effect transistors (MESFET's), which consume less power.
A real-time hybrid neuron network for highly parallel cognitive systems.
Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene
2016-08-01
For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under `real-time' constraints. In this paper, we propose an easily customisable, highly pipelined neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adversely affecting calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4x compared to the state of the art.
NASA Astrophysics Data System (ADS)
Ma, Jun; Yang, Li-Jian; Wu, Ying; Zhang, Cai-Rong
2010-09-01
The effects of small-world connections and noise on the formation and transition of spiral waves in networks of Hodgkin-Huxley neurons are investigated in detail. Some interesting results are found in our numerical studies. i) Quiescent neurons are activated to propagate electric signals to others through the generation and development of a spiral wave from a spiral seed in a small area. ii) A statistical factor is defined to describe the collective properties and the phase transition induced by the topology of the networks and by noise. iii) A stable rotating spiral wave can be generated and remains robust when the rewiring probability is below a certain threshold; otherwise, a spiral wave cannot be developed from the spiral seed, and breakup occurs for a stable rotating spiral wave. iv) Gaussian white noise is introduced on the membrane of neurons to study the noise-induced phase transition of spiral waves in small-world networks of neurons. It is confirmed that Gaussian white noise plays an active role in supporting and developing spiral waves in the networks of neurons, and the appearance of a smaller factor of synchronization indicates a higher possibility of inducing a spiral wave.
Training a Network of Electronic Neurons for Control of a Mobile Robot
NASA Astrophysics Data System (ADS)
Vromen, T. G. M.; Steur, E.; Nijmeijer, H.
An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated and results of an experiment with a mobile robot that is completely autonomously driving in an unknown environment with obstacles are presented.
Chevalier, Marc; Toporikova, Natalia; Simmers, John; Thoby-Brisson, Muriel
2016-01-01
Breathing is a vital rhythmic behavior generated by hindbrain neuronal circuitry, including the preBötzinger complex network (preBötC) that controls inspiration. The emergence of preBötC network activity during prenatal development has been described, but little is known regarding inspiratory neurons expressing pacemaker properties at embryonic stages. Here, we combined calcium imaging and electrophysiological recordings in mouse embryo brainstem slices together with computational modeling to reveal the existence of heterogeneous pacemaker oscillatory properties relying on distinct combinations of burst-generating INaP and ICAN conductances. The respective proportion of the different inspiratory pacemaker subtypes changes during prenatal development. Concomitantly, network rhythmogenesis switches from a purely INaP/ICAN-dependent mechanism at E16.5 to a combined pacemaker/network-driven process at E18.5. Our results provide the first description of pacemaker bursting properties in embryonic preBötC neurons and indicate that network rhythmogenesis undergoes important changes during prenatal development through alterations in both circuit properties and the biophysical characteristics of pacemaker neurons. DOI: http://dx.doi.org/10.7554/eLife.16125.001 PMID:27434668
Computational model of electrically coupled, intrinsically distinct pacemaker neurons.
Soto-Treviño, Cristina; Rabbah, Pascale; Marder, Eve; Nadim, Farzan
2005-07-01
Electrical coupling between neurons with similar properties is often studied. Electrical coupling between neurons with widely different intrinsic properties also occurs, but its role is less well understood. Inspired by the pacemaker group of the crustacean pyloric network, we developed a multicompartment, conductance-based model of a small network of intrinsically distinct, electrically coupled neurons. In the pyloric network, a small intrinsically bursting neuron, through gap junctions, drives 2 larger, tonically spiking neurons to reliably burst in-phase with it. Each model neuron has 2 compartments, one responsible for spike generation and the other for producing a slow, large-amplitude oscillation. We illustrate how these compartments interact and determine the dynamics of the model neurons. Our model captures the dynamic oscillation range measured from the isolated and coupled biological neurons. At the network level, we explore the range of coupling strengths for which synchronous bursting oscillations are possible. The spatial segregation of ionic currents significantly enhances the ability of the 2 neurons to burst synchronously, and the oscillation range of the model pacemaker network depends not only on the strength of the electrical synapse but also on the identity of the neuron receiving inputs. We also compare the activity of the electrically coupled, distinct neurons with that of a network of coupled identical bursting neurons. For small to moderate coupling strengths, the network of identical elements, when receiving asymmetrical inputs, can have a smaller dynamic range of oscillation than that of its constituent neurons in isolation.
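As a toy stand-in for electrical coupling between intrinsically distinct cells (not the multicompartment pyloric model itself), the sketch below couples an oscillating and a quiescent FitzHugh-Nagumo cell through an ohmic gap junction; all parameters are illustrative.

```python
# Hedged toy model: two intrinsically different FitzHugh-Nagumo cells coupled by an
# ohmic gap junction, showing how an oscillator can entrain a silent partner.
import numpy as np

def simulate(gc=0.3, dt=0.01, T=200.0):
    steps = int(T / dt)
    v = np.array([-1.0, -1.2])      # cell 0 oscillates, cell 1 is quiescent alone
    w = np.zeros(2)
    I_ext = np.array([0.5, 0.0])    # only cell 0 receives depolarizing drive
    V = np.empty((2, steps))
    for k in range(steps):
        i_gap = gc * (v[::-1] - v)  # current flowing through the gap junction
        dv = v - v**3 / 3.0 - w + I_ext + i_gap
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        V[:, k] = v
    return V

V = simulate()
print("cell 1 voltage range without its own drive:", V[1].min(), "to", V[1].max())
```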
Phenotypic Checkpoints Regulate Neuronal Development
Ben-Ari, Yehezkel; Spitzer, Nicholas C.
2010-01-01
Nervous system development proceeds by sequential gene expression mediated by cascades of transcription factors in parallel with sequences of patterned network activity driven by receptors and ion channels. These sequences are cell type- and developmental stage-dependent and modulated by paracrine actions of substances released by neurons and glia. How and to what extent these sequences interact to enable neuronal network development is not understood. Recent evidence demonstrates that CNS development requires intermediate stages of differentiation providing functional feedback that influences gene expression. We suggest that embryonic neuronal functions constitute a series of phenotypic checkpoint signatures; neurons failing to express these functions are delayed or developmentally arrested. Such checkpoints are likely to be a general feature of neuronal development and may constitute presymptomatic signatures of neurological disorders when they go awry. PMID:20864191
Intrinsically active and pacemaker neurons in pluripotent stem cell-derived neuronal populations.
Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig
2014-03-11
Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks.
Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A
2011-08-15
Modern drug discovery for Central Nervous System pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activity. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing an insight into the dynamics of networks. Taking advantage of MEA features and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, evaluated its sensitivity and explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
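A simple example of the pairwise quantity such a burst-correlation analysis builds on: a normalized cross-correlogram between two binned spike trains. The bin width, lag range and synthetic trains are assumptions for illustration, not the authors' algorithm.

```python
# Hedged sketch: normalized cross-correlogram between two binned spike trains.
# Bin width, lag range and the synthetic trains are illustrative assumptions.
import numpy as np

def binned_cross_correlation(t1, t2, t_max, bin_width=0.01, max_lag_bins=50):
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    a = np.histogram(t1, bins=edges)[0].astype(float)
    b = np.histogram(t2, bins=edges)[0].astype(float)
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    lags = np.arange(-max_lag_bins, max_lag_bins + 1)
    cc = np.array([np.mean(a[max(0, -l):len(a) - max(0, l)] *
                           b[max(0, l):len(b) - max(0, -l)]) for l in lags])
    return lags * bin_width, cc

rng = np.random.default_rng(5)
t1 = np.sort(rng.uniform(0, 60, 600))
t2 = np.sort(t1 + 0.02 + 0.005 * rng.standard_normal(600))   # train 2 lags train 1
lags, cc = binned_cross_correlation(t1, t2, t_max=61.0)
print("peak correlation at lag (s):", lags[np.argmax(cc)])
```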
Hebbian based learning with winner-take-all for spiking neural networks
NASA Astrophysics Data System (ADS)
Gupta, Ankur; Long, Lyle
2009-03-01
Learning methods for spiking neural networks are not as well developed as those for traditional neural networks, which widely use back-propagation training. We propose and implement a Hebbian-based learning method with winner-take-all competition for spiking neural networks. The approach is spike-time dependent, which makes it naturally well suited to a network of spiking neurons. Homeostasis is implemented alongside Hebbian learning, which ensures stability and quicker learning. Homeostasis implies that the net sum of incoming weights associated with a neuron remains the same. Winner-take-all is also implemented for competitive learning between output neurons. We implemented this learning rule on a biologically based vision processing system that we are developing, which uses layers of leaky integrate-and-fire neurons. When presented with four bars (or Gabor filters) of different orientations, the network learns to recognize the bar orientations (or Gabor filters). After training, each output neuron learns to recognize a bar at a specific orientation and responds by firing more vigorously to that bar and less vigorously to others. These neurons are found to have bell-shaped tuning curves and are similar to the simple cells experimentally observed by Hubel and Wiesel in the striate cortex of cat and monkey.
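A rate-based caricature of Hebbian learning with winner-take-all competition and homeostatic weight normalization, sketched under illustrative assumptions (input patterns, layer sizes, learning rate); it is not the spiking implementation described above, but it shows the same rule structure.

```python
# Hedged sketch: rate-based Hebbian learning with winner-take-all competition and
# homeostatic normalization of incoming weights. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_in, n_out, eta = 16, 4, 0.05
W = rng.random((n_out, n_in))
W /= W.sum(axis=1, keepdims=True)          # homeostasis: fixed incoming weight sum

# Four orthogonal "bar" patterns the output layer should learn to separate
patterns = np.kron(np.eye(4), np.ones(4))

for _ in range(500):
    x = patterns[rng.integers(4)]
    y = W @ x
    winner = np.argmax(y)                  # winner-take-all: only one output learns
    W[winner] += eta * y[winner] * x       # Hebbian update for the winner
    W[winner] /= W[winner].sum()           # renormalize its incoming weights

responses = W @ patterns.T
print("preferred pattern of each output neuron:", np.argmax(responses, axis=1))
```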
Connexin-Dependent Neuroglial Networking as a New Therapeutic Target.
Charvériat, Mathieu; Naus, Christian C; Leybaert, Luc; Sáez, Juan C; Giaume, Christian
2017-01-01
Astrocytes and neurons dynamically interact during physiological processes, and it is now widely accepted that they are both organized in plastic and tightly regulated networks. Astrocytes are connected through connexin-based gap junction channels, with brain region specificities, and those networks modulate neuronal activities, such as those involved in the sleep-wake cycle, and cognitive or sensory functions. Additionally, astrocyte domains have been involved in neurogenesis and neuronal differentiation during development; they participate in the "tripartite synapse" with both pre-synaptic and post-synaptic neurons by tuning down or up neuronal activities through the control of neuronal synaptic strength. Connexin-based hemichannels are also involved in these regulations of neuronal activities; however, this feature will not be considered in the present review. Furthermore, neuronal processes, transmitting electrical signals to chemical synapses, stringently control astroglial connexin expression and channel functions. Long-range energy trafficking toward neurons through connexin-coupled astrocytes and the plasticity of those networks are hence largely dependent on neuronal activity. Such reciprocal interactions between neurons and astrocyte networks involve neurotransmitters, cytokines, endogenous lipids, and peptides released by neurons but also by other brain cell types, including microglial and endothelial cells. Over the past 10 years, knowledge about neuroglial interactions has widened and now includes effects of CNS-targeting drugs such as antidepressants, antipsychotics, psychostimulants, or sedative drugs as potential modulators of connexin function and thus astrocyte networking activity. In physiological situations, neuroglial networking consequently results from a two-way interaction between astrocyte gap junction-mediated networks and those made by neurons. As both cell types are modulated by CNS drugs, we postulate that neuroglial networking may emerge as a new therapeutic target in neurological and psychiatric disorders.
Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity
Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio
2013-01-01
Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852
Intrinsic Neuronal Properties Switch the Mode of Information Transmission in Networks
Gjorgjieva, Julijana; Mease, Rebecca A.; Moody, William J.; Fairhall, Adrienne L.
2014-01-01
Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity. This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission. PMID:25474701
Intrinsic protective mechanisms of the neuron-glia network against glioma invasion.
Iwadate, Yasuo; Fukuda, Kazumasa; Matsutani, Tomoo; Saeki, Naokatsu
2016-04-01
Gliomas arising in the brain parenchyma infiltrate into the surrounding brain and break down established complex neuron-glia networks. However, mounting evidence suggests that initially the network microenvironment of the adult central nervous system (CNS) is innately non-permissive to glioma cell invasion. The main players are inhibitory molecules in CNS myelin, as well as proteoglycans associated with astrocytes. Neural stem cells, and neurons themselves, possess inhibitory functions against neighboring tumor cells. These mechanisms have evolved to protect the established neuron-glia network, which is necessary for brain function. Greater insight into the interaction between glioma cells and the surrounding neuron-glia network is crucial for developing new therapies for treating these devastating tumors while preserving the important and complex neural functions of patients. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan
2008-06-01
Many central pattern generating networks are influenced by synaptic input from modulatory projection neurons. The network response to a projection neuron is sometimes mimicked by bath applying the neuronally-released modulator, despite the absence of network interactions with the projection neuron. One interesting example occurs in the crab stomatogastric ganglion (STG), where bath applying the neuropeptide pyrokinin (PK) elicits a gastric mill rhythm which is similar to that elicited by the projection neuron modulatory commissural neuron 1 (MCN1), despite the absence of PK in MCN1 and the fact that MCN1 is not active during the PK-elicited rhythm. MCN1 terminals have fast and slow synaptic actions on the gastric mill network and are presynaptically inhibited by this network in the STG. These local connections are inactive in the PK-elicited rhythm, and the mechanism underlying this rhythm is unknown. We use mathematical and biophysically-realistic modeling to propose potential mechanisms by which PK can elicit a gastric mill rhythm that is similar to the MCN1-elicited rhythm. We analyze slow-wave network oscillations using simplified mathematical models and, in parallel, develop biophysically-realistic models that account for fast, action potential-driven oscillations and some spatial structure of the network neurons. Our results illustrate how the actions of bath-applied neuromodulators can mimic those of descending projection neurons through mathematically similar but physiologically distinct mechanisms.
Luccioli, Stefano; Ben-Jacob, Eshel; Barzilai, Ari; Bonifazi, Paolo; Torcini, Alessandro
2014-01-01
It has recently been discovered that single neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain in neuronal circuits, at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate population activity. PMID:25255443
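As a rough, self-contained illustration of the core ingredient behind such bursting dynamics, the sketch below (Python with NumPy) couples a single recurrently excited rate unit to a Tsodyks-Markram-style depression variable; all parameter values are assumptions chosen for demonstration and are not taken from the model above.

import numpy as np

# Minimal sketch: recurrent excitation plus slow synaptic-resource depletion,
# the ingredient pair that underlies burst generation in such network models.
rng = np.random.default_rng(0)
dt, T = 1.0, 5000.0                      # time step and duration (ms)
tau_r, tau_d, U = 10.0, 800.0, 0.5       # rate and resource-recovery time constants (assumed)
w, I0 = 8.0, -1.2                        # recurrent weight and background drive (assumed)
r, x = 0.0, 1.0                          # population rate (a.u.) and available resources
rates = np.zeros(int(T / dt))
for t in range(int(T / dt)):
    drive = w * U * x * r + I0 + 0.3 * rng.standard_normal()
    r += dt / tau_r * (-r + np.log1p(np.exp(drive)))    # softplus transfer function
    x += dt * ((1.0 - x) / tau_d - U * x * r / 1000.0)  # resources depleted by firing
    rates[t] = r
print("mean rate %.2f, peak rate %.2f (a.u.)" % (rates.mean(), rates.max()))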
A microfluidic platform for controlled biochemical stimulation of twin neuronal networks.
Biffi, Emilia; Piraino, Francesco; Pedrocchi, Alessandra; Fiore, Gianfranco B; Ferrigno, Giancarlo; Redaelli, Alberto; Menegon, Andrea; Rasponi, Marco
2012-06-01
Spatially and temporally resolved delivery of soluble factors is a key feature for pharmacological applications. In this framework, microfluidics coupled to multisite electrophysiology offers great advantages in neuropharmacology and toxicology. In this work, a microfluidic device for biochemical stimulation of neuronal networks was developed. A micro-chamber for cell culturing, previously developed and tested for long-term neuronal growth by our group, was provided with a thin wall, which partially divided the cell culture region into two sub-compartments. The device was reversibly coupled to a flat micro electrode array and used to culture primary neurons in the same microenvironment. We demonstrated that the two fluidically connected compartments gave rise to two parallel neuronal networks with similar electrophysiological activity that were nonetheless functionally independent. Furthermore, the device allowed the outlet port to be connected to a syringe pump, transforming the static culture chamber into a perfused one. At 14 days in vitro, sub-networks were independently stimulated with a test molecule, tetrodotoxin, a neurotoxin known to block action potentials, by means of continuous delivery. Electrical activity recordings demonstrated that this device configuration could selectively stimulate each neuronal network individually. The proposed microfluidic approach represents an innovative methodology to perform biological, pharmacological, and electrophysiological experiments on neuronal networks. Indeed, it allows for controlled delivery of substances to cells, and it overcomes the limitations of standard drug stimulation techniques. Finally, the twin network configuration reduces biological variability, which has important implications for pharmacological and drug screening.
The emergence of spontaneous activity in neuronal cultures
NASA Astrophysics Data System (ADS)
Orlandi, J. G.; Alvarez-Lacalle, E.; Teller, S.; Soriano, J.; Casademunt, J.
2013-01-01
In vitro neuronal networks of dissociated hippocampal or cortical tissues are one of the most attractive model systems for the physics and neuroscience communities. Cultured neurons grow and mature, develop axons and dendrites, and quickly connect to their neighbors to establish a spontaneously active network within a week. The resulting neuronal network is characterized by a combination of excitatory and inhibitory neurons coupled through synaptic connections that interact in a highly nonlinear manner. The nonlinear behavior emerges from the dynamics of both the neurons' spiking activity and synaptic transmission, together with biological noise. These ingredients give rise to a rich repertoire of phenomena that are still poorly understood, including the emergence and maintenance of periodic spontaneous activity, avalanches, propagation of fronts and synchronization. In this work we present an overview on the rich activity of cultured neuronal networks, and detail the minimal theoretical considerations needed to describe experimental observations.
Emergent properties of interacting populations of spiking neurons.
Cardanobile, Stefano; Rotter, Stefan
2011-01-01
Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known by their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
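To make the Lotka-Volterra correspondence concrete, here is a hedged sketch in Python/NumPy that integrates a generic two-population rate system of that form; the coupling matrix and growth terms are illustrative choices, not the equations derived in the work above.

import numpy as np

# Illustrative Lotka-Volterra-type rate equations for one excitatory (E)
# and one inhibitory (I) population: dr_i/dt = r_i * (b_i + sum_j A_ij r_j).
A = np.array([[-0.5, -1.0],    # E: self-limitation and inhibition from I
              [ 1.0, -0.5]])   # I: excitation from E and self-limitation
b = np.array([1.0, -0.2])      # baseline growth/decay terms (assumed)
r = np.array([0.1, 0.1])       # initial rates
dt = 0.01
for _ in range(5000):          # forward-Euler integration
    r = np.maximum(r + dt * r * (b + A @ r), 0.0)
print("steady-state rates (E, I):", np.round(r, 3))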
Importance of being Nernst: Synaptic activity and functional relevance in stem cell-derived neurons
Bradford, Aaron B; McNutt, Patrick M
2015-01-01
Functional synaptogenesis and network emergence are signature endpoints of neurogenesis. These behaviors provide higher-order confirmation that biochemical and cellular processes necessary for neurotransmitter release, post-synaptic detection and network propagation of neuronal activity have been properly expressed and coordinated among cells. The development of synaptic neurotransmission can therefore be considered a defining property of neurons. Although dissociated primary neuron cultures readily form functioning synapses and network behaviors in vitro, continuously cultured neurogenic cell lines have historically failed to meet these criteria. Therefore, in vitro-derived neuron models that develop synaptic transmission are critically needed for a wide array of studies, including molecular neuroscience, developmental neurogenesis, disease research and neurotoxicology. Over the last decade, neurons derived from various stem cell lines have shown varying ability to develop into functionally mature neurons. In this review, we will discuss the neurogenic potential of various stem cell populations, addressing strengths and weaknesses of each, with particular attention to the emergence of functional behaviors. We will propose methods to functionally characterize new stem cell-derived neuron (SCN) platforms to improve their reliability as physiologically relevant models. Finally, we will review how synaptically active SCNs can be applied to accelerate research in a variety of areas. Ultimately, emphasizing the critical importance of synaptic activity and network responses as a marker of neuronal maturation is anticipated to result in in vitro findings that better translate to efficacious clinical treatments. PMID:26240679
Experiments in clustered neuronal networks: A paradigm for complex modular dynamics
NASA Astrophysics Data System (ADS)
Teller, Sara; Soriano, Jordi
2016-06-01
Uncovering the interplay between activity and connectivity is one of the major challenges in neuroscience. To deepen the understanding of how a neuronal circuit shapes network dynamics, neuronal cultures have emerged as remarkable systems given their accessibility and easy manipulation. An attractive configuration of these in vitro systems consists of an ensemble of interconnected clusters of neurons. Using calcium fluorescence imaging to monitor spontaneous activity in these clustered neuronal networks, we were able to draw functional maps and reveal their topological features. We also observed that these networks exhibit hierarchical modular dynamics, in which clusters fire in small groups that shape characteristic communities in the network. The structure and stability of these communities are sensitive to chemical or physical action, and therefore their analysis may serve as a proxy for network health. Indeed, the combination of all these approaches is helping to develop models to quantify damage upon network degradation, with promising applications for the study of neurological disorders in vitro.
Identification of the connections in biologically inspired neural networks
NASA Technical Reports Server (NTRS)
Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.
1990-01-01
We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically-inspired artificial neural networks. That is, given the network's external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations if a solution exists. The method determines directly if there is a solution to a particular neural network problem. No training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This latter method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of the electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.
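The flavor of such an identification problem can be shown with a toy sketch (Python/NumPy): for a purely linear recurrent network with known external inputs, the connection weights follow from a least-squares solve over the observed activity. This simplified setup is only meant to illustrate the idea and is not the method developed in the work above.

import numpy as np

# Toy identification: recover W in x(t+1) = W x(t) + u(t) from observed
# activity x and known external inputs u, using least squares.
rng = np.random.default_rng(0)
n, T = 5, 200
W_true = rng.standard_normal((n, n))
W_true *= 0.8 / np.max(np.abs(np.linalg.eigvals(W_true)))   # keep dynamics stable
u = rng.standard_normal((T, n))
x = np.zeros((T + 1, n))
for t in range(T):
    x[t + 1] = W_true @ x[t] + u[t]
# x(t+1) - u(t) = W x(t): stack over time and solve for W row-wise
X, Y = x[:-1], x[1:] - u
W_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print("max identification error:", float(np.abs(W_hat - W_true).max()))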
Granger causality network reconstruction of conductance-based integrate-and-fire neuronal systems.
Zhou, Douglas; Xiao, Yanyang; Zhang, Yaoyu; Xu, Zhiqin; Cai, David
2014-01-01
Reconstruction of anatomical connectivity from measured dynamical activities of coupled neurons is one of the fundamental issues in understanding the structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiment. The Granger causality (GC) analysis remains one of the major approaches to explore the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, can be mapped to the underlying anatomical connectivity in neuronal networks. We perform GC analysis on conductance-based integrate-and-fire (I&F) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity amongst individual neurons or subnetworks can be successfully reconstructed by the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing the system and without prior knowledge of neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the raster of the system, i.e., the spike timing of the neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for the conductance-based I&F neuronal networks, and show that the GC is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of the GC analysis in other settings. PMID:24586285
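For readers unfamiliar with GC, the following hedged sketch (Python/NumPy) computes pairwise Granger causality between two surrogate voltage-like traces by comparing autoregressive residual variances; the model order and the synthetic coupling are assumptions for illustration, not the conductance-based I&F pipeline used above.

import numpy as np

# Pairwise Granger causality from y to x: compare the residual variance of
# an AR model of x using only its own past with one that also uses the past of y.
def granger(x, y, p=5):                      # p: AR model order (assumed)
    rows = range(p, len(x))
    X_own  = np.array([x[t - p:t][::-1] for t in rows])
    X_full = np.array([np.r_[x[t - p:t][::-1], y[t - p:t][::-1]] for t in rows])
    target = x[p:]
    res_own  = target - X_own  @ np.linalg.lstsq(X_own,  target, rcond=None)[0]
    res_full = target - X_full @ np.linalg.lstsq(X_full, target, rcond=None)[0]
    return np.log(res_own.var() / res_full.var())

rng = np.random.default_rng(1)
T = 2000
y = rng.standard_normal(T)                   # surrogate "presynaptic" trace
x = np.zeros(T)
for t in range(1, T):                        # x is driven by the past of y
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + 0.1 * rng.standard_normal()
print("GC y->x:", round(granger(x, y), 3), " GC x->y:", round(granger(y, x), 3))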
Tedesco, Mariateresa; Frega, Monica; Martinoia, Sergio; Pesce, Mattia; Massobrio, Paolo
2015-10-18
Currently, large-scale networks derived from dissociated neurons growing and developing in vitro on extracellular micro-transducer devices are the gold-standard experimental model to study basic neurophysiological mechanisms involved in the formation and maintenance of neuronal cell assemblies. However, in vitro studies have been limited to the recording of the electrophysiological activity generated by bi-dimensional (2D) neural networks. Nonetheless, given the intricate relationship between structure and dynamics, a significant improvement is necessary to investigate the formation and the developing dynamics of three-dimensional (3D) networks. In this work, a novel experimental platform in which 3D hippocampal or cortical networks are coupled to planar Micro-Electrode Arrays (MEAs) is presented. 3D networks are realized by seeding neurons in a scaffold composed of glass microbeads (30-40 µm in diameter) on which neurons are able to grow and form complex interconnected 3D assemblies. In this way, it is possible to design engineered 3D networks made up of 5-8 layers with an expected final cell density. The increasing complexity in the morphological organization of the 3D assembly induces an enhancement of the electrophysiological patterns displayed by this type of network. Compared with standard 2D networks, where highly stereotyped bursting activity emerges, the 3D structure alters the bursting activity in terms of duration and frequency, and it allows observation of more random spiking activity. In this sense, the developed 3D model more closely resembles in vivo neural networks. PMID:26554533
Extracting neuronal functional network dynamics via adaptive Granger causality analysis.
Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash
2018-04-24
Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
The relevance of network micro-structure for neural dynamics.
Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan
2013-01-01
The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.
Mäkinen, Meeri Eeva-Liisa; Ylä-Outinen, Laura; Narkilahti, Susanna
2018-01-01
The electrical activity of the brain arises from single neurons communicating with each other. However, how single neurons interact during early development to give rise to neural network activity remains poorly understood. We studied the emergence of synchronous neural activity in human pluripotent stem cell (hPSC)-derived neural networks simultaneously at the single-neuron and network levels. The contribution of gamma-aminobutyric acid (GABA) and gap junctions to the development of synchronous activity in hPSC-derived neural networks was studied with a GABA agonist and antagonist and by blocking gap junctional communication, respectively. We characterized the dynamics of the network-wide synchrony in hPSC-derived neural networks with high spatial resolution (calcium imaging) and high temporal resolution (microelectrode array, MEA). We found that the emergence of synchrony correlates with a decrease in very strong GABA excitation. However, the synchronous network was found to consist of a heterogeneous mixture of synchronously active cells with variable responses to GABA, GABA agonists and gap junction blockers. Furthermore, we show how single-cell distributions give rise to the network effect of GABA, GABA agonists and gap junction blockers. Finally, based on our observations, we suggest that the earliest form of synchronous neuronal activity depends on gap junctions and a decrease in GABA-induced depolarization but not on GABAA-mediated signaling. PMID:29559893
The application of the multi-alternative approach in active neural network models
NASA Astrophysics Data System (ADS)
Podvalny, S.; Vasiljev, E.
2017-02-01
The article addresses the construction of intelligent systems based on artificial neural networks. We discuss the main discrepancies between artificial neural networks and their biological prototypes. It is shown that the principal reason for these discrepancies is the structural immutability of neural network models during learning, that is, their passivity. Based on the modern understanding of the biological nervous system as a structured ensemble of nerve cells, we propose to abandon attempts to simulate its operation at the level of elementary neuron processes and instead to reproduce the information structure of data storage and processing on the basis of general evolutionary principles of multialternativity, i.e. multi-level structure, diversity and modularity. A method for implementing these principles is offered, using faceted memory organization in a neural network with a rearrangeable active structure. An example is given of the implementation of an active facet-type neural network in an intelligent decision-making system for handling critical events in an electrical distribution system.
Gilson, Matthieu; Burkitt, Anthony N; Grayden, David B; Thomas, Doreen A; van Hemmen, J Leo
2009-12-01
In neuronal networks, the changes of synaptic strength (or weight) performed by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this phenomenon occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyze the interplay between the neuronal activity (firing rates and the spike-time correlations) and the learning dynamics, when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead to both a stabilization of all the neuron firing rates (homeostatic equilibrium) and a robust weight specialization. The pattern of specialization for the recurrent weights is determined by a relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters and the synaptic response properties. We find conditions for feed-forward pathways or areas with strengthened self-feedback to emerge in an initially homogeneous recurrent network.
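A minimal pair-based STDP update of the kind analyzed above can be sketched as follows (Python/NumPy); the amplitudes, time constant and Poisson rates are assumed demonstration values, and the single plastic weight is driven by independent pre- and postsynaptic spike trains so that the slight depression bias dominates on average.

import numpy as np

# Pair-based additive STDP on one synapse, implemented with exponential
# spike traces; pre-before-post potentiates, post-before-pre depresses.
rng = np.random.default_rng(2)
dt, steps = 1.0, 100000                     # 1 ms steps, 100 s of simulated time
A_plus, A_minus, tau = 0.005, 0.00525, 20.0 # STDP amplitudes and time constant (assumed)
rate = 0.01                                 # 10 Hz Poisson firing, in spikes/ms
w, w_max = 0.5, 1.0
x_pre, x_post = 0.0, 0.0                    # exponential traces of recent spikes
for _ in range(steps):
    pre  = rng.random() < rate * dt
    post = rng.random() < rate * dt
    x_pre  += -dt / tau * x_pre  + (1.0 if pre  else 0.0)
    x_post += -dt / tau * x_post + (1.0 if post else 0.0)
    if post:
        w += A_plus * x_pre                 # potentiation by recent presynaptic spikes
    if pre:
        w -= A_minus * x_post               # depression by recent postsynaptic spikes
    w = min(max(w, 0.0), w_max)
print("final weight:", round(w, 3))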
Recent developments in VSD imaging of small neuronal networks
Hill, Evan S.; Bruno, Angela M.
2014-01-01
Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit mapping, network multifunctionality, the network basis of decision making, and the presence of variably participating neurons in networks. Analytical tools being developed and applied to large-scale VSD imaging data sets are discussed, and the future prospects for this exciting field are considered. PMID:25225295
Neural signal registration and analysis of axons grown in microchannels
NASA Astrophysics Data System (ADS)
Pigareva, Y.; Malishev, E.; Gladkov, A.; Kolpakov, V.; Bukatin, A.; Mukhina, I.; Kazantsev, V.; Pimashkin, A.
2016-08-01
Registration of neuronal bioelectrical signals remains one of the main physical tools to study fundamental mechanisms of signal processing in the brain. Neurons generate spiking patterns which propagate through the complex map of neural network connectivity. Extracellular recording of isolated axons grown in microchannels provides amplification of the signal for detailed study of spike propagation. In this study we used hippocampal neuronal cultures grown in microfluidic devices combined with microelectrode arrays to investigate changes in electrical activity during neural network development. We found that, at 5 days in vitro after plating, spiking activity appears first in the microchannels and, over the next 2-3 days, appears on the electrodes of the overall neural network. We conclude that such an approach provides a convenient method to study neural signal processing and functional structure development at the single-cell and network levels of the neuronal culture.
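A common way to quantify such propagation is to cross-correlate binned spike trains from a microchannel electrode and a downstream network electrode; the sketch below (Python/NumPy) does this on synthetic spike times, with the bin size, jitter and 5 ms true delay chosen purely for illustration.

import numpy as np

# Estimate the propagation delay between two electrodes as the lag that
# maximizes the cross-correlation of their binned spike trains.
rng = np.random.default_rng(3)
duration, bin_ms, true_delay = 60000, 1, 5                    # ms
src = np.sort(rng.uniform(0, duration, 2000))                 # microchannel spike times
net = src + true_delay + rng.normal(0.0, 0.5, src.size)       # delayed, jittered copies
edges = np.arange(0, duration + bin_ms, bin_ms)
a, _ = np.histogram(src, edges)
b, _ = np.histogram(net, edges)
max_lag = 20
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(a[max(0, -l):len(a) - max(0, l)],
                b[max(0, l):len(b) - max(0, -l)]) for l in lags]
print("estimated delay (ms):", int(lags[int(np.argmax(xcorr))]) * bin_ms)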
Tonomura, W; Moriguchi, H; Jimbo, Y; Konishi, S
2008-01-01
This paper describes an advanced Micro Channel Array (MCA) designed to record neuronal network activity at multiple points simultaneously. The developed MCA is intended for the kind of neuronal network analysis previously studied by the co-authors using a Micro Electrode Array (MEA) system. The MCA employs the principle of extracellular recording. The presented MCA has the following advantages. First of all, the electrodes integrated around individual micro channels are electrically isolated for parallel multipoint recording. Sucking and clamping of cells through micro channels is expected to improve the cellular selectivity and S/N ratio. In this study, hippocampal neurons were cultured on the developed MCA. As a result, spontaneous and evoked spike potentials could be recorded by sucking and clamping the cells at multiple points. Herein, we describe the successful experimental results together with the design and fabrication of the advanced MCA toward on-chip analysis of neuronal networks.
Unidirectional signal propagation in primary neurons micropatterned at a single-cell resolution
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Matsumura, R.; Takaoki, H.; Katsurabayashi, S.; Hirano-Iwata, A.; Niwano, M.
2016-07-01
The structure and connectivity of cultured neuronal networks can be controlled by using micropatterned surfaces. Here, we demonstrate that the direction of signal propagation can be precisely controlled at a single-cell resolution by growing primary neurons on micropatterns. To achieve this, we first examined the process by which axons develop and how synapses form in micropatterned primary neurons using immunocytochemistry. By aligning asymmetric micropatterns with a marginal gap, it was possible to pattern primary neurons with a directed polarization axis at the single-cell level. We then examined how synapses develop on micropatterned hippocampal neurons. Three types of micropatterns with different numbers of short paths for dendrite growth were compared. A normal development in synapse density was observed when micropatterns with three or more short paths were used. Finally, we performed double patch clamp recordings on micropatterned neurons to confirm that these synapses are indeed functional, and that the neuronal signal is transmitted unidirectionally in the intended orientation. This work provides a practical guideline for patterning single neurons to design functional neuronal networks in vitro with the direction of signal propagation being controlled.
High-Degree Neurons Feed Cortical Computations
Timme, Nicholas M.; Ito, Shinya; Shimono, Masanori; Yeh, Fang-Chin; Litke, Alan M.; Beggs, John M.
2016-01-01
Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network. PMID:27159884
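To give a flavor of the information-theoretic quantities involved, the sketch below (Python/NumPy) is a plug-in estimator of transfer entropy with history length 1 applied to synthetic binary spike trains; it is not the delay-optimized estimator or the partial information decomposition used on the 512-electrode recordings, and the generative coupling is an assumption.

import numpy as np

# Plug-in transfer entropy TE(Y -> X) with one time bin of history:
# TE = sum p(x',x,y) log2[ p(x'|x,y) / p(x'|x) ].
def transfer_entropy(x, y):
    p_xyz = np.zeros((2, 2, 2))
    for a, b, c in np.stack([x[1:], x[:-1], y[:-1]], axis=1):
        p_xyz[a, b, c] += 1
    p_xyz /= p_xyz.sum()
    p_xy = p_xyz.sum(axis=2)          # p(x', x)
    p_yz = p_xyz.sum(axis=0)          # p(x, y)
    p_x  = p_xyz.sum(axis=(0, 2))     # p(x)
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                if p_xyz[a, b, c] > 0:
                    te += p_xyz[a, b, c] * np.log2(
                        p_xyz[a, b, c] * p_x[b] / (p_xy[a, b] * p_yz[b, c]))
    return te

rng = np.random.default_rng(4)
n = 20000
y = (rng.random(n) < 0.1).astype(int)                # driver spike train
x = np.zeros(n, dtype=int)
for t in range(1, n):                                # x fires preferentially after y
    x[t] = int(rng.random() < (0.4 if y[t - 1] else 0.02))
print("TE(y->x):", round(transfer_entropy(x, y), 4), "bits;",
      "TE(x->y):", round(transfer_entropy(y, x), 4), "bits")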
Population equations for degree-heterogeneous neural networks
NASA Astrophysics Data System (ADS)
Kähne, M.; Sokolov, I. M.; Rüdiger, S.
2017-11-01
We develop a statistical framework for studying recurrent networks with broad distributions of the number of synaptic links per neuron. We treat each group of neurons with equal input degree as one population and derive a system of equations determining the population-averaged firing rates. The derivation rests on an assumption of a large number of neurons and, additionally, an assumption of a large number of synapses per neuron. For the case of binary neurons, analytical solutions can be constructed, which correspond to steps in the activity versus degree space. We apply this theory to networks with degree-correlated topology and show that complex, multi-stable regimes can result for increasing correlations. Our work is motivated by the recent finding of subnetworks of highly active neurons and the fact that these neurons tend to be connected to each other with higher probability.
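A hedged sketch of this kind of degree-resolved population equation is given below (Python/NumPy): each in-degree class has a rate set by a gain function of its summed input, and the mean activity is found by fixed-point iteration. The Poisson in-degree distribution, sigmoidal gain (a smooth stand-in for binary neurons) and parameter values are assumptions, and degree correlations are ignored.

import math
import numpy as np

# Degree-resolved self-consistent rate equations: r_k = Phi(J*k*m - theta),
# with the mean activity m = sum_k P(k) r_k determined self-consistently.
def phi(h, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * h))     # steeper beta -> closer to binary steps

k = np.arange(1, 61)
lam = 20.0                                     # mean in-degree (assumed Poisson)
p_k = np.array([math.exp(i * math.log(lam) - lam - math.lgamma(i + 1.0)) for i in k])
p_k /= p_k.sum()
J, theta = 0.05, 0.5                           # synaptic weight and threshold (assumed)
m = 0.1
for _ in range(200):                           # fixed-point iteration for m
    r_k = phi(J * k * m - theta)
    m = float(p_k @ r_k)
print("self-consistent mean activity m =", round(m, 3))
print("rate of k=5 vs k=40 classes:", round(float(phi(J * 5 * m - theta)), 3),
      round(float(phi(J * 40 * m - theta)), 3))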
NASA Astrophysics Data System (ADS)
Mohammed, Ali Ibrahim Ali
The understanding and treatment of brain disorders, as well as the development of intelligent machines, are hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have enabled neuroscientists to excite, inhibit and record defined neurons with unprecedented precision. The impressive sensitivity of currently available optogenetic sensors and actuators has now enabled the possibility of analyzing a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and better spatial resolution, approaching the Abbe diffraction limit of a fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse was performing a memory task to investigate how those individual neurons related to behavior. In addition, we tested our optical platform in investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in their neural networks. A small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to Parkinson's disease. We found that inhibition of the motor cortex does not alter the exaggerated beta oscillations in the striatum that are associated with parkinsonianism. Together, these results demonstrate the potential of developing integrated optogenetic systems to advance our understanding of the principles underlying neural network computation, which would have broad applications from advancing artificial intelligence to disease diagnosis and treatment.
Egorov, Alexei V; Draguhn, Andreas
2013-01-01
Many mammals are born in a very immature state and develop their rich repertoire of behavioral and cognitive functions postnatally. This development goes in parallel with changes in the anatomical and functional organization of cortical structures which are involved in most complex activities. The emerging spatiotemporal activity patterns in multi-neuronal cortical networks may indeed form a direct neuronal correlate of systemic functions like perception, sensorimotor integration, decision making or memory formation. During recent years, several studies--mostly in rodents--have shed light on the ontogenesis of such highly organized patterns of network activity. While each local network has its own peculiar properties, some general rules can be derived. We therefore review and compare data from the developing hippocampus, neocortex and--as an intermediate region--entorhinal cortex. All cortices seem to follow a characteristic sequence starting with uncorrelated activity in uncoupled single neurons where transient activity seems to have mostly trophic effects. In rodents, before and shortly after birth, cortical networks develop weakly coordinated multineuronal discharges which have been termed synchronous plateau assemblies (SPAs). While these patterns rely mostly on electrical coupling by gap junctions, the subsequent increase in number and maturation of chemical synapses leads to the generation of large-scale coherent discharges. These patterns have been termed giant depolarizing potentials (GDPs) for predominantly GABA-induced events or early network oscillations (ENOs) for mostly glutamatergic bursts, respectively. During the third to fourth postnatal week, cortical areas reach their final activity patterns with distinct network oscillations and highly specific neuronal discharge sequences which support adult behavior. While some of the mechanisms underlying maturation of network activity have been elucidated much work remains to be done in order to fully understand the rules governing transition from immature to mature patterns of network activity. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
A novel enteric neuron-glia coculture system reveals the role of glia in neuronal development.
Le Berre-Scoul, Catherine; Chevalier, Julien; Oleynikova, Elena; Cossais, François; Talon, Sophie; Neunlist, Michel; Boudin, Hélène
2017-01-15
Unlike astrocytes in the brain, the potential role of enteric glial cells (EGCs) in the formation of the enteric neuronal circuit is currently unknown. To examine the role of EGCs in the formation of the neuronal network, we developed a novel neuron-enriched culture model from embryonic rat intestine grown in indirect coculture with EGCs. We found that EGCs shape axonal complexity and synapse density in enteric neurons, through purinergic- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor regulating neuronal network maturation. In the nervous system, the formation of neuronal circuitry results from a complex and coordinated action of intrinsic and extrinsic factors. In the CNS, extrinsic mediators derived from astrocytes have been shown to play a key role in neuronal maturation, including dendritic shaping, axon guidance and synaptogenesis. In the enteric nervous system (ENS), the potential role of enteric glial cells (EGCs) in the maturation of developing enteric neuronal circuit is currently unknown. A major obstacle in addressing this question is the difficulty in obtaining a valuable experimental model in which enteric neurons could be isolated and maintained without EGCs. We adapted a cell culture method previously developed for CNS neurons to establish a neuron-enriched primary culture from embryonic rat intestine which was cultured in indirect coculture with EGCs. We demonstrated that enteric neurons grown in such conditions showed several structural, phenotypic and functional hallmarks of proper development and maturation. However, when neurons were grown without EGCs, the complexity of the axonal arbour and the density of synapses were markedly reduced, suggesting that glial-derived factors contribute strongly to the formation of the neuronal circuitry. We found that these effects played by EGCs were mediated in part through purinergic P2Y 1 receptor- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor required for neuronal network maturation. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Li, Wen-Chang; Cooke, Tom; Sautois, Bart; Soffe, Stephen R; Borisyuk, Roman; Roberts, Alan
2007-09-10
How specific are the synaptic connections formed as neuronal networks develop and can simple rules account for the formation of functioning circuits? These questions are assessed in the spinal circuits controlling swimming in hatchling frog tadpoles. This is possible because detailed information is now available on the identity and synaptic connections of the main types of neuron. The probabilities of synapses between 7 types of identified spinal neuron were measured directly by making electrical recordings from 500 pairs of neurons. For the same neuron types, the dorso-ventral distributions of axons and dendrites were measured and then used to calculate the probabilities that axons would encounter particular dendrites and so potentially form synaptic connections. Surprisingly, synapses were found between all types of neuron but contact probabilities could be predicted simply by the anatomical overlap of their axons and dendrites. These results suggested that synapse formation may not require axons to recognise specific, correct dendrites. To test the plausibility of simpler hypotheses, we first made computational models that were able to generate longitudinal axon growth paths and reproduce the axon distribution patterns and synaptic contact probabilities found in the spinal cord. To test if probabilistic rules could produce functioning spinal networks, we then made realistic computational models of spinal cord neurons, giving them established cell-specific properties and connecting them into networks using the contact probabilities we had determined. A majority of these networks produced robust swimming activity. Simple factors such as morphogen gradients controlling dorso-ventral soma, dendrite and axon positions may sufficiently constrain the synaptic connections made between different types of neuron as the spinal cord first develops and allow functional networks to form. Our analysis implies that detailed cellular recognition between spinal neuron types may not be necessary for the reliable formation of functional networks to generate early behaviour like swimming.
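The overlap argument can be sketched numerically (Python/NumPy): given dorso-ventral density profiles for the axons of one cell type and the dendrites of others, the relative probability of potential contact is taken proportional to the overlap integral of the profiles. The Gaussian profiles and their parameters below are hypothetical, not the measured tadpole distributions.

import numpy as np

# Relative contact probabilities from the overlap of dorso-ventral
# axon and dendrite density profiles.
dv = np.linspace(0.0, 1.0, 500)                  # normalized dorso-ventral position
dx = dv[1] - dv[0]

def profile(center, width):
    p = np.exp(-0.5 * ((dv - center) / width) ** 2)
    return p / (p.sum() * dx)                    # normalize to unit area

axon_A = profile(center=0.35, width=0.15)        # hypothetical axon profile of type A
dend_B = profile(center=0.40, width=0.20)        # hypothetical dendrite profiles
dend_C = profile(center=0.80, width=0.10)
overlap_AB = (axon_A * dend_B).sum() * dx        # overlap integrals
overlap_AC = (axon_A * dend_C).sum() * dx
total = overlap_AB + overlap_AC
print("relative contact probability A->B:", round(overlap_AB / total, 2))
print("relative contact probability A->C:", round(overlap_AC / total, 2))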
Shimba, Kenta; Sakai, Koji; Takayama, Yuzo; Kotani, Kiyoshi; Jimbo, Yasuhiko
2015-10-01
Stem cell transplantation is a promising therapy to treat neurodegenerative disorders, and a number of in vitro models have been developed for studying interactions between grafted neurons and the host neuronal network to promote drug discovery. However, methods capable of evaluating the process by which stem cells integrate into the host neuronal network are lacking. In this study, we applied an axonal conduction-based analysis to a co-culture study of primary and differentiated neurons. Mouse cortical neurons and neuronal cells differentiated from P19 embryonal carcinoma cells, a model for early neural differentiation of pluripotent stem cells, were co-cultured in a microfabricated device. The somata of these cells were separated by the co-culture device, but their axons were able to elongate through microtunnels and then form synaptic contacts. Propagating action potentials were recorded from these axons by microelectrodes embedded at the bottom of the microtunnels and sorted into clusters representing individual axons. While the number of axons of cortical neurons increased until 14 days in vitro and then decreased, those of P19 neurons increased throughout the culture period. Network burst analysis showed that P19 neurons participated in approximately 80% of the bursting activity after 14 days in vitro. Interestingly, the axonal conduction delay of P19 neurons was significantly greater than that of cortical neurons, suggesting that there are some physiological differences in their axons. These results suggest that our method is feasible to evaluate the process by which stem cell-derived neurons integrate into a host neuronal network.
Network synchronization in hippocampal neurons.
Penn, Yaron; Segal, Menahem; Moses, Elisha
2016-03-22
Oscillatory activity is widespread in dynamic neuronal networks. The main paradigm for the origin of periodicity consists of specialized pacemaking elements that synchronize and drive the rest of the network; however, other models exist. Here, we studied the spontaneous emergence of synchronized periodic bursting in a network of cultured dissociated neurons from rat hippocampus and cortex. Surprisingly, about 60% of all active neurons were self-sustained oscillators when disconnected, each with its own natural frequency. The individual neuron's tendency to oscillate and the corresponding oscillation frequency are controlled by its excitability. The single neuron intrinsic oscillations were blocked by riluzole, and are thus dependent on persistent sodium leak currents. Upon a gradual retrieval of connectivity, the synchrony evolves: Loose synchrony appears already at weak connectivity, with the oscillators converging to one common oscillation frequency, yet shifted in phase across the population. Further strengthening of the connectivity causes a reduction in the mean phase shifts until zero-lag is achieved, manifested by synchronous periodic network bursts. Interestingly, the frequency of network bursting matches the average of the intrinsic frequencies. Overall, the network behaves like other universal systems, where order emerges spontaneously by entrainment of independent rhythmic units. Although simplified with respect to circuitry in the brain, our results attribute a basic functional role for intrinsic single neuron excitability mechanisms in driving the network's activity and dynamics, contributing to our understanding of developing neural circuits.
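The entrainment scenario can be caricatured with coupled phase oscillators (Python/NumPy): units with different natural frequencies synchronize progressively as the coupling grows, and the common rhythm sits near the mean intrinsic frequency. This Kuramoto-style toy is only an analogy for the cultured network, and all parameters are assumptions.

import numpy as np

# Kuramoto-style phase oscillators: synchrony grows with coupling strength K.
rng = np.random.default_rng(5)
N, dt, steps = 50, 0.01, 20000
omega = 2 * np.pi * rng.normal(0.5, 0.1, N)          # intrinsic frequencies around 0.5 Hz
for K in (0.0, 0.5, 3.0):
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(steps):
        coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + coupling)
    R = np.abs(np.mean(np.exp(1j * theta)))          # order parameter: 0 = incoherent, 1 = zero-lag
    print("K = %.1f: synchrony R = %.2f" % (K, R))
print("mean intrinsic frequency (Hz):", round(float(np.mean(omega)) / (2 * np.pi), 3))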
Quantitative 3D investigation of Neuronal network in mouse spinal cord model
NASA Astrophysics Data System (ADS)
Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.
2017-01-01
The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.
McLean, David L; Fetcho, Joseph R
2009-10-28
Studies of neuronal networks have revealed few general principles that link patterns of development with later functional roles. While investigating the neural control of movements, we recently discovered a topographic map in the spinal cord of larval zebrafish that relates the position of motoneurons and interneurons to their order of recruitment during swimming. Here, we show that the map reflects an orderly pattern of differentiation of neurons driving different movements. First, we use high-speed filming to show that large-amplitude swimming movements with bending along much of the body appear first, with smaller, regional swimming movements emerging later. Next, using whole-cell patch recordings, we demonstrate that the excitatory circuits that drive large-amplitude, fast swimming movements at larval stages are present and functional early on in embryos. Finally, we systematically assess the orderly emergence of spinal circuits according to swimming speed using transgenic fish expressing the photoconvertible protein Kaede to track neuronal differentiation in vivo. We conclude that a simple principle governs the development of spinal networks in which the neurons driving the fastest, most powerful swimming in larvae develop first with ones that drive increasingly weaker and slower larval movements layered on over time. Because the neurons are arranged by time of differentiation in the spinal cord, the result is a topographic map that represents the speed/strength of movements at which neurons are recruited and the temporal emergence of networks. This pattern may represent a general feature of neuronal network development throughout the brain and spinal cord.
Angulo-Garcia, David; Berke, Joshua D; Torcini, Alessandro
2016-02-01
Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices. In particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell assembly switching, that effectively discriminates similar inputs. Our results support the proposal that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.
The formation and distribution of hippocampal synapses on patterned neuronal networks
NASA Astrophysics Data System (ADS)
Dowell-Mesfin, Natalie M.
Communication within the central nervous system is highly orchestrated, with neurons forming trillions of specialized junctions called synapses. In vivo, biochemical and topographical cues can regulate neuronal growth. Biochemical cues also influence synaptogenesis and synaptic plasticity. The effects of topography on the development of synapses have been less studied. In vitro, neuronal growth is unorganized and complex, making it difficult to study the development of networks. Patterned topographical cues guide and control the growth of neuronal processes (axons and dendrites) into organized networks. The aim of this dissertation was to determine if patterned topographical cues can influence synapse formation and distribution. Standard fabrication and compression molding procedures were used to produce silicon masters and polystyrene replicas with topographical cues presented as 1 µm high pillars with diameters of 0.5 and 2.0 µm and gaps of 1.0 to 5.0 µm. Embryonic rat hippocampal neurons were grown on the patterned surfaces. A developmental analysis with immunocytochemistry was used to assess the distribution of pre- and post-synaptic proteins. Activity-dependent pre-synaptic vesicle uptake using functional imaging dyes was also performed. Adaptive filtering computer algorithms identified synapses by segmenting juxtaposed pairs of pre- and post-synaptic labels. Synapse number and area were automatically extracted from each deconvolved data set. In addition, neuronal processes were traced automatically to assess changes in synapse distribution. The results of these experiments demonstrated that patterned topographic cues can induce organized and functional neuronal networks that can serve as models for the study of synapse formation and plasticity as well as for the development of neuroprosthetic devices.
A Markovian event-based framework for stochastic spiking neural networks.
Touboul, Jonathan D; Faugeras, Olivier D
2011-11-01
In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one based on the computation of the spike times. We show that the firing times of the neurons in the networks constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived in such classical cases of neural networks as the linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
Neural electrical activity and neural network growth.
Gafarov, F M
2018-05-01
The development of the central and peripheral nervous systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of recent experimental results show that neuronal electrical activity also plays an important role in establishing initial interneuronal connections. These processes are nevertheless difficult to study experimentally, owing to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore how developing connectivity and activity patterns relate. The model developed in this work will allow new experimental techniques to be devised for studying and quantifying the influence of neuronal activity on growth processes in neural networks, and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.
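As a rough illustration of a closed-loop, activity-dependent growth rule of this general kind (not the specific model of this work), the toy script below lets each neuron's outgrowth radius expand when its firing rate is below a homeostatic set-point and retract when above, while connectivity is read off from the overlap of the outgrowth fields. All equations, names and parameter values here are invented for the example.

```python
import numpy as np

# Toy closed-loop outgrowth model: connection strength grows with the overlap
# of circular neurite fields, firing rates follow the resulting coupling, and
# each radius is adjusted homeostatically toward a target rate.

rng = np.random.default_rng(1)
N = 30
pos = rng.uniform(0.0, 1.0, size=(N, 2))     # positions in a unit square
radius = np.full(N, 0.05)                     # initial neurite field radii
rate = np.zeros(N)                            # firing rates

TARGET, EPS_R, TAU_A, DT, STEPS = 0.5, 0.002, 5.0, 0.1, 2000
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

for step in range(STEPS):
    # connectivity: overlap of circular fields (sum of radii minus distance,
    # clipped at zero), no self-connections
    overlap = np.clip(radius[:, None] + radius[None, :] - dist, 0.0, None)
    np.fill_diagonal(overlap, 0.0)
    # firing-rate dynamics driven by total synaptic input plus weak baseline
    drive = overlap @ rate + 0.1
    rate += DT / TAU_A * (-rate + sigmoid(drive))
    # homeostatic outgrowth: grow when activity is below target, retract above
    radius = np.clip(radius + DT * EPS_R * (TARGET - rate), 0.0, 0.5)

print("mean rate:", rate.mean().round(3), "mean radius:", radius.mean().round(3))
```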
Optoelectronic Integrated Circuits For Neural Networks
NASA Technical Reports Server (NTRS)
Psaltis, D.; Katz, J.; Kim, Jae-Hoon; Lin, S. H.; Nouhi, A.
1990-01-01
Many threshold devices placed on single substrate. Integrated circuits containing optoelectronic threshold elements developed for use as planar arrays of artificial neurons in research on neural-network computers. Mounted with volume holograms recorded in photorefractive crystals serving as dense arrays of variable interconnections between neurons.
Saniotis, Arthur; Henneberg, Maciej; Sawalma, Abdul-Rahman
2018-01-01
Recent neuroscientific research demonstrates that the human brain is becoming altered by technological devices. Improvements in biotechnologies and computer-based technologies are now increasing the likelihood that brain augmentation devices will be developed in the next 20 years. We have developed the idea of an "endomycorrhizae-like interface" (ELI) nanocognitive device as a new kind of future neuroprosthetic which aims to support neuronal network properties in individuals with neurodegenerative disorders. The design of our ELI may overcome the problems of invasive neuroprosthetics: post-operative inflammation, infection and neuroprosthetic degradation. The manner in which our ELI is connected and integrated with neuronal networks is based on a mechanism similar to endomycorrhizae, the oldest and most widespread form of plant symbiosis. We propose that the principle of endomycorrhizae could be relevant for developing a crossing point between the ELI and neuronal networks. Similar to endomycorrhizae, the ELI will be designed to form webs, each of which connects multiple neurons together. The ELI will sense action potentials and deliver them to the neurons it connects to. This is expected to compensate for neuronal loss in some neurodegenerative disorders, such as Alzheimer's disease and Parkinson's disease.
A distance constrained synaptic plasticity model of C. elegans neuronal network
NASA Astrophysics Data System (ADS)
Badhwar, Rahul; Bagler, Ganesh
2017-03-01
Brain research has been driven by enquiry into the principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The identified 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs and the observed number of driver neurons. The distance-constrained model suggests optimal long-distance synaptic connections as a key feature specifying control of the network.
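One common way to count driver nodes in a directed network, which may differ from the exact procedure used in this study, is via structural controllability: the minimum number of driver nodes equals the number of nodes left unmatched by a maximum matching in the bipartite out-copy/in-copy representation of the graph. The sketch below implements that calculation with networkx on a made-up toy wiring diagram.

```python
import networkx as nx

# Driver-node identification via structural controllability: build the
# bipartite graph whose left side holds "out" copies and right side holds
# "in" copies of every node, find a maximum matching, and treat nodes whose
# "in" copy is unmatched as driver nodes.  Generic sketch, toy edge list.

def driver_nodes(G):
    B = nx.Graph()
    out_nodes = [("out", u) for u in G.nodes()]
    in_nodes = [("in", v) for v in G.nodes()]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from(in_nodes, bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())
    matching = nx.algorithms.bipartite.maximum_matching(B, top_nodes=out_nodes)
    matched_in = {v for v in matching if v[0] == "in"}
    drivers = {v for _, v in in_nodes if ("in", v) not in matched_in}
    # a fully matched network still needs at least one driver node
    return drivers if drivers else {next(iter(G.nodes()))}

# toy directed network standing in for a (tiny) neuronal wiring diagram
G = nx.DiGraph([("s1", "i1"), ("s2", "i1"), ("i1", "m1"), ("i1", "m2"), ("m1", "m2")])
print("driver neurons:", driver_nodes(G))
```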
Tonomura, Wataru; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Konishi, Satoshi
2010-08-01
This paper describes an advanced Micro Channel Array (MCA) for recording electrophysiological signals of neuronal networks at multiple points simultaneously. The developed MCA is designed for neuronal network analysis which has been studied by the co-authors using the Micro Electrode Arrays (MEA) system, and employs the principles of extracellular recordings. A prerequisite for extracellular recordings with good signal-to-noise ratio is a tight contact between cells and electrodes. The MCA described herein has the following advantages. The electrodes integrated around individual micro channels are electrically isolated to enable parallel multipoint recording. Reliable clamping of a targeted cell through micro channels is expected to improve the cellular selectivity and the attachment between the cell and the electrode toward steady electrophysiological recordings. We cultured hippocampal neurons on the developed MCA. As a result, the spontaneous and evoked spike potentials could be recorded by sucking and clamping the cells at multiple points. In this paper, we describe the design and fabrication of the MCA and the successful electrophysiological recordings leading to the development of an effective cellular network analysis device.
NASA Astrophysics Data System (ADS)
Wu, Xinyi; Ma, Jun; Li, Fan; Jia, Ya
2013-12-01
Some experimental evidence shows that spiral waves can be observed in the cortex of the brain, and that the propagation of such spiral waves plays an important role in signal communication, acting as a pacemaker. The profile of a spiral wave generated numerically is often perfect, whereas the profile observed in experiments is neither perfect nor smooth. In this paper, the formation and development of spiral waves in a regular network of Morris-Lecar neurons, in which neurons are placed uniformly on the nodes of a two-dimensional array and each node is coupled to its nearest neighbors, are investigated by considering the effects of stochastic ion-channel poisoning and channel noise. The formation and selection of spiral waves proceed as follows. (1) External forcing currents with diversity are imposed on neurons in the network of excitatory neurons with nearest-neighbor connections; a target-like wave emerges and its potential mechanism is discussed. (2) Artificial defects and a local poisoned area are placed in the network to induce new waves that interact with the target wave. (3) A spiral wave can be induced to occupy the network when the target wave is blocked by the artificial defects or by a poisoned area with regular border lines. (4) A stochastic poisoning effect is introduced by randomly modifying the border lines (areas) of specific regions in the network. It is found that spiral waves can also develop to occupy the network under an appropriate poisoning ratio. The growth process of the poisoned area is measured, and the effect of channel noise is also investigated. It is confirmed that a perfect spiral wave emerges in the network under gradient poisoning even when channel noise is considered.
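For readers who want to experiment with this class of model, the following is a compact sketch of a two-dimensional regular network of Morris-Lecar neurons with nearest-neighbour diffusive coupling and a locally "poisoned" patch implemented as a reduced calcium conductance. The parameter values are a standard illustrative Morris-Lecar set with periodic boundaries, not the exact configuration used in the study.

```python
import numpy as np

# 2D grid of Morris-Lecar neurons, Euler integration, diffusive coupling.
# A patch with reduced gCa stands in for partial ion-channel poisoning, and a
# more strongly driven edge acts as a pacemaker that emits waves into the bulk.

L = 100                                    # grid size (L x L neurons)
C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0       # capacitance and conductances
VL, VCa, VK = -60.0, 120.0, -84.0          # reversal potentials (mV)
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
D, dt, steps = 1.0, 0.05, 4000             # coupling, time step (ms), steps

V = np.full((L, L), -60.0)
w = np.zeros((L, L))
I_ext = np.full((L, L), 80.0)
I_ext[:, :3] += 12.0                       # stronger drive at one edge (pacemaker)

gCa_map = np.full((L, L), gCa)
gCa_map[40:60, 40:60] *= 0.3               # "poisoned" patch: partial Ca-channel block

def laplacian(x):
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

for step in range(steps):
    m_inf = 0.5 * (1.0 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1.0 + np.tanh((V - V3) / V4))
    lam = phi * np.cosh((V - V3) / (2.0 * V4))
    I_ion = gL * (V - VL) + gCa_map * m_inf * (V - VCa) + gK * w * (V - VK)
    V += dt * (I_ext - I_ion + D * laplacian(V)) / C
    w += dt * lam * (w_inf - w)

print("membrane potential range after run: %.1f to %.1f mV" % (V.min(), V.max()))
```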
Effects of Morphology Constraint on Electrophysiological Properties of Cortical Neurons
NASA Astrophysics Data System (ADS)
Zhu, Geng; Du, Liping; Jin, Lei; Offenhäusser, Andreas
2016-04-01
There is growing interest in engineering nerve cells in vitro to control the architecture and connectivity of cultured neuronal networks or to build neuronal networks with predictable computational function. Pattern technologies, such as micro-contact printing, have been developed to design ordered neuronal networks. However, the electrophysiological characteristics of single patterned neurons have not been reported. Here, micro-contact printing, using high-resolution polyolefine polymer (POP) stamps, was employed to grow cortical neurons in a designed structure. The results demonstrated that the morphology of patterned neurons was well constrained and that the number of dendrites decreased to about two. Our electrophysiological results showed that alterations in dendritic morphology affected neuronal firing patterns and excitability. When stimulated by current injection, both patterned and un-patterned neurons showed regular spiking, but the dynamics and strength of the response differed. The un-patterned neurons exhibited a monotonically increasing firing frequency in response to injected current, while the patterned neurons first exhibited a frequency increase and then a slow decrease. Our findings indicate that a decrease in the dendritic complexity of cortical neurons influences their electrophysiological characteristics and alters their information processing, which should be considered when designing neuronal circuitries.
Ratas, Irmantas; Pyragas, Kestutis
2016-09-01
We analyze the dynamics of a large network of coupled quadratic integrate-and-fire neurons, which represent the canonical model for class I neurons near the spiking threshold. The network is heterogeneous in that it includes both inherently spiking and excitable neurons. The coupling is global via synapses that take into account the finite width of synaptic pulses. Using a recently developed reduction method based on the Lorentzian ansatz, we derive a closed system of equations for the population firing rate and the mean membrane potential, which are exact in the infinite-size limit. The bifurcation analysis of the reduced equations reveals a rich scenario of asymptotic behavior, the most interesting of which is the emergence of macroscopic limit-cycle oscillations. It is shown that the finite width of synaptic pulses is a necessary condition for the existence of such oscillations. The robustness of the oscillations against aging damage, which transforms spiking neurons into nonspiking neurons, is analyzed. The validity of the reduced equations is confirmed by comparing their solutions with the solutions of microscopic equations for the finite-size networks.
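The flavour of such a Lorentzian-ansatz reduction can be seen in the widely used instantaneous-synapse form of the firing-rate equations for a globally coupled quadratic integrate-and-fire population; the sketch below integrates those two equations. This simplified form is an assumption for illustration: the paper's reduction additionally includes the finite width of the synaptic pulses, which is what permits the macroscopic limit-cycle oscillations, and the parameter values here are arbitrary.

```python
import numpy as np

# Mean-field ("Lorentzian ansatz") equations for a globally coupled QIF
# population with a Lorentzian distribution of excitabilities (centre eta_bar,
# half-width delta) and instantaneous synaptic coupling of strength J.

tau, delta, eta_bar, J = 1.0, 0.3, 1.0, 8.0
dt, steps = 1e-3, 40000

r, v = 0.1, -1.0                          # population rate and mean potential
trace = []
for _ in range(steps):
    dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
    dv = (v * v + eta_bar + J * tau * r - (np.pi * tau * r) ** 2) / tau
    r, v = r + dt * dr, v + dt * dv
    trace.append(r)

# with instantaneous synapses the dynamics settle onto a fixed point
print("steady-state firing rate (approx.):", round(np.mean(trace[-1000:]), 4))
```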
Matrix stiffness modulates formation and activity of neuronal networks of controlled architectures.
Lantoine, Joséphine; Grevesse, Thomas; Villers, Agnès; Delhaye, Geoffrey; Mestdagh, Camille; Versaevel, Marie; Mohammed, Danahe; Bruyère, Céline; Alaimo, Laura; Lacour, Stéphanie P; Ris, Laurence; Gabriele, Sylvain
2016-05-01
The ability to easily construct in vitro networks of primary neurons with imposed topologies is required for neural tissue engineering as well as for the development of neuronal interfaces with desirable characteristics. However, accumulating evidence suggests that the mechanical properties of the culture matrix can modulate important neuronal functions such as growth, extension, branching and activity. Here we designed robust and reproducible laminin-polylysine grid micropatterns on cell culture substrates that have similar biochemical properties but a 100-fold difference in Young's modulus to investigate the role of matrix rigidity in the formation and activity of cortical neuronal networks. We found that cell bodies of primary cortical neurons gradually accumulate in circular islands, whereas axonal extensions spread on linear tracks to connect circular islands. Our findings indicate that migration of cortical neurons is enhanced on soft substrates, leading to a faster formation of neuronal networks. Furthermore, the pre-synaptic density was two times higher on stiff substrates, and consistently, the numbers of action potentials and miniature synaptic currents were enhanced on stiff substrates. Taken together, our results provide compelling evidence that matrix stiffness is a key parameter modulating the growth dynamics, synaptic density and electrophysiological activity of cortical neuronal networks, thus providing useful information on scaffold design for neural tissue engineering. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing paradigm based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deep quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQN and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series predictions. Two application studies are presented, covering chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kanagasabapathi, Thirukumaran T.; Massobrio, Paolo; Barone, Rocco Andrea; Tedesco, Mariateresa; Martinoia, Sergio; Wadman, Wytse J.; Decré, Michel M. J.
2012-06-01
Co-cultures containing dissociated cortical and thalamic cells may provide a unique model for understanding the pathophysiology in the respective neuronal sub-circuitry. In addition, developing an in vitro dissociated co-culture model offers the possibility of studying the system without influence from other neuronal sub-populations. Here we demonstrate a dual compartment system coupled to microelectrode arrays (MEAs) for co-culturing and recording spontaneous activities from neuronal sub-populations. Propagation of electrical activity between the cortical and thalamic regions and its dependence on connectivity are verified by means of a cross-correlation algorithm. We found that burst events originate in the cortical region and drive the bursting behavior of the entire cortical-thalamic network, while mutually weak thalamic connections play a relevant role in sustaining longer burst events in cortical cells. To support these experimental findings, a neuronal network model was developed and used to investigate the interplay between network dynamics and connectivity in the cortical-thalamic system.
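A generic way to quantify such cortical-thalamic interdependence, not necessarily the exact algorithm used here, is a cross-correlogram between the binned spike trains of the two populations; a positive peak lag indicates that one population tends to lead the other. A minimal sketch with made-up spike times:

```python
import numpy as np

# Cross-correlogram of two spike-time lists after binning and mean removal.

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0):
    """Return lags (ms) and normalized correlation between two spike trains."""
    t_max = max(max(spikes_a), max(spikes_b))
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    a, _ = np.histogram(spikes_a, edges)
    b, _ = np.histogram(spikes_b, edges)
    a = a - a.mean()
    b = b - b.mean()
    max_lag = int(max_lag_ms / bin_ms)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(a[max(0, -k):len(a) - max(0, k)],
                          b[max(0, k):len(b) - max(0, -k)]) for k in lags])
    norm = np.sqrt(np.dot(a, a) * np.dot(b, b)) or 1.0
    return lags * bin_ms, cc / norm

# toy data: population B lags population A by roughly 5 ms
rng = np.random.default_rng(0)
spk_a = np.sort(rng.uniform(0, 10000, 2000))
spk_b = np.sort(np.concatenate([spk_a[::2] + 5.0, rng.uniform(0, 10000, 500)]))
lags, cc = cross_correlogram(spk_a, spk_b)
print("peak lag (ms):", lags[int(np.argmax(cc))])
```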
A patterned recombinant human IgM guides neurite outgrowth of CNS neurons
Xu, Xiaohua; Wittenberg, Nathan J.; Jordan, Luke R.; Kumar, Shailabh; Watzlawik, Jens O.; Warrington, Arthur E.; Oh, Sang-Hyun; Rodriguez, Moses
2013-01-01
Matrix molecules convey biochemical and physical guiding signals to neurons in the central nervous system (CNS) and shape the trajectory of neuronal fibers that constitute neural networks. We have developed recombinant human IgMs that bind to epitopes on neural cells, with the aim of treating neurological diseases. Here we test the hypothesis that recombinant human IgMs (rHIgM) can guide neurite outgrowth of CNS neurons. Microcontact printing was employed to pattern rHIgM12 and rHIgM22, antibodies that were bioengineered to have variable regions capable of binding to neurons or oligodendrocytes, respectively. rHIgM12 promoted neuronal attachment and guided outgrowth of neurites from hippocampal neurons. Processes from spinal neurons followed grid patterns of rHIgM12 and formed a physical network. Comparison between rHIgM12 and rHIgM22 suggested that the biochemistry that facilitates anchoring to the neuronal surface is a prerequisite for the function of the IgM, and that spatial properties cooperate in guiding the assembly of neuronal networks. PMID:23881231
The many faces of REST oversee epigenetic programming of neuronal genes.
Ballas, Nurit; Mandel, Gail
2005-10-01
Nervous system development relies on a complex signaling network to engineer the orderly transitions that lead to the acquisition of a neural cell fate. Progression from the non-neuronal pluripotent stem cell to a restricted neural lineage is characterized by distinct patterns of gene expression, particularly the restriction of neuronal gene expression to neurons. Concurrently, cells outside the nervous system acquire and maintain a non-neuronal fate that permanently excludes expression of neuronal genes. Studies of the transcriptional repressor REST, which regulates a large network of neuronal genes, provide a paradigm for elucidating the link between epigenetic mechanisms and neurogenesis. REST orchestrates a set of epigenetic modifications that are distinct between non-neuronal cells that give rise to neurons and those that are destined to remain as nervous system outsiders.
Characterization of emergent synaptic topologies in noisy neural networks
NASA Astrophysics Data System (ADS)
Miller, Aaron James
Learned behaviors are one of the key contributors to an animal's ultimate survival. It is widely believed that the brain's microcircuitry undergoes structural changes when a new behavior is learned. In particular, motor learning, during which an animal learns a sequence of muscular movements, often requires precisely-timed coordination between muscles and becomes very natural once ingrained. Experiments show that neurons in the motor cortex exhibit precisely-timed spike activity when performing a learned motor behavior, and constituent stereotypical elements of the behavior can last several hundred milliseconds. The subject of this manuscript concerns how organized synaptic structures that produce stereotypical spike sequences emerge from random, dynamical networks. After a brief introduction in Chapter 1, we begin Chapter 2 by introducing a spike-timing-dependent plasticity (STDP) rule that defines how the activity of the network drives changes in network topology. The rule is then applied to idealized networks of leaky integrate-and-fire (LIF) neurons. These neurons are not subjected to the variability that typically characterizes neurons in vivo. In noiseless networks, synapses develop closed loops of strong connectivity that reproduce stereotypical, precisely-timed spike patterns from an initially random network. We demonstrate that the characteristics of the asymptotic synaptic configuration depend on the statistics of the initial random network. The spike timings of the neurons simulated in Chapter 2 are generated exactly by a computationally economical, nonlinear mapping, which is extended to LIF neurons injected with fluctuating current in Chapter 3. Development of an economical mapping that incorporates noise provides a practical solution to the long simulation times required to produce asymptotic synaptic topologies in networks with STDP in the presence of realistic neuronal variability. The mapping relies on generating numerical solutions to the dynamics of a LIF neuron subjected to Gaussian white noise (GWN). The system reduces to the Ornstein-Uhlenbeck first passage time problem, the solution of which we build into the mapping method of Chapter 2. We demonstrate that simulations using the stochastic mapping have reduced computation time compared to traditional Runge-Kutta methods by more than a factor of 150. In Chapter 4, we use the stochastic mapping to study the dynamics of emerging synaptic topologies in noisy networks. With the addition of membrane noise, networks with dynamical synapses can admit states in which the distribution of the synaptic weights is static under spontaneous activity, but the random connectivity between neurons is dynamical. The widely cited problem of instabilities in networks with STDP is avoided with the implementation of a synaptic decay and an activation threshold on each synapse. When such networks are presented with a stimulus modeled by a focused excitatory current, chain-like networks can emerge with the addition of an axon-remodeling plasticity rule, a topological constraint on the connectivity modeling the finite resources available to each neuron. The emergent topologies are the result of an iterative stochastic process. The dynamics of the growth process suggest a strong interplay between the network topology and the spike sequences they produce during development. Namely, the existence of an embedded spike sequence alters the distribution of synaptic weights through the entire network.
The roles of model parameters that affect the interplay between network structure and activity are elucidated. Finally, we propose two complementary mathematical growth models that capture the essence of the growth dynamics observed in simulations. In Chapter 5, we present an extension of the stochastic mapping that allows the possibility of neuronal cooperation. We demonstrate that synaptic topologies admitting stereotypical sequences can emerge at even higher, biologically realistic levels of membrane potential variability when neurons cooperate to innervate shared targets. The structure that is most robust to the variability is that of a synfire chain. The principles of growth dynamics detailed in Chapter 4 are the same ones that sculpt the emergent synfire topologies. We conclude by discussing avenues for extensions of these results.
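For orientation, the sketch below shows a standard pair-based additive STDP window of the kind referred to throughout, together with a small uniform weight decay; the dissertation's full rule (activation thresholds, axon remodeling, the stochastic mapping) is richer, and the amplitudes, time constants and helper names here are illustrative.

```python
import math

# Pair-based STDP: potentiation when the presynaptic spike precedes the
# postsynaptic one, depression otherwise, with exponentially decaying
# influence of the spike-time difference.

A_PLUS, A_MINUS = 0.01, 0.012       # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0    # time constants (ms)
W_MAX = 1.0

def stdp_dw(dt_ms):
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if dt_ms >= 0.0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

def update_weight(w, pre_spikes, post_spikes, decay=1e-4):
    """Apply all-to-all pairwise STDP plus a small uniform weight decay."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_post - t_pre)
    w -= decay * len(pre_spikes)
    return min(max(w, 0.0), W_MAX)

# pre leads post by ~5 ms in each cycle -> net potentiation
print(round(update_weight(0.5, [10.0, 60.0, 110.0], [15.0, 65.0, 115.0]), 4))
```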
Constructing Neuronal Network Models in Massively Parallel Environments.
Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus
2017-01-01
Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
Costalago Meruelo, Alicia; Simpson, David M; Veres, Sandor M; Newland, Philip L
2016-03-01
Mathematical modelling is used routinely to understand the coding properties and dynamics of responses of neurons and neural networks. Here we analyse the effectiveness of Artificial Neural Networks (ANNs) as a modelling tool for motor neuron responses. We used ANNs to model the synaptic responses of an identified motor neuron, the fast extensor motor neuron, of the desert locust in response to displacement of a sensory organ, the femoral chordotonal organ, which monitors movements of the tibia relative to the femur of the leg. The aim of the study was threefold: first, to determine the potential value of ANNs as tools to model and investigate neural networks; second, to understand the generalisation properties of ANNs across individuals and across different input signals; and third, to understand individual differences in the responses of an identified neuron. A metaheuristic algorithm was developed to design the ANN architectures. The performance of the models generated by the ANNs was compared with that of previous mathematical models of the same neuron. The results suggest that ANNs are significantly better than LNL and Wiener models in predicting specific neural responses to Gaussian White Noise, but not significantly different when tested with sinusoidal inputs. They are also able to predict responses of the same neuron in different individuals irrespective of which animal was used to develop the model, although notable differences between some individuals were evident. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
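As an illustration of the general approach (with an arbitrary architecture and a synthetic neuron, not the metaheuristically designed networks or locust data of the study), one can feed a window of past stimulus samples into a small feedforward network and fit it to the response:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Map a Gaussian-white-noise "displacement" signal onto a neural response
# using lagged stimulus samples as inputs, so the network can capture the
# response dynamics.  The "true" response below is synthetic.

rng = np.random.default_rng(42)
n, lags = 5000, 30
stimulus = rng.standard_normal(n)

# synthetic neuron: a smoothed, rectified function of the recent stimulus
kernel = np.exp(-np.arange(lags) / 8.0)
drive = np.convolve(stimulus, kernel, mode="full")[:n]
response = np.maximum(drive, 0.0) + 0.05 * rng.standard_normal(n)

# build lagged design matrix: row t holds stimulus[t-lags+1 .. t]
X = np.stack([stimulus[t - lags + 1:t + 1] for t in range(lags - 1, n)])
y = response[lags - 1:]
split = int(0.8 * len(y))

model = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=500, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", round(model.score(X[split:], y[split:]), 3))
```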
Plasticity in the Developing Brain: Implications for Rehabilitation
ERIC Educational Resources Information Center
Johnston, Michael V.
2009-01-01
Neuronal plasticity allows the central nervous system to learn skills and remember information, to reorganize neuronal networks in response to environmental stimulation, and to recover from brain and spinal cord injuries. Neuronal plasticity is enhanced in the developing brain and it is usually adaptive and beneficial but can also be maladaptive…
Blanco, Wilfredo; Bertram, Richard; Tabak, Joël
2017-01-01
Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these, and other, changes has on the network behavior is hard to know from experimental studies since they all happen in parallel. One advantage of a computational approach is that one has the ability to study developmental changes in isolation. Here, we examine the effects of the GABAergic synapse polarity change on the spontaneous activity of both a mean field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the "intermediate neurons." We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral change results from the priming of these neurons. The study illustrates how even arguably the simplest of the developmental changes that occur in neural systems can produce non-intuitive behaviors. It also makes predictions about neural network behavioral changes that occur during development that may be observable even in actual neural systems, where these changes are convoluted with changes in synaptic connectivity and intrinsic neural plasticity.
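The underlying switch can be illustrated with a toy single-compartment simulation in which only the GABAergic reversal potential changes; this is not the mean-field or network model of the study, and the parameter values are invented:

```python
# Single passive compartment receiving one GABAergic conductance pulse.
# Whether the synapse depolarizes or hyperpolarizes the cell depends only on
# the chloride reversal potential relative to rest (the developmental switch).

def run(E_gaba, steps=3000, dt=0.1):
    C, g_leak, E_leak = 1.0, 0.1, -65.0       # illustrative units
    tau_syn, g_peak = 10.0, 0.05
    V, g_syn = E_leak, 0.0
    v_trace = []
    for i in range(steps):
        if i == 500:                           # single GABAergic input at t = 50 ms
            g_syn += g_peak
        g_syn -= dt * g_syn / tau_syn
        I = -g_leak * (V - E_leak) - g_syn * (V - E_gaba)
        V += dt * I / C
        v_trace.append(V)
    return max(v_trace) - E_leak, min(v_trace) - E_leak

print("immature (E_GABA = -40 mV): peak deviation %+.2f mV" % run(-40.0)[0])
print("mature   (E_GABA = -75 mV): trough deviation %+.2f mV" % run(-75.0)[1])
```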
Cymerblit-Sabba, Adi; Schiller, Yitzhak
2012-03-01
The prevailing view of epileptic seizures is that they are caused by increased hypersynchronous activity in the cortical network. However, this view is based mostly on electroencephalography (EEG) recordings that do not directly monitor neuronal synchronization of action potential firing. In this study, we used multielectrode single-unit recordings from the hippocampus to investigate firing of individual CA1 neurons and directly monitor synchronization of action potential firing between neurons during the different ictal phases of chemoconvulsant-induced epileptic seizures in vivo. During the early phase of seizures manifesting as low-amplitude rhythmic β-electrocorticography (ECoG) activity, the firing frequency of most neurons markedly increased. To our surprise, the average overall neuronal synchronization as measured by the cross-correlation function was reduced compared with control conditions with ~60% of neuronal pairs showing no significant correlated firing. However, correlated firing was not uniform and a minority of neuronal pairs showed a high degree of correlated firing. Moreover, during the early phase of seizures, correlated firing between 9.8 ± 5.1% of all stably recorded pairs increased compared with control conditions. As seizures progressed and high-frequency ECoG polyspikes developed, the firing frequency of neurons further increased and enhanced correlated firing was observed between virtually all neuronal pairs. These findings indicated that epileptic seizures represented a hyperactive state with widespread increase in action potential firing. Hypersynchrony also characterized seizures. However, it initially developed in a small subset of neurons and gradually spread to involve the entire cortical network only in the later more intense ictal phases.
Control strategies of 3-cell Central Pattern Generator via global stimuli
NASA Astrophysics Data System (ADS)
Lozano, Álvaro; Rodríguez, Marcos; Barrio, Roberto
2016-03-01
The study of the synchronization patterns of small neuron networks that control several biological processes has become an interesting and growing discipline. Some of these synchronization patterns of individual neurons are related to undesirable neurological conditions, and they are believed to play a crucial role in the emergence of pathological rhythmic brain activity in different diseases, such as Parkinson's disease. We show how, with a suitable combination of short and weak global inhibitory and excitatory stimuli applied over the whole network, we can switch between different stable bursting patterns in small neuron networks (in our case a 3-neuron network). We develop a systematic study showing and explaining the effects of applying the pulses at different moments. Moreover, we compare the technique on a completely symmetric network and on a slightly perturbed one (a much more realistic situation). The present approach of using global stimuli may make it possible to avoid undesirable synchronization patterns with nonaggressive stimuli.
NASA Astrophysics Data System (ADS)
Radivojevic, Milos; Jäckel, David; Altermatt, Michael; Müller, Jan; Viswam, Vijay; Hierlemann, Andreas; Bakkum, Douglas J.
2016-08-01
A detailed, high-spatiotemporal-resolution characterization of neuronal responses to local electrical fields and the capability of precise extracellular microstimulation of selected neurons are pivotal for studying and manipulating neuronal activity and circuits in networks and for developing neural prosthetics. Here, we studied cultured neocortical neurons by using high-density microelectrode arrays and optical imaging, complemented by the patch-clamp technique, and with the aim to correlate morphological and electrical features of neuronal compartments with their responsiveness to extracellular stimulation. We developed strategies to electrically identify any neuron in the network, while subcellular spatial resolution recording of extracellular action potential (AP) traces enabled their assignment to the axon initial segment (AIS), axonal arbor and proximal somatodendritic compartments. Stimulation at the AIS required low voltages and provided immediate, selective and reliable neuronal activation, whereas stimulation at the soma required high voltages and produced delayed and unreliable responses. Subthreshold stimulation at the soma depolarized the somatic membrane potential without eliciting APs.
Temporal neural networks and transient analysis of complex engineering systems
NASA Astrophysics Data System (ADS)
Uluyol, Onder
A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
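The memory structure at the heart of such a model is the discrete-time gamma filter, a cascade of identical leaky taps whose single parameter trades memory depth against temporal resolution. The sketch below shows that recursion in isolation; it is not the full LOGF network or its backpropagation-through-time training, and the order and mu values are arbitrary.

```python
import numpy as np

# Discrete-time gamma memory: x_k[n] = (1 - mu) * x_k[n-1] + mu * x_{k-1}[n-1],
# with x_0 the raw input.  Higher taps hold progressively older, smoother
# versions of the signal, giving a feedforward network access to history.

def gamma_memory(signal, order=4, mu=0.3):
    """Return an array of shape (len(signal), order + 1) of gamma tap outputs."""
    taps = np.zeros(order + 1)
    out = np.zeros((len(signal), order + 1))
    for n, x in enumerate(signal):
        prev = taps.copy()
        taps[0] = x
        for k in range(1, order + 1):
            taps[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
        out[n] = taps
    return out

# impulse response of each tap shows progressively deeper, smeared memory
impulse = np.zeros(20)
impulse[0] = 1.0
print(np.round(gamma_memory(impulse)[:8], 3))
```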
Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena
2015-01-01
We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, the node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, a property that might affect the network's capacity to exhibit different dynamical regimes.
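A simplified picture of the second step, using a geometric connection rule in place of the paper's analytical expressions, is sketched below: neurons closer than an assumed effective radius are connected, and the resulting graph is summarized with clustering, path length and a simple small-world coefficient via networkx. The node count, radius and normalization are illustrative choices.

```python
import networkx as nx

# Connect neurons within an "effective radius" R in the unit square and
# compare the resulting graph measures to a size- and density-matched
# random graph.

N, R = 300, 0.12                              # neurons, effective radius
G = nx.random_geometric_graph(N, R, seed=3)   # connect pairs closer than R

giant = G.subgraph(max(nx.connected_components(G), key=len))
C = nx.average_clustering(giant)
L = nx.average_shortest_path_length(giant)

# reference random graph with the same number of nodes and edges
ref = nx.gnm_random_graph(giant.number_of_nodes(), giant.number_of_edges(), seed=3)
ref_giant = ref.subgraph(max(nx.connected_components(ref), key=len))
C_rand = nx.average_clustering(ref_giant)
L_rand = nx.average_shortest_path_length(ref_giant)

sigma = (C / C_rand) / (L / L_rand)           # simple small-world coefficient
print(f"clustering C={C:.3f}, path length L={L:.2f}, small-world sigma={sigma:.2f}")
```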
A novel enteric neuron–glia coculture system reveals the role of glia in neuronal development
Le Berre‐Scoul, Catherine; Chevalier, Julien; Oleynikova, Elena; Cossais, François; Talon, Sophie; Neunlist, Michel
2016-01-01
Key points: Unlike astrocytes in the brain, the potential role of enteric glial cells (EGCs) in the formation of the enteric neuronal circuit is currently unknown. To examine the role of EGCs in the formation of the neuronal network, we developed a novel neuron-enriched culture model from embryonic rat intestine grown in indirect coculture with EGCs. We found that EGCs shape axonal complexity and synapse density in enteric neurons, through purinergic- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor regulating neuronal network maturation. Abstract: In the nervous system, the formation of neuronal circuitry results from a complex and coordinated action of intrinsic and extrinsic factors. In the CNS, extrinsic mediators derived from astrocytes have been shown to play a key role in neuronal maturation, including dendritic shaping, axon guidance and synaptogenesis. In the enteric nervous system (ENS), the potential role of enteric glial cells (EGCs) in the maturation of the developing enteric neuronal circuit is currently unknown. A major obstacle in addressing this question is the difficulty in obtaining a valuable experimental model in which enteric neurons could be isolated and maintained without EGCs. We adapted a cell culture method previously developed for CNS neurons to establish a neuron-enriched primary culture from embryonic rat intestine which was cultured in indirect coculture with EGCs. We demonstrated that enteric neurons grown in such conditions showed several structural, phenotypic and functional hallmarks of proper development and maturation. However, when neurons were grown without EGCs, the complexity of the axonal arbour and the density of synapses were markedly reduced, suggesting that glial-derived factors contribute strongly to the formation of the neuronal circuitry. We found that these effects of EGCs were mediated in part through purinergic P2Y1 receptor- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor required for neuronal network maturation. PMID:27436013
One-to-one neuron-electrode interfacing.
Greenbaum, Alon; Anava, Sarit; Ayali, Amir; Shein, Mark; David-Pur, Moshe; Ben-Jacob, Eshel; Hanein, Yael
2009-09-15
The question of neuronal network development and organization is a principal one, which is closely related to aspects of neuronal and network form-function interactions. In-vitro two-dimensional neuronal cultures have proved to be an attractive and successful model for the study of these questions. Research is constrained, however, by the search for techniques aimed at culturing stable networks whose electrical activity can be reliably and consistently monitored. A simple approach to form small interconnected neuronal circuits while achieving one-to-one neuron-electrode interfacing is presented. Locust neurons were cultured on a novel bio-chip consisting of carbon-nanotube multi-electrode arrays. The cells self-organized to position themselves in close proximity to the bio-chip electrodes. The organization of the cells on the electrodes was analyzed using time-lapse microscopy, fluorescence imaging and scanning electron microscopy. Electrical recordings from well-identified cells are presented and discussed. The unique properties of the bio-chip and the specific neuron-nanotube interactions, together with the use of relatively large insect ganglion cells, allowed long-term stabilization (as long as 10 days) of a predefined neural network topology as well as high-fidelity electrical recording of individual neuron firing. This novel preparation opens ample opportunity for future investigation into key neurobiological questions and principles.
Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto
2014-01-01
Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have been so far mainly studied at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and its spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model. PMID:24634645
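The essential difference between the two synapse models can be written in one line of the membrane equation, as in the hedged sketch below (illustrative parameters, not those of the study): a current-based synapse injects the same current at any voltage, whereas a conductance-based synapse scales with the instantaneous driving force.

```python
# One Euler step of a passive LIF membrane receiving a single synaptic term,
# written for both synapse conventions compared in the study.

def lif_step(V, syn, dt=0.1, tau_m=20.0, E_leak=-70.0,
             conductance_based=True, E_syn=0.0):
    if conductance_based:
        I_syn = syn * (E_syn - V)      # COBN: driving force shrinks as V nears E_syn
    else:
        I_syn = syn                    # CUBN: same current at any membrane potential
    dV = (-(V - E_leak) + I_syn) / tau_m
    return V + dt * dV

# the CUBN input is chosen so the two models match at rest (-70 mV);
# at a depolarized potential the conductance-based synapse is weaker
for V0 in (-70.0, -55.0):
    dv_cob = lif_step(V0, syn=0.5) - V0
    dv_cub = lif_step(V0, syn=35.0, conductance_based=False) - V0
    print(f"V={V0:5.1f} mV   COBN dV={dv_cob:+.3f} mV   CUBN dV={dv_cub:+.3f} mV")
```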
Testa-Silva, Guilherme; Loebel, Alex; Giugliano, Michele; de Kock, Christiaan P J; Mansvelder, Huibert D; Meredith, Rhiannon M
2012-06-01
Neuronal theories of neurodevelopmental disorders (NDDs) of autism and mental retardation propose that abnormal connectivity underlies deficits in attentional processing. We tested this theory by studying unitary synaptic connections between layer 5 pyramidal neurons within medial prefrontal cortex (mPFC) networks in the Fmr1-KO mouse model for mental retardation and autism. In line with predictions from neurocognitive theory, we found that neighboring pyramidal neurons were hyperconnected during a critical period in early mPFC development. Surprisingly, excitatory synaptic connections between Fmr1-KO pyramidal neurons were significantly slower and failed to recover from short-term depression as quickly as wild type (WT) synapses. By 4-5 weeks of mPFC development, connectivity rates were identical for both KO and WT pyramidal neurons and synapse dynamics changed from depressing to facilitating responses with similar properties in both groups. We propose that the early alteration in connectivity and synaptic recovery are tightly linked: using a network model, we show that slower synapses are essential to counterbalance hyperconnectivity in order to maintain a dynamic range of excitatory activity. However, the slow synaptic time constants induce decreased responsiveness to low-frequency stimulation, which may explain deficits in integration and early information processing in attentional neuronal networks in NDDs.
Network feedback regulates motor output across a range of modulatory neuron activity
Spencer, Robert M.
2016-01-01
Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5–35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. PMID:27030739
Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam
2016-01-01
Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computations of LFPs allow the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point source, line source, and RC based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a, N2b waves and in vivo T-C waves in cerebellum granular layer. LFPsim also includes a simulation of multi-electrode array of LFPs in network populations to aid computational inference between biophysical activity in neural networks and corresponding multi-unit activity resulting in extracellular and evoked LFP signals.
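The point-source approximation that such tools rely on sums each compartment's transmembrane current weighted by 1/(4*pi*sigma*distance), with sigma the extracellular conductivity. The stand-alone numpy sketch below illustrates the formula; it is not LFPsim's NEURON implementation, and the conductivity, geometry and currents are made up.

```python
import numpy as np

# Point-source approximation of the extracellular potential at one electrode.

SIGMA = 0.3                                   # extracellular conductivity (S/m)

def point_source_lfp(electrode_pos, comp_pos, comp_currents):
    """comp_pos: (N, 3) compartment midpoints [m]; comp_currents: (T, N) [A]."""
    dists = np.linalg.norm(comp_pos - electrode_pos, axis=1)      # (N,)
    dists = np.maximum(dists, 1e-6)           # avoid divide-by-zero at a source
    return comp_currents @ (1.0 / (4.0 * np.pi * SIGMA * dists))  # (T,) volts

# toy example: 3 compartments, 5 time points of transmembrane current
comp_pos = np.array([[0.0, 0.0, 0.0], [0.0, 50e-6, 0.0], [0.0, 100e-6, 0.0]])
currents = 1e-9 * np.array([[1.0, -0.5, -0.5]] * 5)               # net current ~ 0
electrode = np.array([30e-6, 0.0, 0.0])
print(point_source_lfp(electrode, comp_pos, currents) * 1e6, "uV")
```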
Serotonin neuron development: shaping molecular and structural identities.
Deneris, Evan; Gaspar, Patricia
2018-01-01
The continuing fascination with serotonin (5-hydroxytryptamine, 5-HT) as a nervous system chemical messenger began with its discovery in the brains of mammals in 1953. Among the many reasons for this decades-long interest is that the small numbers of neurons that make 5-HT influence the excitability of neural circuits in nearly every region of the brain and spinal cord. A further reason is that 5-HT dysfunction has been linked to a range of psychiatric and neurological disorders many of which have a neurodevelopmental component. This has led to intense interest in understanding 5-HT neuron development with the aim of determining whether early alterations in their generation lead to brain disease susceptibility. Here, we present an overview of the neuroanatomical organization of vertebrate 5-HT neurons, their neurogenesis, and prodigious axonal architectures, which enables the expansive reach of 5-HT neuromodulation in the central nervous system. We review recent findings that have revealed the molecular basis for the tremendous diversity of 5-HT neuron subtypes, the impact of environmental factors on 5-HT neuron development, and how 5-HT axons are topographically organized through disparate signaling pathways. We summarize studies of the gene regulatory networks that control the differentiation, maturation, and maintenance of 5-HT neurons. These studies show that the regulatory factors controlling acquisition of 5-HT-type transmitter identity continue to play critical roles in the functional maturation and the maintenance of 5-HT neurons. New insights are presented into how continuously expressed 5-HT regulatory factors control 5-HT neurons at different stages of life and how the regulatory networks themselves are maintained. WIREs Dev Biol 2018, 7:e301. doi: 10.1002/wdev.301. © 2017 Wiley Periodicals, Inc.
Biffi, Emilia; Menegon, Andrea; Piraino, Francesco; Pedrocchi, Alessandra; Fiore, Gianfranco B; Rasponi, Marco
2012-01-01
In vitro recording of neuronal electrical activity is a widely used technique to understand brain functions and to study the effect of drugs on the central nervous system. The integration of microfluidic devices with microelectrode arrays (MEAs) enables the recording of network activity in a controlled microenvironment. In this work, an integrated microfluidic system for neuronal cultures was developed, reversibly coupling a PDMS microfluidic device with a commercial flat MEA through magnetic forces. Neurons from mouse embryos were cultured in a 100 µm channel and their activity was followed for up to 18 days in vitro. The maturation of the networks and their morphological and functional characteristics were comparable with those of networks cultured in macro-environments and described in the literature. In this work, we successfully demonstrated long-term culturing of primary neuronal cells in a reversibly bonded microfluidic device (based on magnetism), which will be fundamental for neuropharmacological studies. Copyright © 2011 Wiley Periodicals, Inc.
Visible rodent brain-wide networks at single-neuron resolution
Yuan, Jing; Gong, Hui; Li, Anan; Li, Xiangning; Chen, Shangbin; Zeng, Shaoqun; Luo, Qingming
2015-01-01
Although great progress is being made in neuroscience, fundamental questions such as cell-type classification, neural circuit tracing and neurovascular coupling remain unsolved. Because of the structural features of neurons and neural circuits, answering these questions requires going beyond current neuroanatomical technology, so that the fine morphology of neurons and vessels can be acquired and long-distance circuits can be traced at axonal resolution across the whole mammalian brain. Combined with fast-developing labeling techniques, emerging whole-brain optical imaging technology has great potential for studying the structure and function of specific neurons and neural circuits. In this review, we summarize brain-wide optical tomography techniques, review the progress on visualizing brain-wide neuronal and vascular networks enabled by these novel techniques, and discuss prospective technical developments. PMID:26074784
The formation mechanism of defects, spiral wave in the network of neurons.
Wu, Xinyi; Ma, Jun
2013-01-01
A regular network of neurons is constructed from Morris-Lecar (ML) neurons with the ion channels explicitly considered, and the potential mechanism of spiral wave formation is investigated in detail. Several spiral waves are initiated by breaking a target wave with artificial defects and/or partial blocking (poisoning) of ion channels. Possible conditions for spiral wave formation and the effect of partial channel blocking are then discussed. Our results are summarized as follows. 1) The emergence of a target wave depends on diversity in the transmembrane currents, which arises from the external forcing current; this diversity is associated with spatial heterogeneity in the medium. 2) A distinct spiral wave can be induced to occupy the network when the target wave is broken by partially blocking the ion channels of a fraction of neurons (a locally poisoned area), and these spiral waves resemble those induced by artificial defects. This confirms that partial channel blocking in some neurons can play the same role in breaking a target wave as artificial defects do. 3) Channel noise and additive Gaussian white noise are also considered, and spiral waves are likewise induced in the network in the presence of noise. Based on these results, we conclude that appropriate poisoning of ion channels of neurons in the network acts as a 'defect' in the evolution of the spatiotemporal pattern and accounts for the emergence of a spiral wave in the network of neurons. These results could help to explain the potential cause of the formation and development of spiral waves in the cortex of a neuronal system.
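As a rough illustration of the kind of model described in the abstract above, the sketch below integrates a 2D lattice of Morris-Lecar neurons with nearest-neighbour diffusive coupling and a partially "poisoned" patch in which the potassium conductance is reduced. All parameter values, the lattice size and the location of the blocked region are illustrative assumptions, not the settings used by Wu and Ma.

```python
# Illustrative sketch (not the authors' code): a 2D lattice of Morris-Lecar
# neurons with diffusive coupling; a patch of the lattice has its potassium
# conductance partially blocked ("poisoned"). Parameters are generic choices.
import numpy as np

N = 100                       # lattice size (N x N neurons)
dt, steps = 0.05, 20000       # Euler step (ms) and number of steps
D = 1.0                       # coupling strength between nearest neighbours

# Generic Morris-Lecar parameters (illustrative)
C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0
VCa, VK, VL = 120.0, -84.0, -60.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_ext = 90.0 + 5.0 * np.random.randn(N, N)   # heterogeneous forcing current

# Fraction of K channels left functional: 1 outside, reduced inside a patch
block = np.ones((N, N))
block[40:60, 40:60] = 0.3     # partially poisoned region

V = np.full((N, N), -60.0) + np.random.randn(N, N)
w = np.zeros((N, N))

def laplacian(u):
    """Nearest-neighbour Laplacian with no-flux boundaries."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] +
            up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)

frames = []
for k in range(steps):
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    I_ion = (gCa * m_inf * (V - VCa) +
             block * gK * w * (V - VK) +   # reduced K conductance in the patch
             gL * (V - VL))
    V += dt * (I_ext - I_ion + D * laplacian(V)) / C
    w += dt * phi * (w_inf - w) / tau_w
    if k % 2000 == 0:
        frames.append(V.copy())            # membrane snapshots for inspection
```

Imaging the stored snapshots of V over time is one simple way to check whether the blocked patch breaks an outward target wave into rotating spiral segments.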
Network feedback regulates motor output across a range of modulatory neuron activity.
Spencer, Robert M; Blitz, Dawn M
2016-06-01
Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Benotmane, Rafi
During orbital or interplanetary space flights, astronauts are exposed to cosmic radiation and microgravity. This study aimed at assessing the effect of these combined conditions on neuronal network density, cell morphology and survival, using well-connected mouse cortical neuron cultures. To this end, neurons were exposed to acute low and high doses of low-LET (X-ray) radiation or to chronic, low-dose-rate, high-LET neutron irradiation (Californium-252), under the simulated microgravity generated by the Random Positioning Machine (RPM, Dutch Space). High-content image analysis of cortical neurons positive for the neuronal marker βIII-tubulin revealed reduced neuronal network integrity and connectivity, and altered cell morphology, after exposure to acute/chronic radiation or to simulated microgravity. Additionally, in both conditions, a defect in DNA-repair efficiency was revealed by an increased number of γH2AX-positive foci, as well as an increased number of Annexin V-positive apoptotic neurons. Of interest, when combining both simulated space conditions, we noted a synergistic effect on neuronal network density, neuronal morphology, cell survival and DNA repair. Furthermore, these observations are in agreement with preliminary gene expression data, revealing modulations in cytoskeletal and apoptosis-related genes after exposure to simulated microgravity. In conclusion, the observed in vitro changes in neuronal network integrity and cell survival induced by simulated space conditions provide mechanistic understanding for evaluating health risks and developing countermeasures to prevent neurological disorders in astronauts during long-term space travel. Acknowledgements: This work is supported partly by the EU-FP7 projects CEREBRAD (n° 295552)
Prakash, Nilima; Brodski, Claude; Naserke, Thorsten; Puelles, Eduardo; Gogoi, Robindra; Hall, Anita; Panhuysen, Markus; Echevarria, Diego; Sussel, Lori; Weisenhorn, Daniela M Vogt; Martinez, Salvador; Arenas, Ernest; Simeone, Antonio; Wurst, Wolfgang
2006-01-01
Midbrain neurons synthesizing the neurotransmitter dopamine play a central role in the modulation of different brain functions and are associated with major neurological and psychiatric disorders. Despite the importance of these cells, the molecular mechanisms controlling their development are still poorly understood. The secreted glycoprotein Wnt1 is expressed in close vicinity to developing midbrain dopaminergic neurons. Here, we show that Wnt1 regulates the genetic network, including Otx2 and Nkx2-2, that is required for the establishment of the midbrain dopaminergic progenitor domain during embryonic development. In addition, Wnt1 is required for the terminal differentiation of midbrain dopaminergic neurons at later stages of embryogenesis. These results identify Wnt1 as a key molecule in the development of midbrain dopaminergic neurons in vivo. They also suggest the Wnt1-controlled signaling pathway as a promising target for new therapeutic strategies in the treatment of Parkinson's disease.
NEVESIM: event-driven neural simulation framework with a Python interface.
Pecevski, Dejan; Kappel, David; Jonke, Zeno
2014-01-01
NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.
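To make the event-driven strategy concrete, here is a minimal, generic priority-queue simulation loop in Python. It is emphatically not the NEVESIM API (the package's actual commands are documented with it); the neuron model, weights and delays are placeholder assumptions chosen only to show how spike-delivery events, rather than a fixed time grid, drive the computation.

```python
# Minimal conceptual sketch of event-driven spiking simulation using a
# priority queue of spike-delivery events (not the NEVESIM interface).
import heapq
import math
import random

class Neuron:
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau, self.threshold = tau, threshold
        self.v, self.last_t = 0.0, 0.0

    def receive(self, t, weight):
        """Lazily decay the membrane to time t, add the input, report a spike."""
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0            # reset after spiking
            return True
        return False

n = 50
neurons = [Neuron() for _ in range(n)]
# targets[i]: list of (postsynaptic index, weight, delay in ms)
targets = [[(random.randrange(n), 0.4, 1.0 + random.random())
            for _ in range(5)] for _ in range(n)]

# Initial external kicks: (delivery time, target neuron, weight)
events = [(random.random(), random.randrange(n), 1.2) for _ in range(20)]
heapq.heapify(events)

t_end, spikes = 200.0, []
while events and events[0][0] < t_end:
    t, i, w = heapq.heappop(events)
    if neurons[i].receive(t, w):
        spikes.append((t, i))
        for j, wj, d in targets[i]:
            heapq.heappush(events, (t + d, j, wj))

print(f"{len(spikes)} spikes in {t_end} ms of simulated time")
```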
Time evolution of coherent structures in networks of Hindmarch Rose neurons
NASA Astrophysics Data System (ADS)
Mainieri, M. S.; Erichsen, R.; Brunnet, L. G.
2005-08-01
In the regime of partial synchronization, networks of diffusively coupled Hindmarsh-Rose neurons show coherent structures developing in a region of phase space that is wider than for the corresponding single neuron. Such structures persist, without important changes, over several bursting periods. In this work, we study the time evolution of these structures and their dynamical stability under damage. This system may model the behavior of ensembles of neurons coupled through bidirectional gap junctions or, in a broader sense, it could also account for the molecular cascades present in the formation of flash and short-term memory.
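A minimal sketch of the kind of system described above: a ring of Hindmarsh-Rose bursters with diffusive (gap-junction-like) coupling, integrated with a simple Euler scheme. The parameter values put the single neuron in a standard bursting regime and are assumptions for illustration, not the original study's settings.

```python
# Sketch of a ring of diffusively coupled Hindmarsh-Rose bursting neurons.
# Parameters (I = 3.1, r = 0.006, ...) are standard bursting-regime choices.
import numpy as np

n, D = 64, 0.1                     # number of neurons, coupling strength
I, r, s, x0 = 3.1, 0.006, 4.0, -1.6
dt, steps = 0.02, 200000

x = -1.6 + 0.1 * np.random.randn(n)
y = np.zeros(n)
z = np.zeros(n)

trace = np.empty((steps // 100, n))
for k in range(steps):
    coup = D * (np.roll(x, 1) + np.roll(x, -1) - 2.0 * x)   # diffusive ring coupling
    dx = y + 3.0 * x**2 - x**3 - z + I + coup
    dy = 1.0 - 5.0 * x**2 - y
    dz = r * (s * (x - x0) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    if k % 100 == 0:
        trace[k // 100] = x        # membrane-variable snapshots over time
# Plotting 'trace' (time vs neuron index) shows whether coherent structures
# persist across several bursting periods.
```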
NASA Astrophysics Data System (ADS)
Li, Jie; Yu, Wan-Qing; Xu, Ding; Liu, Feng; Wang, Wei
2009-12-01
Using numerical simulations, we explore the mechanism for propagation of rate signals through a 10-layer feedforward network composed of Hodgkin-Huxley (HH) neurons with sparse connectivity. When white noise is afferent to the input layer, neuronal firing becomes progressively more synchronous in successive layers and synchrony is well developed in deeper layers owing to the feedforward connections between neighboring layers. The synchrony ensures the successful propagation of rate signals through the network when the synaptic conductance is weak. As the synaptic time constant τsyn varies, coherence resonance is observed in the network activity due to the intrinsic property of HH neurons. This makes the output firing rate single-peaked as a function of τsyn, suggesting that the signal propagation can be modulated by the synaptic time constant. These results are consistent with experimental results and advance our understanding of how information is processed in feedforward networks.
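The layered architecture can be sketched compactly. For brevity the example below replaces the Hodgkin-Huxley neurons of the study with leaky integrate-and-fire units, so it only illustrates the sparse feedforward wiring, the noise drive to the input layer and the layer-by-layer rate readout; layer sizes, connection probability and synaptic parameters are illustrative assumptions.

```python
# Schematic sketch of rate propagation through a sparse 10-layer feedforward
# network (LIF units stand in for the Hodgkin-Huxley neurons of the study).
import numpy as np

rng = np.random.default_rng(0)
layers, n_per_layer, p_conn = 10, 100, 0.1
dt, t_steps = 0.1, 5000                     # ms
tau_m, tau_syn, v_th, v_reset = 20.0, 5.0, 1.0, 0.0
w_syn = 0.02                                 # synaptic weight (arbitrary units)

# W[l][post, pre]: sparse weights from layer l to layer l+1
W = [w_syn * (rng.random((n_per_layer, n_per_layer)) < p_conn)
     for _ in range(layers - 1)]

v = np.zeros((layers, n_per_layer))
syn = np.zeros((layers, n_per_layer))        # synaptic drive per neuron
rates = np.zeros(layers)

for _ in range(t_steps):
    drive = syn.copy()
    drive[0] += 0.15 + 0.3 * rng.standard_normal(n_per_layer)  # noise to layer 1
    v += dt * (-v / tau_m + drive)
    spiking = v >= v_th
    v[spiking] = v_reset
    rates += spiking.sum(axis=1)
    syn *= np.exp(-dt / tau_syn)             # exponential synapses
    for l in range(layers - 1):
        syn[l + 1] += W[l] @ spiking[l]      # propagate spikes to the next layer

rates /= (t_steps * dt / 1000.0) * n_per_layer   # spikes/s per neuron
print(np.round(rates, 1))                    # firing rate, layer by layer
```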
Prüss, Harald; Grosse, Gisela; Brunk, Irene; Veh, Rüdiger W; Ahnert-Hilger, Gudrun
2010-03-01
The development of the hippocampal network requires neuronal activity, which is shaped by the differential expression and sorting of a variety of potassium channels. Parallel to their maturation, hippocampal neurons undergo a distinct development of their ion channel profile. The age-dependent dimension of ion channel occurrence is of utmost importance as it is interdependently linked to network formation. However, data regarding the exact temporal expression of potassium channels during postnatal hippocampal development are scarce. We therefore studied the expression of several voltage-gated potassium channel proteins during hippocampal development in vivo and in primary cultures, focusing on channels that were sorted to the axonal compartment. The Kv1.1, Kv1.2, Kv1.4, and Kv3.4 proteins showed a considerable temporal variation of axonal localization among neuronal subpopulations. It is possible, therefore, that hippocampal neurons possess cell type-specific mechanisms for channel compartmentalization. Thus, age-dependent axonal sorting of the potassium channel proteins offers a new approach to functionally distinguish classes of hippocampal neurons and may extend our understanding of hippocampal circuitry and memory processing.
Emergent spatial synaptic structure from diffusive plasticity.
Sweeney, Yann; Clopath, Claudia
2017-04-01
Some neurotransmitters can diffuse freely across cell membranes, influencing neighbouring neurons regardless of their synaptic coupling. This provides a means of neural communication, alternative to synaptic transmission, which can influence the way in which neural networks process information. Here, we ask whether diffusive neurotransmission can also influence the structure of synaptic connectivity in a network undergoing plasticity. We propose a form of Hebbian synaptic plasticity which is mediated by a diffusive neurotransmitter. Whenever a synapse is modified at an individual neuron through our proposed mechanism, similar but smaller modifications occur in synapses connecting to neighbouring neurons. The effects of this diffusive plasticity are explored in networks of rate-based neurons. This leads to the emergence of spatial structure in the synaptic connectivity of the network. We show that this spatial structure can coexist with other forms of structure in the synaptic connectivity, such as with groups of strongly interconnected neurons that form in response to correlated external drive. Finally, we explore diffusive plasticity in a simple feedforward network model of receptive field development. We show that, as widely observed across sensory cortex, the preferred stimulus identities of neurons in our network become spatially correlated due to diffusion. Our proposed mechanism of diffusive plasticity provides an efficient mechanism for generating these spatial correlations in stimulus preference which can flexibly interact with other forms of synaptic organisation. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
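A toy sketch of the diffusive-plasticity idea for rate-based neurons: each local Hebbian weight update is spread to neighbouring neurons through a Gaussian kernel. Network size, learning rate, kernel width and the input statistics are assumptions for illustration and do not reproduce the paper's simulations.

```python
# Sketch: Hebbian updates diffused across spatial neighbours via a kernel K.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, eta = 20, 50, 1e-3
positions = np.linspace(0.0, 1.0, n_out)      # 1D arrangement of output neurons
sigma = 0.05                                  # diffusion length scale

# K[i, j]: how much of neuron j's weight update leaks to neuron i
dist = np.abs(positions[:, None] - positions[None, :])
K = np.exp(-dist**2 / (2 * sigma**2))
K /= K.sum(axis=1, keepdims=True)

W = 0.1 * rng.random((n_out, n_in))           # feedforward weights

for step in range(5000):
    x = rng.random(n_in)                      # input pattern
    y = W @ x                                 # linear rate neurons
    dW = eta * np.outer(y, x)                 # local Hebbian update
    W += K @ dW                               # ... shared with neighbours
    W /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-9)  # keep bounded

# Correlation of neuron 0's weight vector with nearby neurons' weight vectors
print(np.round(np.corrcoef(W)[0, :5], 2))
```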
Hyysalo, Anu; Ristola, Mervi; Mäkinen, Meeri E-L; Häyrynen, Sergei; Nykter, Matti; Narkilahti, Susanna
2017-10-01
Laminins are one of the major protein groups in the extracellular matrix (ECM) and specific laminin isoforms are crucial for neuronal functions in the central nervous system in vivo. In the present study, we compared recombinant human laminin isoforms (LN211, LN332, LN411, LN511, and LN521) and laminin isoform fragment (LN511-E8) in in vitro cultures of human pluripotent stem cell (hPSC)-derived neurons. We showed that laminin substrates containing the α5-chain are important for neuronal attachment, viability and network formation, as detected by phase contrast imaging, viability staining, and immunocytochemistry. Gene expression analysis showed that the molecular mechanisms involved in the preference of hPSC-derived neurons for specific laminin isoforms could be related to ECM remodeling and cell adhesion. Importantly, the microelectrode array analysis revealed the widest distribution of electrophysiologically active neurons on laminin α5 substrates, indicating most efficient development of neuronal network functionality. This study shows that specific laminin α5 substrates provide a controlled in vitro culture environment for hPSC-derived neurons. These substrates can be utilized not only to enhance the production of functional hPSC-derived neurons for in vitro applications like disease modeling, toxicological studies, and drug discovery, but also for the production of clinical grade hPSC-derived cells for regenerative medicine applications. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Mean-field equations for neuronal networks with arbitrary degree distributions.
Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex
2017-04-01
The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdős-Rényi, or ER, networks) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
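For intuition, the following sketch samples a directed network whose in- and out-degrees come from a correlated joint distribution, configuration-model style, which is the kind of connectivity the mean-field theory above addresses. The degree distribution and correlation value are arbitrary choices for illustration.

```python
# Sketch: directed network with degrees drawn from a joint distribution,
# built configuration-model style (may contain multi- and self-edges).
import numpy as np

rng = np.random.default_rng(2)
n, mean_k, rho = 1000, 50, 0.5   # neurons, mean degree, in/out degree correlation

# Correlated Gaussian draws turned into positive integer in/out degrees
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
k_in = np.maximum(1, (mean_k + 10 * z[:, 0]).astype(int))
k_out = np.maximum(1, (mean_k + 10 * z[:, 1]).astype(int))

# Match out-stubs to in-stubs at random
out_stubs = np.repeat(np.arange(n), k_out)
in_stubs = np.repeat(np.arange(n), k_in)
rng.shuffle(out_stubs)
rng.shuffle(in_stubs)
m = min(len(out_stubs), len(in_stubs))
edges = list(zip(out_stubs[:m], in_stubs[:m]))

print("mean in-degree:", len(edges) / n,
      "in/out degree correlation:", np.corrcoef(k_in, k_out)[0, 1].round(2))
```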
From in silico astrocyte cell models to neuron-astrocyte network models: A review.
Oschmann, Franziska; Berry, Hugues; Obermayer, Klaus; Lenk, Kerstin
2018-01-01
The idea that astrocytes may be active partners in synaptic information processing has recently emerged from abundant experimental reports. Because of their spatial proximity to neurons and their bidirectional communication with them, astrocytes are now considered as an important third element of the synapse. Astrocytes integrate and process synaptic information and by doing so generate cytosolic calcium signals that are believed to reflect neuronal transmitter release. Moreover, they regulate neuronal information transmission by releasing gliotransmitters into the synaptic cleft affecting both pre- and postsynaptic receptors. Concurrent with the first experimental reports of the astrocytic impact on neural network dynamics, computational models describing astrocytic functions have been developed. In this review, we give an overview over the published computational models of astrocytic functions, from single-cell dynamics to the tripartite synapse level and network models of astrocytes and neurons. Copyright © 2017 Elsevier Inc. All rights reserved.
Rhythmogenic neuronal networks, emergent leaders, and k-cores.
Schwab, David J; Bruinsma, Robijn F; Feldman, Jack L; Levine, Alex J
2010-11-01
Neuronal network behavior results from a combination of the dynamics of individual neurons and the connectivity of the network that links them together. We study a simplified model, based on the proposal of Feldman and Del Negro (FDN) [Nat. Rev. Neurosci. 7, 232 (2006)], of the preBötzinger Complex, a small neuronal network that participates in the control of the mammalian breathing rhythm through periodic firing bursts. The dynamics of this randomly connected network of identical excitatory neurons differ from those of a uniformly connected one. Specifically, network connectivity determines the identity of emergent leader neurons that trigger the firing bursts. When neuronal desensitization is controlled by the number of input signals to the neurons (as proposed by FDN), the network's collective desensitization--required for successful burst termination--is mediated by k-core clusters of neurons.
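Since the collective desensitization in this model is mediated by k-core clusters, a quick way to inspect such structure in a random connectivity matrix is networkx's core decomposition, as in the short sketch below (graph size, connection probability and the choice k = 4 are arbitrary).

```python
# Sketch: extract the k-core of a random network as a proxy for the strongly
# mutually connected clusters discussed above. Requires networkx.
import networkx as nx

G = nx.gnp_random_graph(300, 0.02, seed=3)   # random connectivity
G.remove_edges_from(nx.selfloop_edges(G))    # core_number requires no self-loops

core_number = nx.core_number(G)              # largest k with the node in the k-core
k = 4
k_core_nodes = [v for v, c in core_number.items() if c >= k]
print(f"{len(k_core_nodes)} neurons belong to the {k}-core")
```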
Molecular codes for neuronal individuality and cell assembly in the brain
Yagi, Takeshi
2012-01-01
The brain contains an enormous, but finite, number of neurons. The ability of this limited number of neurons to produce nearly limitless neural information over a lifetime is typically explained by combinatorial explosion; that is, by the exponential amplification of each neuron's contribution through its incorporation into “cell assemblies” and neural networks. In development, each neuron expresses diverse cellular recognition molecules that permit the formation of the appropriate neural cell assemblies to elicit various brain functions. The mechanism for generating neuronal assemblies and networks must involve molecular codes that give neurons individuality and allow them to recognize one another and join appropriate networks. The extensive molecular diversity of cell-surface proteins on neurons is likely to contribute to their individual identities. The clustered protocadherins (Pcdh) constitute a large subfamily within the diverse cadherin superfamily. The clustered Pcdh genes are encoded in tandem by three gene clusters, and are present in all known vertebrate genomes. The set of clustered Pcdh genes is expressed in a random and combinatorial manner in each neuron. In addition, cis-tetramers composed of heteromultimeric clustered Pcdh isoforms represent selective binding units for cell-cell interactions. Here I present the mathematical probabilities for neuronal individuality based on the random and combinatorial expression of clustered Pcdh isoforms and their formation of cis-tetramers in each neuron. Notably, clustered Pcdh gene products are known to play crucial roles in correct axonal projections, synaptic formation, and neuronal survival. These molecular and biological features suggest the hypothesis that the diverse clustered Pcdh molecules provide the molecular code by which neuronal individuality and cell assembly permit the combinatorial explosion of networks that supports enormous processing capability and plasticity of the brain. PMID:22518100
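A back-of-the-envelope version of the combinatorial argument can be written in a few lines. The isoform counts below (a repertoire of 58 clustered Pcdh isoforms, 15 expressed per neuron) are assumptions for illustration rather than figures taken from the paper, and tetramers are counted as unordered combinations with repetition.

```python
# Sketch of the combinatorics: with n isoform types, the number of distinct
# unordered cis-tetramers (repetition allowed) is C(n + 3, 4).
from math import comb

n_isoforms_repertoire = 58    # assumed size of the clustered Pcdh repertoire
n_expressed = 15              # assumed isoforms expressed by a single neuron

tetramers_per_neuron = comb(n_expressed + 3, 4)
tetramers_possible = comb(n_isoforms_repertoire + 3, 4)
print(tetramers_per_neuron, "tetramer types per neuron out of",
      tetramers_possible, "possible")

# Chance that two neurons independently express the same set of 15 isoforms,
# assuming a uniform random choice (illustrative simplification):
p_same_identity = 1 / comb(n_isoforms_repertoire, n_expressed)
print(f"probability two neurons share the same isoform set ~ {p_same_identity:.1e}")
```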
Dynamics of moment neuronal networks.
Feng, Jianfeng; Deng, Yingchun; Rossoni, Enrico
2006-04-01
A theoretical framework is developed for moment neuronal networks (MNNs). Within this framework, the behavior of the system of spiking neurons is specified in terms of the first- and second-order statistics of their interspike intervals, i.e., the mean, the variance, and the cross correlations of spike activity. Since neurons emit and receive spike trains which can be described by renewal--but generally non-Poisson--processes, we first derive a suitable diffusion-type approximation of such processes. Two approximation schemes are introduced: the usual approximation scheme (UAS) and the Ornstein-Uhlenbeck scheme. It is found that both schemes approximate well the input-output characteristics of spiking models such as the IF and the Hodgkin-Huxley models. The MNN framework is then developed according to the UAS scheme, and its predictions are tested on a few examples.
Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I
2014-06-01
Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.
Bonifazi, Paolo; Difato, Francesco; Massobrio, Paolo; Breschi, Gian L; Pasquale, Valentina; Levi, Timothée; Goldin, Miri; Bornat, Yannick; Tedesco, Mariateresa; Bisio, Marta; Kanner, Sivan; Galron, Ronit; Tessadori, Jacopo; Taverna, Stefano; Chiappalone, Michela
2013-01-01
Brain-machine interfaces (BMI) were born to control "actions from thoughts" in order to recover motor capability of patients with impaired functional connectivity between the central and peripheral nervous system. The final goal of our studies is the development of a new proof-of-concept BMI, a neuromorphic chip for brain repair, to reproduce the functional organization of a damaged part of the central nervous system. To reach this ambitious goal, we implemented a multidisciplinary "bottom-up" approach in which in vitro networks are the paradigm for the development of an in silico model to be incorporated into a neuromorphic device. In this paper we present the overall strategy and focus on the different building blocks of our studies: (i) the experimental characterization and modeling of "finite size networks" which represent the smallest and most general self-organized circuits capable of generating spontaneous collective dynamics; (ii) the induction of lesions in neuronal networks and the whole brain preparation with special attention on the impact on the functional organization of the circuits; (iii) the first production of a neuromorphic chip able to implement a real-time model of neuronal networks. A dynamical characterization of the finite size circuits with single cell resolution is provided. A neural network model based on Izhikevich neurons was able to replicate the experimental observations. Changes in the dynamics of the neuronal circuits induced by optical and ischemic lesions are presented respectively for in vitro neuronal networks and for a whole brain preparation. Finally, the implementation of a neuromorphic chip reproducing the network dynamics in quasi-real time (10 ns precision) is presented.
Su, Li-Ning; Song, Xiao-Qing; Wei, Hui-Ping; Yin, Hai-Feng
Bone mesenchymal stem cells (BMSCs) differentiated into neurons have been widely proposed for use in cell therapy of many neurological disorders. It is therefore important to understand the molecular mechanisms underlying this differentiation. We screened differentially expressed genes between immature neural tissues and untreated BMSCs to identify the genes responsible for neuronal differentiation from BMSCs. GSE68243 gene microarray data of rat BMSCs and GSE18860 gene microarray data of rat neurons were received from the Gene Expression Omnibus database. Transcriptome Analysis Console software showed that 1248 genes were up-regulated and 1273 were down-regulated in neurons compared with BMSCs. Gene Ontology functional enrichment, protein-protein interaction networks, functional modules, and hub genes were analyzed using DAVID, STRING 10, BiNGO tool, and Network Analyzer software, revealing that nine hub genes, Nrcam, Sema3a, Mapk8, Dlg4, Slit1, Creb1, Ntrk2, Cntn2, and Pax6, may play a pivotal role in neuronal differentiation from BMSCs. Seven genes, Dcx, Nrcam, sema3a, Cntn2, Slit1, Ephb1, and Pax6, were shown to be hub nodes within the neuronal development network, while six genes, Fgf2, Tgfβ1, Vegfa, Serpine1, Il6, and Stat1, appeared to play an important role in suppressing neuronal differentiation. However, additional studies are required to confirm these results.
Simultaneous profiling of activity patterns in multiple neuronal subclasses.
Parrish, R Ryley; Grady, John; Codadu, Neela K; Trevelyan, Andrew J; Racca, Claudia
2018-06-01
Neuronal networks typically comprise heterogeneous populations of neurons. A core objective when seeking to understand such networks, therefore, is to identify what roles these different neuronal classes play. Acquiring single-cell electrophysiology data for multiple cell classes can prove to be a large and daunting task. Alternatively, Ca2+ network imaging provides activity profiles of large numbers of neurons simultaneously, but without distinguishing between cell classes. We therefore developed a strategy for combining cellular electrophysiology, Ca2+ network imaging, and immunohistochemistry to provide activity profiles for multiple cell classes at once. This involves cross-referencing easily identifiable landmarks between imaging of the live and fixed tissue, and then using custom MATLAB functions to realign the two imaging data sets, to correct for distortions of the tissue introduced by the fixation or immunohistochemical processing. We illustrate the methodology for analyses of activity profiles during epileptiform events recorded in mouse brain slices. We further demonstrate the activity profile of a population of parvalbumin-positive interneurons prior to, during, and following a seizure-like event. Current approaches to Ca2+ network imaging analyses are severely limited in their ability to subclassify neurons, and often rely on transgenic approaches to identify cell classes. In contrast, our methodology is a generic, affordable, and flexible technique to characterize neuronal behaviour with respect to classification based on morphological and neurochemical identity. We present a new approach for analysing Ca2+ network imaging datasets, and use this to explore the parvalbumin-positive interneuron activity during epileptiform events. Copyright © 2018 Elsevier B.V. All rights reserved.
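The realignment step can be illustrated with a simple landmark-based affine fit. The sketch below uses synthetic landmark coordinates and least squares in Python; it is not the authors' MATLAB code, only a minimal stand-in for the idea of mapping live-imaging coordinates into the fixed-tissue frame.

```python
# Sketch: estimate an affine transform from matched landmarks (live imaging
# vs. fixed tissue) and map live-imaging ROI positions into fixed-tissue space.
import numpy as np

# Matched landmark coordinates (x, y); values here are synthetic
live = np.array([[10.0, 12.0], [80.0, 15.0], [75.0, 90.0], [12.0, 85.0], [45.0, 50.0]])
fixed = np.array([[11.5, 10.0], [82.0, 18.0], [73.0, 95.0], [10.0, 88.0], [44.0, 53.0]])

# Solve fixed ~ [live, 1] @ A for the 3x2 affine matrix A (least squares)
design = np.hstack([live, np.ones((len(live), 1))])
A, *_ = np.linalg.lstsq(design, fixed, rcond=None)

def to_fixed_space(points_live):
    pts = np.hstack([points_live, np.ones((len(points_live), 1))])
    return pts @ A

cells_live = np.array([[30.0, 40.0], [60.0, 70.0]])   # e.g. ROI centroids
print(np.round(to_fixed_space(cells_live), 1))
```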
Claus, Lena; Philippot, Camille; Griemsmann, Stephanie; Timmermann, Aline; Jabs, Ronald; Henneberger, Christian; Kettenmann, Helmut; Steinhäuser, Christian
2018-01-01
The ventral posterior nucleus of the thalamus plays an important role in somatosensory information processing. It contains elongated cellular domains called barreloids, which are the structural basis for the somatotopic organization of vibrissae representation. So far, the organization of glial networks in these barreloid structures and its modulation by neuronal activity has not been studied. We have developed a method to visualize thalamic barreloid fields in acute slices. Combining electrophysiology, immunohistochemistry, and electroporation in transgenic mice with cell type-specific fluorescence labeling, we provide the first structure-function analyses of barreloidal glial gap junction networks. We observed coupled networks, which comprised both astrocytes and oligodendrocytes. The spread of tracers or a fluorescent glucose derivative through these networks was dependent on neuronal activity and limited by the barreloid borders, which were formed by uncoupled or weakly coupled oligodendrocytes. Neuronal somata were distributed homogeneously across barreloid fields with their processes running in parallel to the barreloid borders. Many astrocytes and oligodendrocytes were not part of the panglial networks. Thus, oligodendrocytes are the cellular elements limiting the communicating panglial network to a single barreloid, which might be important to ensure proper metabolic support to active neurons located within a particular vibrissae signaling pathway. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Signal transfer within a cultured asymmetric cortical neuron circuit
NASA Astrophysics Data System (ADS)
Isomura, Takuya; Shimba, Kenta; Takayama, Yuzo; Takeuchi, Akimasa; Kotani, Kiyoshi; Jimbo, Yasuhiko
2015-12-01
Objective. Simplified neuronal circuits are required for investigating information representation in nervous systems and for validating theoretical neural network models. Here, we developed patterned neuronal circuits using microfabricated devices, comprising a micro-well array bonded to a microelectrode-array substrate. Approach. The micro-well array consisted of micrometre-scale wells connected by tunnels, all contained within a silicone slab called a micro-chamber. The design of the micro-chamber confined somata to the wells and allowed axons to grow through the tunnels bidirectionally but with a designed, unidirectional bias. We guided axons into the point of the arrow structure where one of the two tunnel entrances is located, making that the preferred direction. Main results. When rat cortical neurons were cultured in the wells, their axons grew through the tunnels and connected to neurons in adjoining wells. Unidirectional burst transfers and other asymmetric signal-propagation phenomena were observed via the substrate-embedded electrodes. Seventy-nine percent of burst transfers were in the forward direction. We also observed rapid propagation of activity from sites of local electrical stimulation, and significant effects of inhibitory synapse blockade on bursting activity. Significance. These results suggest that this simple, substrate-controlled neuronal circuit can be applied to develop in vitro models of the function of cortical microcircuits or deep neural networks, to better elucidate the laws governing the dynamics of neuronal networks.
Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime
2016-01-01
It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that is dependent on the characteristics of neurons and network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics for performing a simulation of a code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate the characteristics of neurons such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem and the solutions are not theoretically guaranteed, the parameters seem to be consistent with those of neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for understanding communication within a neural network, which may in turn form a basis for natural intelligence.
Neural Network Development Tool (NETS)
NASA Technical Reports Server (NTRS)
Baffes, Paul T.
1990-01-01
Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected in a manner similar to that in the human brain. Such networks model learning behavior. Using NETS involves translating the problem to be solved into input/output pairs, designing the network configuration, and training the network. Written in C.
Computational properties of networks of synchronous groups of spiking neurons.
Dayhoff, Judith E
2007-09-01
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
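The weight correspondence described above amounts to a simple product, as in this tiny worked example with made-up numbers.

```python
# Worked example of the mapping: the effective ANN weight between two
# synchronously firing groups is the product of the interconnection density,
# the presynaptic group size and the postsynaptic potential height.
p_connect = 0.05        # probability that a presynaptic neuron contacts a target
group_size_pre = 200    # neurons in the presynaptic group
psp_height = 0.1        # postsynaptic potential per spike (arbitrary units)

w_effective = p_connect * group_size_pre * psp_height
print(w_effective)      # 1.0: expected summed PSP when the whole group fires
```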
Aebersold, Mathias J.; Thompson-Steckel, Greta; Joutang, Adriane; Schneider, Moritz; Burchert, Conrad; Forró, Csaba; Weydert, Serge; Han, Hana; Vörös, János
2018-01-01
Bottom-up neuroscience aims to engineer well-defined networks of neurons to investigate the functions of the brain. By reducing the complexity of the brain to achievable target questions, such in vitro bioassays better control experimental variables and can serve as a versatile tool for fundamental and pharmacological research. Astrocytes are a cell type critical to neuronal function, and the addition of astrocytes to neuron cultures can improve the quality of in vitro assays. Here, we present cellulose as an astrocyte culture substrate. Astrocytes cultured on the cellulose fiber matrix thrived and formed a dense 3D network. We devised a novel co-culture platform by suspending the easy-to-handle astrocytic paper cultures above neuronal networks of low densities typically needed for bottom-up neuroscience. There was significant improvement in neuronal viability after 5 days in vitro at densities ranging from 50,000 cells/cm2 down to isolated cells at 1,000 cells/cm2. Cultures exhibited spontaneous spiking even at the very low densities, with a significantly greater spike frequency per cell compared to control mono-cultures. Applying the co-culture platform to an engineered network of neurons on a patterned substrate resulted in significantly improved viability and almost doubled the density of live cells. Lastly, the shape of the cellulose substrate can easily be customized to a wide range of culture vessels, making the platform versatile for different applications that will further enable research in bottom-up neuroscience and drug development. PMID:29535595
Network reconfiguration and neuronal plasticity in rhythm-generating networks.
Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino
2011-12-01
Neuronal networks are highly plastic and reconfigure in a state-dependent manner. The plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only to maintain network stability but also to adapt networks, over both short and long timescales, to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.
Posttranscriptional control of neuronal development by microRNA networks.
Gao, Fen-Biao
2008-01-01
The proper development of the nervous system requires precise spatial and temporal control of gene expression at both the transcriptional and translational levels. In different experimental model systems, microRNAs (miRNAs) - a class of small, endogenous, noncoding RNAs that control the translation and stability of many mRNAs - are emerging as important regulators of various aspects of neuronal development. Further dissection of the in vivo physiological functions of individual miRNAs promises to offer novel mechanistic insights into the gene regulatory networks that ensure the precise assembly of a functional nervous system.
Rich, Scott; Booth, Victoria; Zochowski, Michal
2016-01-01
The plethora of inhibitory interneurons in the hippocampus and cortex plays a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations given at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances and IF curves that exhibit frequencies arbitrarily close to zero at firing threshold, while Type II PRCs display both phase advances and delays and IF curves that have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies. Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics compared to those in networks of Type I or Type II neurons. To understand these results, we compute neuronal PRCs calculated with a perturbation matching the profile of the synaptic current in our networks. Differences in profiles of these PRCs across the different neuron types reveal mechanisms underlying the divergent network dynamics. PMID:27812323
A novel environmental chamber for neuronal network multisite recordings.
Biffi, E; Regalia, G; Ghezzi, D; De Ceglia, R; Menegon, A; Ferrigno, G; Fiore, G B; Pedrocchi, A
2012-10-01
Environmental stability is a critical issue for neuronal networks in vitro. Hence, the ability to control the physical and chemical environment of cell cultures during electrophysiological measurements is an important requirement in the experimental design. In this work, we describe the development and the experimental verification of a closed chamber for multisite electrophysiology and optical monitoring. The chamber provides stable temperature, pH and humidity and guarantees cell viability comparable to standard incubators. Besides, it integrates the electronics for long-term neuronal activity recording. The system is portable and adaptable for multiple network housings, which allows performing parallel experiments in the same environment. Our results show that this device can be a solution for long-term electrophysiology, for dual network experiments and for coupled optical and electrical measurements. Copyright © 2012 Wiley Periodicals, Inc.
Soft chitosan microbeads scaffold for 3D functional neuronal networks.
Tedesco, Maria Teresa; Di Lisa, Donatella; Massobrio, Paolo; Colistra, Nicolò; Pesce, Mattia; Catelani, Tiziano; Dellacasa, Elena; Raiteri, Roberto; Martinoia, Sergio; Pastorino, Laura
2018-02-01
The availability of 3D biomimetic in vitro neuronal networks of mammalian neurons represents a pivotal step for the development of brain-on-a-chip experimental models to study neuronal (dys)functions and particularly neuronal connectivity. The use of hydrogel-based scaffolds for 3D cell cultures has been extensively studied in the last years. However, limited work on biomimetic 3D neuronal cultures has been carried out to date. In this respect, here we investigated the use of a widely popular polysaccharide, chitosan (CHI), for the fabrication of a microbead based 3D scaffold to be coupled to primary neuronal cells. CHI microbeads were characterized by optical and atomic force microscopies. The cell/scaffold interaction was deeply characterized by transmission electron microscopy and by immunocytochemistry using confocal microscopy. Finally, a preliminary electrophysiological characterization by micro-electrode arrays was carried out. Copyright © 2017 Elsevier Ltd. All rights reserved.
Simulator for neural networks and action potentials.
Baxter, Douglas A; Byrne, John H
2007-01-01
A key challenge for neuroinformatics is to devise methods for representing, accessing, and integrating vast amounts of diverse and complex data. A useful approach to represent and integrate complex data sets is to develop mathematical models [Arbib (The Handbook of Brain Theory and Neural Networks, pp. 741-745, 2003); Arbib and Grethe (Computing the Brain: A Guide to Neuroinformatics, 2001); Ascoli (Computational Neuroanatomy: Principles and Methods, 2002); Bower and Bolouri (Computational Modeling of Genetic and Biochemical Networks, 2001); Hines et al. (J. Comput. Neurosci. 17, 7-11, 2004); Shepherd et al. (Trends Neurosci. 21, 460-468, 1998); Sivakumaran et al. (Bioinformatics 19, 408-415, 2003); Smolen et al. (Neuron 26, 567-580, 2000); Vadigepalli et al. (OMICS 7, 235-252, 2003)]. Models of neural systems provide quantitative and modifiable frameworks for representing data and analyzing neural function. These models can be developed and solved using neurosimulators. One such neurosimulator is simulator for neural networks and action potentials (SNNAP) [Ziv (J. Neurophysiol. 71, 294-308, 1994)]. SNNAP is a versatile and user-friendly tool for developing and simulating models of neurons and neural networks. SNNAP simulates many features of neuronal function, including ionic currents and their modulation by intracellular ions and/or second messengers, and synaptic transmission and synaptic plasticity. SNNAP is written in Java and runs on most computers. Moreover, SNNAP provides a graphical user interface (GUI) and does not require programming skills. This chapter describes several capabilities of SNNAP and illustrates methods for simulating neurons and neural networks. SNNAP is available at http://snnap.uth.tmc.edu .
Brain-Inspired Constructive Learning Algorithms with Evolutionally Additive Nonlinear Neurons
NASA Astrophysics Data System (ADS)
Fang, Le-Heng; Lin, Wei; Luo, Qiang
In this article, inspired in part by physiological evidence of the brain's growth and development, we developed a new type of constructive learning algorithm with evolutionally additive nonlinear neurons. The new algorithms show a remarkable ability to perform effective regression and accurate classification. In particular, they can continue to reduce the loss function when the dynamics of the trained network become bogged down in the vicinity of local minima. The algorithm augments the neural network by adding only a few connections as well as neurons whose activation functions are nonlinear, nonmonotonic, and self-adapted to the dynamics of the loss function. We analytically demonstrate the loss-reduction dynamics of the algorithm for different problems, and further modify the algorithms to obtain improved generalization capability for the augmented neural networks. Finally, by comparing with classical algorithms and architectures for neural network construction, we show that our constructive learning algorithms, as well as their modified versions, achieve better performance, such as faster training and smaller network size, on several representative benchmark datasets including the MNIST dataset of handwritten digits.
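A generic cascade-style illustration of the constructive idea, not the authors' algorithm or their nonmonotonic activation functions: train a small regression network and add a hidden unit whenever the loss plateaus. All hyperparameters and the toy dataset are assumptions.

```python
# Sketch: grow a tiny tanh regression network when training stalls.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 1))
Y = np.sin(3 * X)                              # toy regression target

def forward(params, X):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

hidden, lr, prev_loss = 2, 0.05, np.inf
params = [rng.standard_normal((1, hidden)) * 0.5, np.zeros(hidden),
          rng.standard_normal((hidden, 1)) * 0.5, np.zeros(1)]

for epoch in range(3000):
    H, pred = forward(params, X)
    err = pred - Y
    loss = float(np.mean(err**2))
    # plain backprop for the two-layer net
    W1, b1, W2, b2 = params
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = err @ W2.T * (1 - H**2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    params = [W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2]
    if epoch % 500 == 499:
        if prev_loss - loss < 1e-4:            # plateau: add a hidden neuron
            W1, b1, W2, b2 = params
            W1 = np.hstack([W1, rng.standard_normal((1, 1)) * 0.5])
            b1 = np.append(b1, 0.0)
            W2 = np.vstack([W2, np.zeros((1, 1))])   # new unit starts silent
            params, hidden = [W1, b1, W2, b2], hidden + 1
        prev_loss = loss

print(f"final loss {loss:.4f} with {hidden} hidden neurons")
```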
Frega, Monica; Tedesco, Mariateresa; Massobrio, Paolo; Pesce, Mattia; Martinoia, Sergio
2014-06-30
Despite the extensive use of in-vitro models for neuroscientific investigations and notwithstanding the growing field of network electrophysiology, all studies on cultured cells devoted to elucidating neurophysiological mechanisms and computational properties are based on 2D neuronal networks. These networks are usually grown onto specific rigid substrates (also with embedded electrodes) and lack most of the constituents of the in-vivo-like environment: cell morphology, cell-to-cell interaction and neuritic outgrowth in all directions. Cells in a brain region develop in a 3D space and interact with a complex multi-cellular environment and extracellular matrix. From this perspective, 3D networks coupled to micro-transducer arrays represent a new and powerful in-vitro model capable of better emulating in-vivo physiology. In this work, we present a new experimental paradigm constituted by 3D hippocampal networks coupled to Micro-Electrode Arrays (MEAs), and we show how the features of the recorded network dynamics differ from those of the corresponding 2D network model. Further development of the proposed 3D in-vitro model by adding embedded functionalized scaffolds might open new prospects for manipulating, stimulating and recording neuronal activity to elucidate neurophysiological mechanisms and to design bio-hybrid microsystems.
Choe, Eugenie; Lee, Tae Young; Kim, Minah; Hur, Ji-Won; Yoon, Youngwoo Bryan; Cho, Kang-Ik K; Kwon, Jun Soo
2018-03-26
It has been suggested that the mentalizing network and the mirror neuron system network support important social cognitive processes that are impaired in schizophrenia. However, the integrity and interaction of these two networks have not been sufficiently studied, and their effects on social cognition in schizophrenia remain unclear. Our study included 26 first-episode psychosis (FEP) patients and 26 healthy controls. We utilized resting-state functional connectivity to examine the a priori-defined mirror neuron system network and the mentalizing network and to assess the within- and between-network connectivities of the networks in FEP patients. We also assessed the correlation between resting-state functional connectivity measures and theory of mind performance. FEP patients showed altered within-network connectivity of the mirror neuron system network, and aberrant between-network connectivity between the mirror neuron system network and the mentalizing network. The within-network connectivity of the mirror neuron system network was noticeably correlated with theory of mind task performance in FEP patients. The integrity and interaction of the mirror neuron system network and the mentalizing network may be altered during the early stages of psychosis. Additionally, this study suggests that alterations in the integrity of the mirror neuron system network are highly related to deficient theory of mind in schizophrenia, and this problem would be present from the early stage of psychosis. Copyright © 2018 Elsevier B.V. All rights reserved.
Computational exploration of neuron and neural network models in neurobiology.
Prinz, Astrid A
2007-01-01
The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
Dann, Benjamin; Michaels, Jonathan A; Schaffelhofer, Stefan; Scherberger, Hansjörg
2016-08-15
The functional communication of neurons in cortical networks underlies higher cognitive processes. Yet, little is known about the organization of the single-neuron network or its relationship to the synchronization processes that are essential for its formation. Here, we show that the functional single-neuron network of three fronto-parietal areas during active behavior of macaque monkeys is highly complex. The network was closely connected (small-world) and consisted of functional modules spanning these areas. Surprisingly, the importance of different neurons to the network was highly heterogeneous, with a small number of neurons contributing strongly to the network function (hubs), which were in turn strongly inter-connected (rich-club). Examination of the network synchronization revealed that the identified rich-club consisted of neurons that were synchronized in the beta or low frequency range, whereas other neurons were mostly synchronized in a non-oscillatory manner. Therefore, oscillatory synchrony may be a central communication mechanism for highly organized functional spiking networks. DOI: http://dx.doi.org/10.7554/eLife.15719.001 PMID:27525488
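To make the graph-theoretic terms concrete, the following is a minimal sketch of how small-world statistics and the rich-club coefficient can be computed with networkx; the synthetic Watts-Strogatz graph here is only a stand-in for the recorded functional network, not the paper's data or analysis pipeline.

```python
# Illustrative sketch: small-world statistics and rich-club coefficient of a
# synthetic graph standing in for a functional single-neuron network.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=0)

clustering = nx.average_clustering(G)
path_length = nx.average_shortest_path_length(G)
rich_club = nx.rich_club_coefficient(G, normalized=False)  # dict: degree -> phi(k)

print(f"average clustering coefficient: {clustering:.3f}")
print(f"average shortest path length:   {path_length:.2f}")
print(f"rich-club coefficient at k>=7:  {rich_club.get(7, float('nan')):.3f}")
```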
[Extinction and Reconsolidation of Memory].
Zuzina, A B; Balaban, P M
2015-01-01
Retrieval of memory followed by reconsolidation can strengthen a memory, while retrieval followed by extinction results in a decrease of memory performance due to weakening of the existing memory or formation of a competing memory. In our study we analyzed the behavior and responses of identified neurons involved in the network underlying aversive learning in the terrestrial snail Helix, and attempted to describe the conditions in which the retrieval of memory leads either to extinction or to reconsolidation. In the network underlying the withdrawal behavior, the sensory neurons, premotor interneurons, motor neurons, and the serotonergic neurons that modulate this network have been identified, and recordings from representatives of these groups were made before and after aversive learning. In the network underlying feeding behavior, the premotor modulatory serotonergic interneurons and the motor neurons involved in the motor program of feeding have been identified. Analysis of changes in neural activity after aversive learning showed that the modulatory neurons of feeding behavior do not demonstrate any changes (sometimes a decrease of responses to food was observed), while responses to food in the premotor interneurons of withdrawal behavior changed qualitatively, from subthreshold EPSPs to spike discharges. Using 5,7-DiHT, a neurotoxin specific for serotonergic neurons, it was shown previously that the serotonergic system is necessary for aversive learning, but is not necessary for maintenance and retrieval of this memory. These results suggest that the serotonergic neurons that are necessary as part of the reinforcement for developing the associative changes in the network may not be necessary for the retrieval of memory. The hypothesis presented in this review concerns the activity of the "reinforcement" serotonergic neurons, which is suggested to gate the choice between extinction and reconsolidation triggered by memory retrieval: if these serotonergic neurons do not respond during retrieval, due to adaptation, habituation, changes in environment, etc., extinction is observed; if these neurons respond to the CS during memory retrieval, the reconsolidation phenomenon is observed.
Emergence of Adaptive Computation by Single Neurons in the Developing Cortex
Famulare, Michael; Gjorgjieva, Julijana; Moody, William J.
2013-01-01
Adaptation is a fundamental computational motif in neural processing. To maintain stable perception in the face of rapidly shifting input, neural systems must extract relevant information from background fluctuations under many different contexts. Many neural systems are able to adjust their input–output properties such that an input's ability to trigger a response depends on the size of that input relative to its local statistical context. This “gain-scaling” strategy has been shown to be an efficient coding strategy. We report here that this property emerges during early development as an intrinsic property of single neurons in mouse sensorimotor cortex, coinciding with the disappearance of spontaneous waves of network activity, and can be modulated by changing the balance of spike-generating currents. Simultaneously, developing neurons move toward a common intrinsic operating point and a stable ratio of spike-generating currents. This developmental trajectory occurs in the absence of sensory input or spontaneous network activity. Through a combination of electrophysiology and modeling, we demonstrate that developing cortical neurons develop the ability to perform nearly perfect gain scaling by virtue of the maturing spike-generating currents alone. We use reduced single neuron models to identify the conditions for this property to hold. PMID:23884925
Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.
Li, Xiumin; Small, Michael
2012-06-01
Neuronal avalanches are spontaneous neuronal activity that obeys a power-law distribution of population event sizes with an exponent of -3/2. They have been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network for appropriate input intensity. We find that the process of network learning via spike-timing-dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
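As a hedged illustration of how such avalanche statistics are typically quantified (this is not the procedure used in the paper; the time binning, the synthetic spike counts and the s_min cutoff are assumptions), avalanches can be read off a binned population spike count and their size exponent estimated by maximum likelihood. Because the data below are synthetic Poisson counts, the recovered exponent will not be -3/2 here.

```python
# Illustrative sketch: detect "avalanches" as runs of consecutive non-empty
# time bins in a population spike count and estimate the power-law exponent
# of their size distribution with a standard continuous-MLE approximation.
import numpy as np

rng = np.random.default_rng(8)
counts = rng.poisson(0.6, size=20000)   # placeholder population spike counts per bin

sizes, current = [], 0
for c in counts:
    if c > 0:
        current += c                     # avalanche grows while bins stay active
    elif current > 0:
        sizes.append(current)            # an empty bin ends the avalanche
        current = 0
sizes = np.array(sizes)

s_min = 2
s = sizes[sizes >= s_min].astype(float)
alpha = 1.0 + s.size / np.sum(np.log(s / (s_min - 0.5)))   # Clauset-style estimator
print(f"{s.size} avalanches, estimated size exponent ~ -{alpha:.2f}")
```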
Yao, Zepeng; Bennett, Amelia J; Clem, Jenna L; Shafer, Orie T
2016-12-13
In animals, networks of clock neurons containing molecular clocks orchestrate daily rhythms in physiology and behavior. However, how various types of clock neurons communicate and coordinate with one another to produce coherent circadian rhythms is not well understood. Here, we investigate clock neuron coupling in the brain of Drosophila and demonstrate that the fly's various groups of clock neurons display unique and complex coupling relationships to core pacemaker neurons. Furthermore, we find that coordinated free-running rhythms require molecular clock synchrony not only within the well-characterized lateral clock neuron classes but also between lateral clock neurons and dorsal clock neurons. These results uncover unexpected patterns of coupling in the clock neuron network and reveal that robust free-running behavioral rhythms require a coherence of molecular oscillations across most of the fly's clock neuron network. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Honegger, Thibault; Thielen, Moritz I; Feizi, Soheil; Sanjana, Neville E; Voldman, Joel
2016-06-22
The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined, topologically complex neuronal networks. Several three-dimensional patterning approaches have been developed, but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in unmodified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report for the first time in vitro basic brain motifs that have previously been observed in vivo, and show that their functional network is highly decorrelated from their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and a minimalistic environment to study the structure-function relationship of the brain circuitry.
Field coupling-induced pattern formation in two-layer neuronal network
NASA Astrophysics Data System (ADS)
Qin, Huixin; Wang, Chunni; Cai, Ning; An, Xinlei; Alzahrani, Faris
2018-07-01
The exchange of charged ions across the membrane can generate fluctuations of the membrane potential and also complex electromagnetic induction effects. Diversity in the excitability of neurons induces different mode selection and dynamical responses to external stimuli. Based on a neuron model with electromagnetic induction, described by magnetic flux and a memristor, a two-layer network is proposed to discuss pattern control and wave propagation in the network. In each layer, gap-junction coupling is applied to connect the neurons, while field coupling is considered between the two layers of the network. The field coupling is implemented via coupling of the magnetic flux, which is associated with the distribution of the electromagnetic field. It is found that an appropriate intensity of field coupling can enhance wave propagation from one layer to the other, and beautiful spatial patterns are formed. The target wave developed in the second layer shows some differences from the target wave triggered in the first layer of the network when the two layers have different excitabilities. The potential mechanism could be that pacemaker-like driving from the first layer is encoded by the second layer.
Ito, Hidekatsu; Minoshima, Wataru; Kudoh, Suguru N
2015-08-01
To investigate relationships between neuronal network activity and electrical stimuli, we analyzed autonomous activity before and after electrical stimulation. Recordings of autonomous activity were performed using dissociated cultures of rat hippocampal neurons on a multi-electrode array (MEA) dish. Single and paired stimuli were applied to a cultured neuronal network: a single stimulus was applied every 1 min, and paired stimulation consisted of two sequential stimuli every 1 min. As a result, the patterns of synchronized activity of the neuronal network changed after stimulation. In particular, long-range synchronous activities were induced by paired stimuli. When paired stimuli with inter-stimulus intervals (ISIs) of 1 s and 1.5 s were applied to a neuronal network, relatively long-range synchronous activities appeared in the 1.5 s ISI case. The temporal synchronous activity of a neuronal network thus changes according to the ISI of the electrical stimulus. In other words, a dissociated neuronal network can maintain given information in a temporal pattern, and a certain type of information-maintenance mechanism appears to be implemented in a semi-artificial dissociated neuronal network. These results are useful for developing technology to manipulate neuronal activity in brain systems.
ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing
Rusakov, Dmitri A.; Savtchenko, Leonid P.
2017-01-01
Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at the GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877
Molecular model of cannabis sensitivity in developing neuronal circuits
Keimpema, Erik; Mackie, Ken; Harkany, Tibor
2011-01-01
Prenatal cannabis exposure can complicate in utero development of the nervous system. Cannabis impacts the formation and functions of neuronal circuitries by targeting cannabinoid receptors. Endocannabinoid signaling emerges as a signaling cassette to orchestrate neuronal differentiation programs through the precisely timed interaction of endocannabinoid ligands with their cognate cannabinoid receptors. By indiscriminately prolonging the ‘switched-on’ period of cannabinoid receptors, cannabis can hijack endocannabinoid signals to evoke molecular rearrangements, leading to the erroneous wiring of neuronal networks. Here, we formulate a hierarchical network design necessary and sufficient to describe molecular underpinnings of cannabis-induced neural growth defects. We integrate signalosome components deduced from genome- and proteome-wide arrays and candidate analyses to propose a mechanistic hypothesis on how cannabis-induced ectopic cannabinoid receptor activity overrides physiological neurodevelopmental endocannabinoid signals, affecting the timely formation of synapses. PMID:21757242
Observing complex action sequences: The role of the fronto-parietal mirror neuron system.
Molnar-Szakacs, Istvan; Kaplan, Jonas; Greenfield, Patricia M; Iacoboni, Marco
2006-11-15
A fronto-parietal mirror neuron network in the human brain supports the ability to represent and understand observed actions allowing us to successfully interact with others and our environment. Using functional magnetic resonance imaging (fMRI), we wanted to investigate the response of this network in adults during observation of hierarchically organized action sequences of varying complexity that emerge at different developmental stages. We hypothesized that fronto-parietal systems may play a role in coding the hierarchical structure of object-directed actions. The observation of all action sequences recruited a common bilateral network including the fronto-parietal mirror neuron system and occipito-temporal visual motion areas. Activity in mirror neuron areas varied according to the motoric complexity of the observed actions, but not according to the developmental sequence of action structures, possibly due to the fact that our subjects were all adults. These results suggest that the mirror neuron system provides a fairly accurate simulation process of observed actions, mimicking internally the level of motoric complexity. We also discuss the results in terms of the links between mirror neurons, language development and evolution.
Barrows, Caitlynn M; McCabe, Matthew P; Chen, Hongmei; Swann, John W; Weston, Matthew C
2017-09-06
Changes in synaptic strength and connectivity are thought to be a major mechanism through which many gene variants cause neurological disease. Hyperactivation of the PI3K-mTOR signaling network, via loss of function of repressors such as PTEN, causes epilepsy in humans and animal models, and altered mTOR signaling may contribute to a broad range of neurological diseases. Changes in synaptic transmission have been reported in animal models of PTEN loss; however, the full extent of these changes, and their effect on network function, is still unknown. To better understand the scope of these changes, we recorded from pairs of mouse hippocampal neurons cultured in a two-neuron microcircuit configuration that allowed us to characterize all four major connection types within the hippocampus. Loss of PTEN caused changes in excitatory and inhibitory connectivity, and these changes were postsynaptic, presynaptic, and trans-synaptic, suggesting that disruption of PTEN has the potential to affect most connection types in the hippocampal circuit. Given the complexity of the changes at the synaptic level, we measured changes in network behavior after deleting Pten from neurons in an organotypic hippocampal slice network. Slices containing Pten-deleted neurons showed increased recruitment of neurons into network bursts. Importantly, these changes were not confined to Pten-deleted neurons, but involved the entire network, suggesting that the extensive changes in synaptic connectivity rewire the entire network in such a way that promotes a widespread increase in functional connectivity. SIGNIFICANCE STATEMENT Homozygous deletion of the Pten gene in neuronal subpopulations in the mouse serves as a valuable model of epilepsy caused by mTOR hyperactivation. To better understand how gene deletions lead to altered neuronal activity, we investigated the synaptic and network effects that occur 1 week after Pten deletion. PTEN loss increased the connectivity of all four types of hippocampal synaptic connections, including two forms of increased inhibition of inhibition, and increased network functional connectivity. These data suggest that single gene mutations that cause neurological diseases such as epilepsy may affect a surprising range of connection types. Moreover, given the robustness of homeostatic plasticity, these diverse effects on connection types may be necessary to cause network phenotypes such as increased synchrony. Copyright © 2017 the authors. PMID:28751459
Weick, Jason P.; Liu, Yan; Zhang, Su-Chun
2011-01-01
Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298
Cultured Neuronal Networks Express Complex Patterns of Activity and Morphological Memory
NASA Astrophysics Data System (ADS)
Raichman, Nadav; Rubinsky, Liel; Shein, Mark; Baruchi, Itay; Volman, Vladislav; Ben-Jacob, Eshel
The following sections are included: Cultured Neuronal Networks; Recording the Network Activity; Network Engineering; The Formation of Synchronized Bursting Events; The Characterization of the SBEs; Highly-Active Neurons; Function-Form Relations in Cultured Networks; Analyzing the SBEs Motifs; Network Repertoire; Network under Hypothermia; Summary; Acknowledgments; References.
Biocytin-Derived MRI Contrast Agent for Longitudinal Brain Connectivity Studies
2011-01-01
To investigate the connectivity of brain networks noninvasively and dynamically, we have developed a new strategy to functionalize neuronal tracers and designed a biocompatible probe that can be visualized in vivo using magnetic resonance imaging (MRI). Furthermore, the multimodal design used allows combined ex vivo studies with microscopic spatial resolution by conventional histochemical techniques. We present data on the functionalization of biocytin, a well-known neuronal tract tracer, and demonstrate the validity of the approach by showing brain networks of cortical connectivity in live rats under MRI, together with the corresponding microscopic details, such as fibers and neuronal morphology under light microscopy. We further demonstrate that the developed molecule is the first MRI-visible probe to preferentially trace retrograde connections. Our study offers a new platform for the development of multimodal molecular imaging tools of broad interest in neuroscience, that capture in vivo the dynamics of large scale neural networks together with their microscopic characteristics, thereby spanning several organizational levels. PMID:22860157
Gunhanlar, N; Shpak, G; van der Kroeg, M; Gouty-Colomer, L A; Munshi, S T; Lendemeijer, B; Ghazvini, M; Dupont, C; Hoogendijk, W J G; Gribnau, J; de Vrij, F M S; Kushner, S A
2018-05-01
Progress in elucidating the molecular and cellular pathophysiology of neuropsychiatric disorders has been hindered by the limited availability of living human brain tissue. The emergence of induced pluripotent stem cells (iPSCs) has offered a unique alternative strategy using patient-derived functional neuronal networks. However, methods for reliably generating iPSC-derived neurons with mature electrophysiological characteristics have been difficult to develop. Here, we report a simplified differentiation protocol that yields electrophysiologically mature iPSC-derived cortical lineage neuronal networks without the need for astrocyte co-culture or specialized media. This protocol generates a consistent 60:40 ratio of neurons and astrocytes that arise from a common forebrain neural progenitor. Whole-cell patch-clamp recordings of 114 neurons derived from three independent iPSC lines confirmed their electrophysiological maturity, including resting membrane potential (-58.2±1.0 mV), capacitance (49.1±2.9 pF), action potential (AP) threshold (-50.9±0.5 mV) and AP amplitude (66.5±1.3 mV). Nearly 100% of neurons were capable of firing APs, of which 79% had sustained trains of mature APs with minimal accommodation (peak AP frequency: 11.9±0.5 Hz) and 74% exhibited spontaneous synaptic activity (amplitude, 16.03±0.82 pA; frequency, 1.09±0.17 Hz). We expect this protocol to be of broad applicability for implementing iPSC-based neuronal network models of neuropsychiatric disorders.
Jang, Min Jee; Nam, Yoonkey
2015-01-01
Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973
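For readers unfamiliar with such pipelines, the following is a toy sketch of the most basic extraction step: computing ΔF/F and detecting threshold-crossing events on a synthetic fluorescence trace. It is an assumption-laden stand-in (percentile baseline, fixed threshold, fabricated trace), not NeuroCa's actual algorithms.

```python
# Toy sketch of calcium event detection from a single fluorescence trace.
import numpy as np

def delta_f_over_f(trace, baseline_percentile=10):
    f0 = np.percentile(trace, baseline_percentile)   # crude baseline estimate
    return (trace - f0) / f0

def detect_events(dff, threshold=0.3):
    above = dff > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1   # rising-edge onsets

rng = np.random.default_rng(1)
t = np.arange(0.0, 60.0, 0.1)                  # 10 Hz imaging, 60 s
trace = 100.0 + 5.0 * rng.normal(size=t.size)  # baseline fluorescence + noise
for onset in (50, 200, 400):                   # three synthetic calcium transients
    trace[onset:] += 40.0 * np.exp(-(t[onset:] - t[onset]) / 1.5)

events = detect_events(delta_f_over_f(trace))
print("detected event onset frames:", events)
```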
Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...
2013-01-01
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
Numerical simulation of coherent resonance in a model network of Rulkov neurons
NASA Astrophysics Data System (ADS)
Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.
2018-04-01
In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, and then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
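For reference, the Rulkov map is a two-variable difference-equation neuron; the sketch below simulates a small random network of such maps driven by noise. The coupling rule and every parameter value here are illustrative assumptions, not the configuration studied in the paper.

```python
# Illustrative sketch: a random network of Rulkov map neurons driven by noise.
import numpy as np

rng = np.random.default_rng(2)
N, steps = 50, 5000
alpha, mu, sigma = 4.3, 0.001, 0.1           # Rulkov map parameters (illustrative)
g, noise_level = 0.05, 0.02                  # coupling strength, noise amplitude

A = (rng.random((N, N)) < 0.1).astype(float) # random directed adjacency, p = 0.1
np.fill_diagonal(A, 0.0)

x = rng.uniform(-1.5, -0.5, N)               # fast, membrane-like variable
y = np.full(N, -2.9)                         # slow variable
spikes = np.zeros((steps, N), dtype=bool)

for n in range(steps):
    spk = x > 0.0                            # crude spike indicator
    spikes[n] = spk
    I = g * (A @ spk.astype(float)) + noise_level * rng.normal(size=N)
    # Rulkov map update: x_{n+1} = alpha/(1+x_n^2) + y_n + I,
    #                    y_{n+1} = y_n - mu*(x_n + 1) + mu*sigma
    x, y = alpha / (1.0 + x ** 2) + y + I, y - mu * (x + 1.0) + mu * sigma

print("fraction of neuron-steps with a spike:", round(float(spikes.mean()), 4))
```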
Perspectives for computational modeling of cell replacement for neurological disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aimone, James B.; Weick, Jason P.
Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.
Born, Jannis; Galeazzi, Juan M; Stringer, Simon M
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centred neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centred visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet. PMID:28562618
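To illustrate the core intuition of CT-style learning, the sketch below shows a single output unit learning with a purely Hebbian rule (plus weight normalization) from a stimulus that drifts in small steps, so that successive inputs overlap spatially. Everything here (the bar stimulus, learning rate, normalization, single output) is an illustrative assumption; it is not the VisNet model itself.

```python
# Minimal sketch: a Hebbian unit generalizes across overlapping shifted inputs
# without any memory trace in the learning rule.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_steps, lr = 100, 200, 0.1

def shifted_bar(pos, width=20):
    x = np.zeros(n_in)
    x[pos:pos + width] = 1.0          # a "bar" of active inputs at position pos
    return x

w = rng.random(n_in)
w /= np.linalg.norm(w)

# Drift the bar in small steps so successive inputs overlap heavily.
for step in range(n_steps):
    x = shifted_bar(step % 60)
    y = w @ x                         # output activation
    w += lr * y * x                   # purely Hebbian update (no memory trace)
    w /= np.linalg.norm(w)            # normalization keeps the weights bounded

# The trained unit now responds at many positions it has experienced.
responses = [round(float(w @ shifted_bar(p)), 2) for p in range(0, 60, 10)]
print("responses at positions 0,10,...,50:", responses)
```

In the full model, competition among many output neurons and training over multiple transform sequences make the resulting receptive fields selective; the sketch only shows how spatial overlap lets a Hebbian weight vector bridge neighbouring positions.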
Suzuki, Ikurou; Sugio, Yoshihiro; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Yasuda, Kenji
2004-07-01
Control over the spatial distribution of individual neurons and the pattern of a neural network provides an important tool for studying information processing pathways during neural network formation. Moreover, knowledge of the direction of synaptic connections between cells in each neural network can provide detailed information on the relationship between forward and feedback signaling. We have developed a method for topographical control of the direction of synaptic connections within a living neuronal network using a new type of individual-cell-based on-chip cell-cultivation system with an agarose microchamber array (AMCA). The advantages of this system include the ability to control the positions and number of cultured cells as well as flexible control of the direction of elongation of axons through stepwise melting of narrow grooves. Such micrometer-order microchannels are obtained by photo-thermal etching of agarose, where a portion of the gel is melted with a 1064-nm infrared laser beam. Using this system, we created neural networks from individual rat hippocampal cells. We were able to control the elongation of individual axons during cultivation (from cells contained within the AMCA) by non-destructive stepwise photo-thermal etching. We have demonstrated the potential of our on-chip AMCA cell-cultivation system for the controlled development of individual-cell-based neural networks.
Forecasting PM10 in Algiers: efficacy of multilayer perceptron networks.
Abderrahim, Hamza; Chellali, Mohammed Reda; Hamou, Ahmed
2016-01-01
Air quality forecasting has acquired high importance because of the negative impacts of atmospheric pollution on the environment and human health. The artificial neural network is one of the most common soft computing methods that can be applied to such complex problems. In this paper, we used a multilayer perceptron neural network to forecast the daily averaged concentration of respirable suspended particulates with an aerodynamic diameter of not more than 10 μm (PM10) in Algiers, Algeria. The data for training and testing the network are based on data sampled from 2002 to 2006 collected by the SAMASAFIA network center at the El Hamma station. The meteorological data, air temperature, relative humidity, and wind speed, are used as network input parameters in the formulation of the model. The training patterns used correspond to 41 days of data. The performance of the developed models was evaluated on the basis of the index of agreement and other statistical parameters. The overall performance of the model with 15 hidden neurons was better than that of the models with 5 and 10 neurons; a multilayer network with as few as one hidden layer of 15 neurons gave quite reasonable results. Finally, an error of around 9% was reached.
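A model of the kind described (one hidden layer of 15 neurons mapping temperature, humidity and wind speed to PM10) could be prototyped as follows. The scikit-learn setup and the randomly generated placeholder data are assumptions for illustration only, not the SAMASAFIA data or the authors' implementation.

```python
# Hypothetical sketch of a one-hidden-layer MLP for daily PM10 forecasting.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(41, 3))                     # 41 days x [temp, humidity, wind]
y = 50 + 10 * X[:, 0] - 5 * X[:, 2] + rng.normal(scale=3, size=41)  # fake PM10

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

pred = model.predict(scaler.transform(X))
print("mean absolute percentage error: %.1f%%"
      % (100 * np.mean(np.abs((y - pred) / y))))
```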
Billard, J-M
2008-10-01
Far from their initial image as passive supportive cells of the CNS, astrocytes are now considered active partners at synapses, able to release a set of gliotransmitter-like substances to modulate synaptic communication within neuronal networks. Whereas glutamate and ATP were first regarded as the main determinants of gliotransmission, growing evidence now indicates that the amino acid D-serine is another important player in the neuronal-glial dialogue. Through the regulation of glutamatergic neurotransmission via both N-methyl-D-aspartate receptors (NMDA-R) and non-NMDA-R, D-serine helps model the appropriate connections in the developing brain and influences functional plasticity within neuronal networks throughout the lifespan. The understanding of D-serine signalling, which has increased steadily in the last few years, gives new insights into the critical role of impaired neuronal-glial communication in the diseased brain, and offers new opportunities for developing relevant strategies to treat cognitive deficits associated with brain disorders. PMID:18363840
Xu, Jin-Chong; Fan, Jing; Wang, Xueqing; Eacker, Stephen M.; Kam, Tae-In; Chen, Li; Yin, Xiling; Zhu, Juehua; Chi, Zhikai; Jiang, Haisong; Chen, Rong; Dawson, Ted M.; Dawson, Valina L.
2017-01-01
Translating neuroprotective treatments from discovery in cell and animal models to the clinic has proven challenging. To reduce the gap between basic studies of neurotoxicity and neuroprotection and clinically relevant therapies, we developed a human cortical neuron culture system from human embryonic stem cells (ESCs) or induced pluripotent stem cells (iPSCs) that generated both excitatory and inhibitory neuronal networks resembling the composition of the human cortex. This methodology used timed administration of retinoic acid (RA) to FOXG1 neural precursor cells, leading to differentiation of neuronal populations representative of the six cortical layers with both excitatory and inhibitory neuronal networks that were functional and homeostatically stable. In human cortical neuron cultures, excitotoxicity or ischemia due to oxygen and glucose deprivation led to cell death that was dependent on N-methyl-D-aspartate (NMDA) receptors, nitric oxide (NO), and poly(ADP-ribose) polymerase (PARP), a cell death pathway designated parthanatos to separate it from apoptosis, necroptosis and other forms of cell death. Neuronal cell death was attenuated by PARP inhibitors that are currently in clinical trials for cancer treatment. This culture system provides a new platform for the study of human cortical neurotoxicity and suggests that PARP inhibitors may be useful for ameliorating excitotoxic and ischemic cell death in human neurons. PMID:27053772
Noise focusing and the emergence of coherent activity in neuronal cultures
NASA Astrophysics Data System (ADS)
Orlandi, Javier G.; Soriano, Jordi; Alvarez-Lacalle, Enrique; Teller, Sara; Casademunt, Jaume
2013-09-01
At early stages of development, neuronal cultures in vitro spontaneously reach a coherent state of collective firing in a pattern of nearly periodic global bursts. Although understanding the spontaneous activity of neuronal networks is of chief importance in neuroscience, the origin and nature of that pulsation has remained elusive. By combining high-resolution calcium imaging with modelling in silico, we show that this behaviour is controlled by the propagation of waves that nucleate randomly in a set of points that is specific to each culture and is selected by a non-trivial interplay between dynamics and topology. The phenomenon is explained by the noise focusing effect--a strong spatio-temporal localization of the noise dynamics that originates in the complex structure of avalanches of spontaneous activity. Results are relevant to neuronal tissues and to complex networks with integrate-and-fire dynamics and metric correlations, for instance, in rumour spreading on social networks.
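The basic ingredients of such models (noise-driven integrate-and-fire units embedded in a metric, distance-dependent network) can be sketched as follows; the connectivity kernel, parameter values, and the crude population-event readout are illustrative assumptions rather than the model studied in the paper.

```python
# Illustrative sketch: noise-driven LIF neurons in a 2-D "culture" with
# distance-dependent connectivity, showing spontaneous population events.
import numpy as np

rng = np.random.default_rng(5)
N, steps, dt = 300, 2000, 1.0                  # neurons, time steps, step (ms)
pos = rng.uniform(0.0, 1.0, size=(N, 2))       # positions in a unit square
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
W = (rng.random((N, N)) < np.exp(-d / 0.15)) * 0.12   # metric connectivity
np.fill_diagonal(W, 0.0)

v = np.zeros(N)                                # membrane potentials
tau, v_th = 20.0, 1.0                          # membrane time constant (ms), threshold
pop_events = np.zeros(steps)

for t in range(steps):
    spk = v >= v_th
    v[spk] = 0.0                               # reset after a spike
    noise = 0.08 * rng.normal(size=N)          # spontaneous (noise) drive
    v += dt / tau * (-v) + noise + W @ spk     # leak + noise + recurrent input
    pop_events[t] = spk.sum()

print("largest population event:", int(pop_events.max()), "neurons in one step")
```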
Identification of Neurodegenerative Factors Using Translatome-Regulatory Network Analysis
Brichta, Lars; Shin, William; Jackson-Lewis, Vernice; Blesa, Javier; Yap, Ee-Lynn; Walker, Zachary; Zhang, Jack; Roussarie, Jean-Pierre; Alvarez, Mariano J.; Califano, Andrea; Przedborski, Serge; Greengard, Paul
2016-01-01
For degenerative disorders of the central nervous system, the major obstacle to therapeutic advancement has been the challenge of identifying the key molecular mechanisms underlying neuronal loss. We developed a combinatorial approach including translational profiling and brain regulatory network analysis to search for key determinants of neuronal survival or death. Following the generation of transgenic mice for cell type-specific profiling of midbrain dopaminergic neurons, we established and compared translatome libraries reflecting the molecular signature of these cells at baseline or under degenerative stress. Analysis of these libraries by interrogating a context-specific brain regulatory network led to the identification of a repertoire of intrinsic upstream regulators that drive the dopaminergic stress response. The altered activity of these regulators was not associated with changes in their expression levels. This strategy can be generalized for the elucidation of novel molecular determinants involved in the degeneration of other classes of neurons. PMID:26214373
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging
Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.
2017-01-01
Background: Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s–1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. New method: Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results: We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. Comparison with existing method(s): We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions: We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
NASA Astrophysics Data System (ADS)
Paine, Gregory Harold
1982-03-01
The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. Discussions are given as to how statistical neurodynamics can be used to gain a better understanding of the behavior of these systems.
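For orientation, the maximum-entropy step described above has the familiar generic form: maximizing the entropy of the distribution subject to normalization and to a fixed average of a conserved quantity H yields a Boltzmann-like distribution. The notation below is generic and illustrative, not the thesis's own.

```latex
% Generic maximum-entropy construction with one conserved quantity H(x).
\begin{aligned}
&\max_{P}\; S[P] = -\int P(\mathbf{x})\,\ln P(\mathbf{x})\,d\mathbf{x}
\quad\text{subject to}\quad
\int P\,d\mathbf{x} = 1,\;\;
\int P\,H(\mathbf{x})\,d\mathbf{x} = \bar{H} \\[4pt]
&\Longrightarrow\quad
P(\mathbf{x}) = \frac{e^{-\lambda H(\mathbf{x})}}{Z(\lambda)},
\qquad
Z(\lambda) = \int e^{-\lambda H(\mathbf{x})}\,d\mathbf{x}.
\end{aligned}
```

Here λ is the Lagrange multiplier fixed by the constraint on the mean of H; in the setting described in the abstract, the constrained quantity is the constant of motion obtained from the Lagrange function of the linearized network.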
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
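As a very rough, assumption-heavy stand-in for the idea (this is not the hybridLFPy API nor its biophysically detailed kernels), an LFP-like signal can be predicted by convolving population spike counts from point-neuron populations with fixed temporal kernels of population-specific amplitude:

```python
# Toy sketch: kernel-based LFP proxy from population spike counts.
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.001, 2.0                          # time step and duration (s)
steps = int(T / dt)
amps = {"exc": 0.8, "inh": -1.2}            # per-population kernel amplitudes
rates = {"exc": 5.0, "inh": 10.0}           # Hz per neuron (placeholder values)
n_per_pop = 400

t_kernel = np.arange(0.0, 0.05, dt)
lfp = np.zeros(steps)
for name, amp in amps.items():
    # Population spike count per bin (Poisson stand-in for a point-neuron net).
    counts = rng.poisson(rates[name] * n_per_pop * dt, size=steps)
    kernel = amp * (t_kernel / 0.005) * np.exp(1 - t_kernel / 0.005)  # alpha shape
    lfp += np.convolve(counts, kernel)[:steps]

print("first simulated LFP samples:", np.round(lfp[:5], 3))
```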
The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics
Buice, Michael; Koch, Christof; Mihalas, Stefan
2013-01-01
The manner in which different distributions of synaptic weights onto cortical neurons shape their spiking activity remains an open question. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even if synaptic amplitude distributions are equated for both the total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations. PMID:24204219
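A toy version of the comparison described above (a single leaky integrate-and-fire neuron with shot-noise input; all parameters are my own illustrative choices, not the paper's master-equation machinery) shows why sparse-but-strong weights can produce higher responses at matched mean drive: the larger jumps create larger voltage fluctuations for the same average input.

```python
import numpy as np

def lif_shot_noise(rate_hz, weight_mV, T=200.0, dt=0.1, tau_m=20.0,
                   v_th=20.0, v_reset=0.0, seed=1):
    """Count spikes of an LIF neuron driven by excitatory Poisson shot noise.
    Each presynaptic event instantaneously jumps the membrane by weight_mV."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    p_event = rate_hz * dt / 1000.0          # expected events per time step
    for _ in range(int(T / dt)):
        v += dt * (-v / tau_m)               # leak toward 0 mV
        v += weight_mV * rng.poisson(p_event)
        if v >= v_th:
            v, spikes = v_reset, spikes + 1
    return spikes

# Matched mean drive: rate * weight is identical in both regimes (assumed values).
dense_weak    = lif_shot_noise(rate_hz=8000.0, weight_mV=0.1)
sparse_strong = lif_shot_noise(rate_hz=160.0, weight_mV=5.0)
print("dense/weak spikes:", dense_weak, " sparse/strong spikes:", sparse_strong)
```

With these toy numbers the mean subthreshold voltage is the same in both regimes, but only the sparse-and-strong regime fluctuates enough to cross threshold regularly, in line with the "higher responses for small inputs" reported above.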
Shaping Neuronal Network Activity by Presynaptic Mechanisms
Ashery, Uri
2015-01-01
Neuronal microcircuits generate oscillatory activity, which has been linked to basic functions such as sleep, learning and sensorimotor gating. Although synaptic release processes are well known for their ability to shape the interaction between neurons in microcircuits, most computational models do not simulate the synaptic transmission process directly and hence cannot explain how changes in synaptic parameters alter neuronal network activity. In this paper, we present a novel neuronal network model that incorporates presynaptic release mechanisms, such as vesicle pool dynamics and calcium-dependent release probability, to model the spontaneous activity of neuronal networks. The model, which is based on modified leaky integrate-and-fire neurons, generates spontaneous network activity patterns, which are similar to experimental data and robust under changes in the model's primary gain parameters such as excitatory postsynaptic potential and connectivity ratio. Furthermore, it reliably recreates experimental findings and provides mechanistic explanations for data obtained from microelectrode array recordings, such as network burst termination and the effects of pharmacological and genetic manipulations. The model demonstrates how elevated asynchronous release, but not spontaneous release, synchronizes neuronal network activity and reveals that asynchronous release enhances utilization of the recycling vesicle pool to induce the network effect. The model further predicts a positive correlation between vesicle priming at the single-neuron level and burst frequency at the network level; this prediction is supported by experimental findings. Thus, the model is utilized to reveal how synaptic release processes at the neuronal level govern activity patterns and synchronization at the network level. PMID:26372048
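A minimal sketch of the presynaptic ingredients named above (vesicle pool dynamics plus a calcium-dependent release probability) is a Tsodyks-Markram-style update applied at each presynaptic spike. The constants and the residual-calcium proxy below are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def synaptic_output(spike_times_ms, u_base=0.2, ca_gain=0.3,
                    tau_rec=800.0, tau_ca=100.0):
    """Per-spike synaptic output of a depressing synapse.
    R: fraction of the releasable (recycling) vesicle pool still available.
    ca: residual-calcium proxy that transiently raises release probability."""
    R, ca, last_t, out = 1.0, 0.0, None, []
    for t in spike_times_ms:
        if last_t is not None:
            dt = t - last_t
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_rec)   # pool recovery
            ca *= np.exp(-dt / tau_ca)                     # calcium decay
        u = min(1.0, u_base + ca_gain * ca)                # release probability
        out.append(u * R)                                  # released resource
        R -= u * R                                         # pool depletion
        ca += 1.0                                          # calcium influx per spike
        last_t = t
    return out

burst = list(np.arange(0.0, 100.0, 10.0))   # 10 spikes at 100 Hz
print([round(x, 3) for x in synaptic_output(burst)])
```

Even this caricature reproduces the qualitative interplay the paper exploits: release probability transiently facilitates with residual calcium while the releasable pool depletes, so sustained bursts terminate as the pool runs down and recover only on the slower recycling timescale.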
Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics
2016-02-29
Performance/Technical report, 02-01-2016 to 02-29-2016. ...neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this approach, time step can be made... We propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for studies of large-scale neuronal...
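Since only fragments of this report's abstract survive, here is a hedged stand-in for the model class it names (neurons written as difference equations iterated in discrete time): the widely used two-dimensional Rulkov map. The parameters are typical published values for that map, not values taken from the report.

```python
import numpy as np

def rulkov(alpha=4.5, sigma=0.0, mu=0.001, n_steps=5000, x0=-1.0, y0=-2.9):
    """Iterate the two-dimensional Rulkov map (fast variable x, slow variable y)."""
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]     # fast spiking subsystem
        y[n + 1] = y[n] - mu * (x[n] + 1.0 - sigma)     # slow modulation
    return x, y

x, _ = rulkov()
print("spikes (upward threshold crossings of x):",
      int(np.sum((x[1:] > 0.0) & (x[:-1] <= 0.0))))
```

Because the whole state update is a map, the effective "time step" is a design choice rather than a numerical-stability constraint, which is the appeal of discrete-time neuron models for large-scale simulations.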
Regulatory Mechanisms Controlling Maturation of Serotonin Neuron Identity and Function
Spencer, William C.; Deneris, Evan S.
2017-01-01
The brain serotonin (5-hydroxytryptamine; 5-HT) system has been extensively studied for its role in normal physiology and behavior, as well as neuropsychiatric disorders. The broad influence of 5-HT on brain function is in part due to the vast connectivity pattern of 5-HT-producing neurons throughout the CNS. 5-HT neurons are born and terminally specified midway through embryogenesis, then enter a protracted period of maturation, where they functionally integrate into CNS circuitry and are then maintained throughout life. The transcriptional regulatory networks controlling progenitor cell generation and terminal specification of 5-HT neurons are relatively well understood, yet the factors controlling 5-HT neuron maturation are only recently coming to light. In this review, we first provide an update on the regulatory network controlling 5-HT neuron development, then delve deeper into the properties and regulatory strategies governing 5-HT neuron maturation. In particular, we discuss the role of the 5-HT neuron terminal selector transcription factor (TF) Pet-1 as a key regulator of 5-HT neuron maturation. Pet-1 was originally shown to positively regulate genes needed for 5-HT synthesis, reuptake and vesicular transport, hence 5-HT neuron-type transmitter identity. It has now been shown to regulate, both positively and negatively, many other categories of genes in 5-HT neurons including ion channels, GPCRs, transporters, neuropeptides, and other transcription factors. Its function as a terminal selector results in the maturation of 5-HT neuron excitability, firing characteristics, and synaptic modulation by several neurotransmitters. Furthermore, there is a temporal requirement for Pet-1 in the control of postmitotic gene expression trajectories, thus indicating a direct role in 5-HT neuron maturation. Proper regulation of the maturation of cellular identity is critical for normal neuronal functioning, and perturbations in the gene regulatory networks controlling these processes may result in long-lasting changes in brain function in adulthood. Further study of 5-HT neuron gene regulatory networks is likely to provide additional insight into how neurons acquire their mature identities and how terminal selector-type TFs function in postmitotic vertebrate neurons. PMID:28769770
Chips of Hope: Neuro-Electronic Hybrids for Brain Repair
NASA Astrophysics Data System (ADS)
Ben-Jacob, Eshel
2010-03-01
The field of Neuro-Electronic Hybrids kicked off 30 years ago when researchers in the US first tweaked the technology of recording and stimulation of networks of live neurons grown in a Petri dish and interfaced with a computer via an array of electrodes. Since then, many researchers have searched for ways to imprint in neural networks new ``memories" without erasing old ones. I will describe our new generation of Neuro-Electronic Hybrids and how we succeeded in turning them into the first learning Neurochips - memory and information processing chips made of live neurons. To imprint multiple memories in our new chip we used chemical stimulation at specific locations that were selected by analyzing the network's activity in real time according to our new information encoding principle. Currently we are developing a new generation of neurochips using special carbon nanotubes (CNTs). These electrodes enable us to engineer the network topology and achieve efficient electrical interfacing with the neurons. This advance promises to pave the way for a new experimental platform for testing new drugs and for developing new methods for neural network repair and regeneration. Looking into the future, this development brings us a step closer to the dream of brain repair by implantable Neuro-Electronic hybrid chips.
SMN is required for sensory-motor circuit function in Drosophila
Imlach, Wendy L.; Beck, Erin S.; Choi, Ben Jiwon; Lotti, Francesco; Pellizzoni, Livio; McCabe, Brian D.
2012-01-01
Summary Spinal muscular atrophy (SMA) is a lethal human disease characterized by motor neuron dysfunction and muscle deterioration due to depletion of the ubiquitous Survival Motor Neuron (SMN) protein. Drosophila SMN mutants have reduced muscle size and defective locomotion, motor rhythm and motor neuron neurotransmission. Unexpectedly, restoration of SMN in either muscles or motor neurons did not alter these phenotypes. Instead, SMN must be expressed in proprioceptive neurons and interneurons in the motor circuit to non-autonomously correct defects in motor neurons and muscles. SMN depletion disrupts the motor system subsequent to circuit development and can be mimicked by the inhibition of motor network function. Furthermore, increasing motor circuit excitability by genetic or pharmacological inhibition of K+ channels can correct SMN-dependent phenotypes. These results establish sensory-motor circuit dysfunction as the origin of motor system deficits in this SMA model and suggest that enhancement of motor neural network activity could ameliorate the disease. PMID:23063130
2018-01-01
It is widely assumed that distributed neuronal networks are fundamental to the functioning of the brain. Consistent spike timing between neurons is thought to be one of the key principles for the formation of these networks. This can involve synchronous spiking or spiking with time delays, forming spike sequences when the order of spiking is consistent. Finding networks defined by their sequence of time-shifted spikes, denoted here as spike timing networks, is a tremendous challenge. As neurons can participate in multiple spike sequences at multiple between-spike time delays, the possible complexity of networks is prohibitively large. We present a novel approach that is capable of (1) extracting spike timing networks regardless of their sequence complexity, and (2) describing their spiking sequences with high temporal precision. We achieve this by decomposing frequency-transformed neuronal spiking into separate networks, characterizing each network's spike sequence by a time delay per neuron, forming a spike sequence timeline. These networks provide a detailed template for an investigation of the experimental relevance of their spike sequences. Using simulated spike timing networks, we show network extraction is robust to spiking noise, spike timing jitter, and partial occurrences of the involved spike sequences. Using rat multineuron recordings, we demonstrate the approach is capable of revealing real spike timing networks with sub-millisecond temporal precision. By uncovering spike timing networks, the prevalence, structure, and function of complex spike sequences can be investigated in greater detail, allowing us to gain a better understanding of their role in neuronal functioning. PMID:29789811
Molecular model of cannabis sensitivity in developing neuronal circuits.
Keimpema, Erik; Mackie, Ken; Harkany, Tibor
2011-09-01
Prenatal cannabis exposure can complicate in utero development of the nervous system. Cannabis impacts the formation and functions of neuronal circuitries by targeting cannabinoid receptors. Endocannabinoid signaling emerges as a signaling cassette that orchestrates neuronal differentiation programs through the precisely timed interaction of endocannabinoid ligands with their cognate cannabinoid receptors. By indiscriminately prolonging the 'switched-on' period of cannabinoid receptors, cannabis can hijack endocannabinoid signals to evoke molecular rearrangements, leading to the erroneous wiring of neuronal networks. Here, we formulate a hierarchical network design necessary and sufficient to describe the molecular underpinnings of cannabis-induced neural growth defects. We integrate signalosome components, deduced from genome- and proteome-wide arrays and candidate analyses, to propose a mechanistic hypothesis of how cannabis-induced ectopic cannabinoid receptor activity overrides physiological neurodevelopmental endocannabinoid signals, affecting the timely formation of synapses. Copyright © 2011 Elsevier Ltd. All rights reserved.
Simplicity and efficiency of integrate-and-fire neuron models.
Plesser, Hans E; Diesmann, Markus
2009-02-01
Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
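The "exact integration" advance the authors invoke rests on the fact that the subthreshold leaky integrate-and-fire dynamics is linear, so the state can be propagated in closed form from one input spike to the next instead of being stepped with a fixed small time step. A minimal sketch with generic parameters (this is not NEST's implementation, just the principle):

```python
import math

def lif_exact(spike_times_ms, weights_mV, tau_m=10.0, v_th=15.0, v_reset=0.0):
    """Propagate subthreshold LIF dynamics exactly between input spikes:
    V decays as V * exp(-dt/tau_m), then jumps by the synaptic weight.
    Returns the neuron's output spike times."""
    v, t_prev, out = 0.0, 0.0, []
    for t, w in sorted(zip(spike_times_ms, weights_mV)):
        v *= math.exp(-(t - t_prev) / tau_m)   # closed-form decay, no Euler steps
        v += w
        if v >= v_th:
            out.append(t)
            v = v_reset
        t_prev = t
    return out

inputs = [(1.0, 6.0), (2.0, 6.0), (3.0, 6.0), (30.0, 6.0), (31.0, 6.0), (32.0, 6.0)]
times, weights = zip(*inputs)
print(lif_exact(times, weights))   # each cluster of inputs sums to threshold -> [3.0, 32.0]
```

No summation over the full input history is needed at any point; the current membrane value and the time of the last event are sufficient, which is exactly the efficiency argument made above.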
Genetic strategies to investigate neuronal circuit properties using stem cell-derived neurons
Garcia, Isabella; Kim, Cynthia; Arenkiel, Benjamin R.
2012-01-01
The mammalian brain is anatomically and functionally complex, and prone to diverse forms of injury and neuropathology. Scientists have long strived to develop cell replacement therapies to repair damaged and diseased nervous tissue. However, this goal has remained unrealized for various reasons, including nascent knowledge of neuronal development, the inability to track and manipulate transplanted cells within complex neuronal networks, and host graft rejection. Recent advances in embryonic stem cell (ESC) and induced pluripotent stem cell (iPSC) technology, alongside novel genetic strategies to mark and manipulate stem cell-derived neurons, now provide unprecedented opportunities to investigate complex neuronal circuits in both healthy and diseased brains. Here, we review current technologies aimed at generating and manipulating neurons derived from ESCs and iPSCs toward investigation and manipulation of complex neuronal circuits, ultimately leading to the design and development of novel cell-based therapeutic approaches. PMID:23264761
Kwiat, Moria; Elnathan, Roey; Pevzner, Alexander; Peretz, Asher; Barak, Boaz; Peretz, Hagit; Ducobni, Tamir; Stein, Daniel; Mittelman, Leonid; Ashery, Uri; Patolsky, Fernando
2012-07-25
The use of artificial, prepatterned neuronal networks in vitro is a promising approach for studying the development and dynamics of small neural systems in order to understand the basic functionality of neurons and, later on, of the brain. The present work presents a high-fidelity and robust procedure for controlling neuronal growth on substrates such as silicon wafers and glass, enabling us to obtain mature and durable neural networks of individual cells at designed geometries. It offers several advantages compared to other related techniques reported in recent years, mainly because of its high yield and reproducibility. The procedure is based on surface chemistry that allows the formation of functional, tailor-made neural architectures with a micrometer-scale, high-resolution partition that can promote or repel cell attachment. The main achievement of this work is the creation of a large-scale neuronal network at low density, down to individual cells, that develops intact, typical neurites and synapses without any supporting glial cells straight from the plating stage and with a relatively long-term survival rate of up to 4 weeks. An important application of this method is its use on 3D nanopillars and 3D nanowire-device arrays, enabling not only the cell bodies but also their neurites to be positioned directly on electrical devices and to grow with registration to the recording elements underneath.
Synchronization and Inter-Layer Interactions of Noise-Driven Neural Networks
Yuniati, Anis; Mai, Te-Lun; Chen, Chi-Ming
2017-01-01
In this study, we used the Hodgkin-Huxley (HH) model of neurons to investigate the phase diagram of a developing single-layer neural network and that of a network consisting of two weakly coupled neural layers. These networks are noise driven and learn through the spike-timing-dependent plasticity (STDP) or the inverse STDP rules. We described how these networks transitioned from a non-synchronous background activity state (BAS) to a synchronous firing state (SFS) by varying the network connectivity and the learning efficacy. In particular, we studied the interaction between a SFS layer and a BAS layer, and investigated how synchronous firing dynamics was induced in the BAS layer. We further investigated the effect of the inter-layer interaction on a BAS to SFS repair mechanism by considering three types of neuron positioning (random, grid, and lognormal distributions) and two types of inter-layer connections (random and preferential connections). Among these scenarios, we concluded that the repair mechanism has the largest effect for a network with the lognormal neuron positioning and the preferential inter-layer connections. PMID:28197088
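The STDP and inverse-STDP rules mentioned above are commonly written as exponentially windowed pair-based updates; a minimal sketch with generic constants (not necessarily those used in the paper), where the inverse rule simply flips the sign of the update:

```python
import math

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0, inverse=False):
    """Pair-based STDP weight change for delta_t = t_post - t_pre.
    Standard rule: pre-before-post potentiates, post-before-pre depresses.
    The 'inverse STDP' rule flips the sign of the update."""
    if delta_t_ms > 0:
        dw = a_plus * math.exp(-delta_t_ms / tau_ms)
    else:
        dw = -a_minus * math.exp(delta_t_ms / tau_ms)
    return -dw if inverse else dw

for dt in (+5.0, -5.0):
    print(f"dt={dt:+.0f} ms  STDP: {stdp_dw(dt):+.4f}  "
          f"inverse: {stdp_dw(dt, inverse=True):+.4f}")
```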
Pattern Learning, Damage and Repair within Biological Neural Networks
NASA Astrophysics Data System (ADS)
Siu, Theodore; Fitzgerald O'Neill, Kate; Shinbrot, Troy
2015-03-01
Traumatic brain injury (TBI) causes damage to neural networks, potentially leading to disability or even death. Nearly one in ten of these patients dies, and most of the remainder suffer from symptoms ranging from headaches and nausea to convulsions and paralysis. In vitro studies to develop treatments for TBI have limited in vivo applicability, and in vitro therapies have even proven to worsen the outcome of TBI patients. We propose that this disconnect between in vitro and in vivo outcomes may be associated with the fact that in vitro tests assess indirect measures of neuronal health, but do not investigate the actual function of neuronal networks. Therefore, in this talk, we examine both in vitro and in silico neuronal networks that actually perform a function: pattern identification. We allow the networks to execute genetic, Hebbian learning, and additionally we examine the effects of damage and subsequent repair within our networks. We show that the length of repaired connections affects the overall pattern learning performance of the network and we propose therapies that may improve function following TBI in clinical settings.
Energy-efficient neural information processing in individual neurons and neuronal networks.
Yu, Lianchun; Yu, Yuguo
2017-11-01
Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. These factors include ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and space requirements. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.
A biologically inspired neural network for dynamic programming.
Francelin Romero, R A; Kacpryzk, J; Gomide, F
2001-12-01
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of neural networks is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of the neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works out, including the shortest path and fuzzy decision making problems.
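For comparison with the network-based approach, the shortest-path problem cited above is the textbook instance of Bellman's Optimality Principle, and a conventional dynamic-programming (Bellman-Ford style) solution takes a few lines; the graph and its weights here are purely illustrative.

```python
# Bellman-Ford style value iteration: cost[v] = min over edges (u -> v) of cost[u] + w.
INF = float("inf")
edges = {                      # illustrative weighted directed graph
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.0},
    "D": {},
}

def shortest_costs(edges, source):
    cost = {v: INF for v in edges}
    cost[source] = 0.0
    for _ in range(len(edges) - 1):          # relax all edges |V|-1 times
        for u, nbrs in edges.items():
            for v, w in nbrs.items():
                if cost[u] + w < cost[v]:
                    cost[v] = cost[u] + w
    return cost

print(shortest_costs(edges, "A"))   # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
```

The neural formulation in the paper encodes the same optimality recursion in synaptic weights so the relaxations can run in parallel, rather than sequentially as in this reference implementation.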
A passport to neurotransmitter identity.
Smidt, Marten P; Burbach, J Peter H
2009-01-01
Comparison of a regulatory network that specifies dopaminergic neurons in Caenorhabditis elegans to the development of vertebrate dopamine systems in the mouse reveals a possible partial conservation of such a network.
Poirazi, Panayiota; Neocleous, Costas; Pattichis, Costantinos S; Schizas, Christos N
2004-05-01
A three-layer neural network (NN) with novel adaptive architecture has been developed. The hidden layer of the network consists of slabs of single neuron models, where neurons within a slab--but not between slabs--have the same type of activation function. The network activation functions in all three layers have adaptable parameters. The network was trained using a biologically inspired, guided-annealing learning rule on a variety of medical data. Good training/testing classification performance was obtained on all data sets tested. The performance achieved was comparable to that of SVM classifiers. It was shown that the adaptive network architecture, inspired from the modular organization often encountered in the mammalian cerebral cortex, can benefit classification performance.
Enhanced storage capacity with errors in scale-free Hopfield neural networks: An analytical study.
Kim, Do-Hyun; Park, Jinha; Kahng, Byungnam
2017-01-01
The Hopfield model is a pioneering neural network model with associative memory retrieval. The analytical solution of the model in the mean-field limit revealed that memories can be retrieved without any error up to a finite storage capacity of O(N), where N is the system size. Beyond the threshold, they are completely lost. Since the introduction of the Hopfield model, the theory of neural networks has been further developed toward realistic neural networks using analog neurons, spiking neurons, etc. Nevertheless, those advances are based on fully connected networks, which are inconsistent with the recent experimental discovery that the number of connections of each neuron seems to be heterogeneous, following a heavy-tailed distribution. Motivated by this observation, we consider the Hopfield model on scale-free networks and obtain a different pattern of associative memory retrieval from that obtained on the fully connected network: the storage capacity becomes tremendously enhanced but with some error in the memory retrieval, which appears as the heterogeneity of the connections is increased. Moreover, the error rates are also obtained on several real neural networks and are indeed similar to those on scale-free model networks.
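A toy numerical version of the setup described above can be put together with Hebbian weights restricted to a sparse, degree-heterogeneous graph. The graph construction below is a crude heavy-tailed stand-in of my own, not the paper's scale-free ensemble, and the sizes are small; it only illustrates how retrieval overlap is measured on a diluted, heterogeneous Hopfield network.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3
patterns = rng.choice([-1, 1], size=(P, N))

# Crude heavy-tailed connectivity: connection probability proportional to the
# product of node "attractiveness" values drawn from a Pareto distribution.
a = rng.pareto(2.0, N) + 1.0
prob = np.minimum(1.0, 0.05 * np.outer(a, a))
adj = (rng.random((N, N)) < prob).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                                  # symmetric, no self-loops

# Hebbian weights restricted to existing connections.
W = adj * (patterns.T @ patterns) / N

def retrieve(cue, steps=5 * N):
    s = cue.copy()
    for i in rng.integers(0, N, size=steps):       # asynchronous updates
        h = W[i] @ s
        if h != 0.0:
            s[i] = 1 if h > 0 else -1
    return s

cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1                                    # corrupt 10% of the pattern
overlap = float(retrieve(cue) @ patterns[0]) / N
print(f"retrieval overlap with stored pattern: {overlap:.2f}")
```

On such a diluted graph the overlap typically recovers to a high but not perfect value, which is the qualitative "enhanced capacity with some error" regime the abstract describes.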
Pesavento, Michael J; Pinto, David J
2012-11-01
Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.
Population coding in sparsely connected networks of noisy neurons.
Tripp, Bryan P; Orchard, Jeff
2012-01-01
This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
Constructing Precisely Computing Networks with Biophysical Spiking Neurons.
Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T
2015-07-15
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation. Copyright © 2015 the authors 0270-6474/15/3510112-23$15.00/0.
Neuronal avalanches and learning
NASA Astrophysics Data System (ADS)
de Arcangelis, Lucilla
2011-05-01
Networks of living neurons represent one of the most fascinating systems of biology. While the physical and chemical mechanisms at the basis of the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely the capacity to adapt and evolve depending on the level of activity. This plastic ability is nowadays believed to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently shown features in common with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by a power law distribution for the size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. Then, we discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.
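The power-law avalanche statistics referred to above are often illustrated with a critical branching process (branching ratio equal to 1), a standard toy model rather than the specific network model discussed in this contribution; the sketch below generates avalanche sizes and reports the fraction of large events.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(branching_ratio=1.0, max_size=10_000):
    """One avalanche of a branching process: each active unit activates a
    Poisson(branching_ratio) number of units in the next generation."""
    size, active = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(branching_ratio * active)
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(10_000)])
for s in (1, 10, 100, 1000):
    print(f"P(size >= {s:>4d}) = {np.mean(sizes >= s):.4f}")
# At criticality the avalanche-size distribution of this branching process has
# a power-law tail (P(size) ~ size^(-3/2)), so these fractions decay roughly
# as s^(-1/2) rather than exponentially.
```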
Neural networks with local receptive fields and superlinear VC dimension.
Schmitt, Michael
2002-04-01
Local receptive field neurons comprise such well-known and widely used unit types as radial basis function (RBF) neurons and neurons with center-surround receptive field. We study the Vapnik-Chervonenkis (VC) dimension of feedforward neural networks with one hidden layer of these units. For several variants of local receptive field neurons, we show that the VC dimension of these networks is superlinear. In particular, we establish the bound Ω(W log k) for any reasonably sized network with W parameters and k hidden nodes. This bound is shown to hold for discrete center-surround receptive field neurons, which are physiologically relevant models of cells in the mammalian visual system, for neurons computing a difference of Gaussians, which are popular in computational vision, and for standard RBF neurons, a major alternative to sigmoidal neurons in artificial neural networks. The result for RBF neural networks is of particular interest since it answers a question that has been open for several years. The results also give rise to lower bounds for networks with fixed input dimension. Regarding constants, all bounds are larger than those known thus far for similar architectures with sigmoidal neurons. The superlinear lower bounds contrast with linear upper bounds for single local receptive field neurons also derived here.
3D quantitative phase imaging of neural networks using WDT
NASA Astrophysics Data System (ADS)
Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel
2015-03-01
White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.
Stimulus-dependent spiking relationships with the EEG
Snyder, Adam C.
2015-01-01
The development and refinement of noninvasive techniques for imaging neural activity is of paramount importance for human neuroscience. Currently, the most accessible and popular technique is electroencephalography (EEG). However, nearly all of what we know about the neural events that underlie EEG signals is based on inference, because of the dearth of studies that have simultaneously paired EEG recordings with direct recordings of single neurons. From the perspective of electrophysiologists there is growing interest in understanding how spiking activity coordinates with large-scale cortical networks. Evidence from recordings at both scales highlights that sensory neurons operate in very distinct states during spontaneous and visually evoked activity, which appear to form extremes in a continuum of coordination in neural networks. We hypothesized that individual neurons have idiosyncratic relationships to large-scale network activity indexed by EEG signals, owing to the neurons' distinct computational roles within the local circuitry. We tested this by recording neuronal populations in visual area V4 of rhesus macaques while we simultaneously recorded EEG. We found substantial heterogeneity in the timing and strength of spike-EEG relationships and that these relationships became more diverse during visual stimulation compared with the spontaneous state. The visual stimulus apparently shifts V4 neurons from a state in which they are relatively uniformly embedded in large-scale network activity to a state in which their distinct roles within the local population are more prominent, suggesting that the specific way in which individual neurons relate to EEG signals may hold clues regarding their computational roles. PMID:26108954
Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind
2016-01-01
Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points. PMID:27212008
Dias, Roberto A; Gonçalves, Bruno P; da Rocha, Joana F; da Cruz E Silva, Odete A B; da Silva, Augusto M F; Vieira, Sandra I
2017-12-01
Neurons are specialized cells of the Central Nervous System whose function is intricately related to the neuritic network they develop to transmit information. Morphological evaluation of this network and other neuronal structures is required to establish relationships between neuronal morphology and function, and may allow monitoring physiological and pathophysiologic alterations. Fluorescence-based microphotographs are the most widely used in cellular bioimaging, but phase contrast (PhC) microphotographs are easier to obtain, more affordable, and do not require invasive, complicated and disruptive techniques. Despite the various freeware tools available for fluorescence-based images analysis, few exist that can tackle the more elusive and harder-to-analyze PhC images. To surpass this, an interactive semi-automated image processing workflow was developed to easily extract relevant information (e.g. total neuritic length, average cell body area) from both PhC and fluorescence neuronal images. This workflow, named 'NeuronRead', was developed in the form of an ImageJ macro. Its robustness and adaptability were tested and validated on rat cortical primary neurons under control and differentiation inhibitory conditions. Validation included a comparison to manual determinations and to a golden standard freeware tool for fluorescence image analysis. NeuronRead was subsequently applied to PhC images of neurons at distinct differentiation days and exposed or not to DAPT, a pharmacological inhibitor of the γ-secretase enzyme, which cleaves the well-known Alzheimer's amyloid precursor protein (APP) and the Notch receptor. Data obtained confirms a neuritogenic regulatory role for γ-secretase products and validates NeuronRead as a time- and cost-effective useful monitoring tool. Copyright © 2017. Published by Elsevier Inc.
Exposure to bisphenol A affects GABAergic neuron differentiation in neurosphere cultures.
Fukushima, Nobuyuki; Nagao, Tetsuji
2018-06-13
Endocrine-disrupting chemicals (EDCs) influence not only endocrine functions but also neuronal development and functions. In-vivo studies have suggested the relationship of EDC-induced neurobehavioral disorders with dysfunctions of neurotransmitter mechanisms including γ-aminobutyric acid (GABA)ergic mechanisms. However, whether EDCs affect GABAergic neuron differentiation remains unclear. In the present study, we show that a representative EDC, bisphenol A (BPA), affects GABAergic neuron differentiation. Cortical neurospheres prepared from embryonic mice were exposed to BPA for 7 days, and then neuronal differentiation was induced. We found that BPA exposure resulted in a decrease in the ratio of GABAergic neurons to total neurons. However, the same exposure stimulated the differentiation of neurons expressing calbindin, a calcium-binding protein observed in a subpopulation of GABAergic neurons. These findings suggested that BPA might influence the formation of an inhibitory neuronal network in developing cerebral cortex involved in the occurrence of neurobehavioral disorders.
Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression
Onesto, Valentina; Cosentino, Carlo; Di Fabrizio, Enzo; Cesarelli, Mario; Amato, Francesco; Gentile, Francesco
2016-01-01
Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many neurons in a grid may modify and perhaps improve the information quality, in contrast to few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chip devices. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant, to reproduce the early stage of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the post-transmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect. PMID:27403421
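The Delaunay construction described in the abstract is easy to reproduce and check. The sketch below (random plating positions and illustrative cell counts, not the paper's parameters) builds the planar, non-intersecting synaptic graph and verifies that the mean number of connections per neuron stays roughly constant as the number of cells grows.

```python
import numpy as np
from scipy.spatial import Delaunay

def mean_degree(n_cells, side_um=1000.0, seed=0):
    """Plate n_cells at random positions in a square and connect them by the
    edges of the Delaunay triangulation (planar, non-intersecting edges)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0.0, side_um, size=(n_cells, 2))
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:            # each triangle contributes 3 edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return 2.0 * len(edges) / n_cells

for n in (100, 400, 1600):
    print(f"{n:5d} neurons -> mean synapses per neuron ≈ {mean_degree(n):.2f}")
```

The mean degree of a planar Delaunay triangulation stays close to six regardless of density (slightly lower because of boundary cells), which is why varying the number of nodes in the model changes cell density without changing the synapses-per-neuron count.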
NASA Astrophysics Data System (ADS)
Tang, Guoning; Xu, Kesheng; Jiang, Luoluo
2011-10-01
The synchronization is investigated in a two-dimensional Hindmarsh-Rose neuronal network by introducing a global coupling scheme with time delay, where the length of time delay is proportional to the spatial distance between neurons. We find that the time delay always disturbs synchronization of the neuronal network. When both the coupling strength and length of time delay per unit distance (i.e., enlargement factor) are large enough, the time delay induces the abnormal membrane potential oscillations in neurons. Specifically, the abnormal membrane potential oscillations for the symmetrically placed neurons form an antiphase, so that the large coupling strength and enlargement factor lead to the desynchronization of the neuronal network. The complete and intermittently complete synchronization of the neuronal network are observed for the right choice of parameters. The physical mechanism underlying these phenomena is analyzed.
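For reference, the single-unit dynamics in such a network is the standard three-variable Hindmarsh-Rose model; a generic delayed-coupling term is written here to reflect the distance-proportional delay described in the abstract (the exact coupling form used by the authors may differ):

\[
\dot{x}_i = y_i + b x_i^{2} - a x_i^{3} - z_i + I
          + g \sum_{j} A_{ij}\left[x_j(t-\tau_{ij}) - x_i(t)\right],
\qquad
\dot{y}_i = c - d x_i^{2} - y_i,
\qquad
\dot{z}_i = r\left[s\,(x_i - x_R) - z_i\right],
\]

with typical parameter values a = 1, b = 3, c = 1, d = 5, s = 4, x_R = -1.6, r of order 10^{-3} to 10^{-2}, and delays τ_ij = ε d_ij proportional to the Euclidean distance d_ij between neurons i and j, where the proportionality constant ε is the "enlargement factor" whose interplay with the coupling strength g is studied above.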
Integrated microfluidic platforms for investigating neuronal networks
NASA Astrophysics Data System (ADS)
Kim, Hyung Joon
This dissertation describes the development and application of integrated microfluidics-based assay platforms to study neuronal activities in the nervous system in vitro. The assay platforms were fabricated using soft lithography and micro/nano fabrication, including microfluidics, surface patterning, and nanomaterial synthesis. The use of integrated microfluidics-based assay platforms allows culturing and manipulating many types of neuronal tissues in a precisely controlled microenvironment. Furthermore, they provide an organized multi-cellular in-vitro model, long-term monitoring with live-cell imaging, and compatibility with molecular biology techniques and electrophysiology experiments. In this dissertation, integrated microfluidics-based assay platforms are developed for the investigation of neuronal activities such as local protein synthesis, impairment of axonal transport by chemical/physical variants, growth cone path finding under chemical/physical cues, and synaptic transmission in neuronal circuits. Chapter 1 describes the motivation, objectives, and scope for developing an in-vitro platform to study various neuronal activities. Chapter 2 introduces a microfluidic culture platform for biochemical assays with large-scale neuronal tissues that are utilized as model systems in neuroscience research. Chapter 3 focuses on the investigation of axonal transport impaired by beta-amyloid and oxidative stress. The platform makes it possible to control neuronal processes and to quantify mitochondrial movement in various regions of axons away from the applied drugs. Chapter 4 demonstrates the development of a microfluidics-based growth cone turning assay to elucidate the mechanisms underlying axon guidance under soluble factors and shear flow. Using this platform, the behaviors of growth cones of mammalian neurons are verified under gradients of inhibitory molecules and under shear flow in a well-controlled manner. In Chapter 5, I combine an in-vitro multicellular model with a microfabricated MEA (multielectrode array) or a nanowire electrode array to study electrophysiology in neuronal networks. Also, "diode-like" microgrooves that control the number of neuronal processes are embedded in this platform. Chapter 6 concludes with possible future directions of this work. Interfacing micro/nanotechnology with primary neuron culture would open many doors in fundamental neuroscience research and also in biomedical innovation.
Transition to subthreshold activity with the use of phase shifting in a model thalamic network
NASA Astrophysics Data System (ADS)
Thomas, Elizabeth; Grisar, Thierry
1997-05-01
Absence epilepsy involves a state of low frequency synchronous oscillations by the involved neuronal networks. These oscillations may be either above or below threshold. In this investigation, we studied methods that could be utilized to transform the threshold activity of neurons in the network to a subthreshold state. A model thalamic network was constructed using the Hodgkin-Huxley framework. Subthreshold activity was achieved by the application of stimuli to the network which caused phase shifts in the oscillatory activity of selected neurons in the network. In some instances the stimulus was a periodic pulse train of low frequency applied to the reticular thalamic neurons of the network, while in others it was a constant hyperpolarizing current applied to the thalamocortical neurons.
Glover, J C
2009-11-10
The first Kavli Prize in Neuroscience recognizes a confluence of career achievements that together provide a fundamental understanding of how brain and spinal cord circuits are assembled during development and function in the adult. The members of the Kavli Neuroscience Prize Committee have decided to reward three scientists (Sten Grillner, Thomas Jessell, and Pasko Rakic) jointly "for discoveries on the developmental and functional logic of neuronal circuits". Pasko Rakic performed groundbreaking studies of the developing cerebral cortex, including the discovery of how radial glia guide the neuronal migration that establishes cortical layers and for the radial unit hypothesis and its implications for cortical connectivity and evolution. Thomas Jessell discovered molecular principles governing the specification and patterning of different neuron types and the development of their synaptic interconnection into sensorimotor circuits. Sten Grillner elucidated principles of network organization in the vertebrate locomotor central pattern generator, along with its command systems and sensory and higher order control. The discoveries of Rakic, Jessell and Grillner provide a framework for how neurons obtain their identities and ultimate locations, establish appropriate connections with each other, and how the resultant neuronal networks operate. Their work has significantly advanced our understanding of brain development and function and created new opportunities for the treatment of neurological disorders. Each has pioneered an important area of neuroscience research and left a legacy of exceptional scientific achievement, insight, communication, mentoring and leadership.
Coarse-Grained Clustering Dynamics of Heterogeneously Coupled Neurons.
Moon, Sung Joon; Cook, Katherine A; Rajendran, Karthikeyan; Kevrekidis, Ioannis G; Cisternas, Jaime; Laing, Carlo R
2015-12-01
The formation of oscillating phase clusters in a network of identical Hodgkin-Huxley neurons is studied, along with their dynamic behavior. The neurons are synaptically coupled in an all-to-all manner, yet the synaptic coupling characteristic time is heterogeneous across the connections. In a network of N neurons where this heterogeneity is characterized by a prescribed random variable, the oscillatory single-cluster state can transition, through [Formula: see text] (possibly perturbed) period-doubling and subsequent bifurcations, to a variety of multiple-cluster states. The clustering dynamic behavior is computationally studied both at the detailed and the coarse-grained levels, and a numerical approach that can enable studying the coarse-grained dynamics in a network of arbitrarily large size is suggested. Among a number of cluster states formed, double clusters, composed of nearly equal sub-network sizes, are seen to be stable; interestingly, the heterogeneity parameter in each of the double-cluster components tends to be consistent with the random variable over the entire network: Given a double-cluster state, permuting the dynamical variables of the neurons can lead to a combinatorially large number of different, yet similar "fine" states that appear practically identical at the coarse-grained level. For weak heterogeneity we find that correlations rapidly develop, within each cluster, between the neuron's "identity" (its own value of the heterogeneity parameter) and its dynamical state. For single- and double-cluster states we demonstrate an effective coarse-graining approach that uses the Polynomial Chaos expansion to succinctly describe the dynamics by these quickly established "identity-state" correlations. This coarse-graining approach is utilized, within the equation-free framework, to perform efficient computations of the neuron ensemble dynamics.
Effect of Transcranial Magnetic Stimulation on Neuronal Networks
NASA Astrophysics Data System (ADS)
Unsal, Ahmet; Hadimani, Ravi; Jiles, David
2013-03-01
The human brain contains around 100 billion nerve cells controlling our day-to-day activities. Consequently, brain disorders often result in impairments such as paralysis, loss of coordination, and seizures. It has been said that 1 in 5 Americans suffer from some diagnosable mental disorder. There is an urgent need to understand these disorders, prevent them and, if possible, develop permanent cures for them. As a result, a significant amount of research activity is being directed towards brain research. Transcranial Magnetic Stimulation (TMS) is a promising tool for diagnosing and treating brain disorders. It is a non-invasive treatment method that produces a current flow in the brain which excites the neurons. Even though TMS has been verified to have advantageous effects on various brain-related disorders, there have not been enough studies on the impact of TMS on cells. In this study, we are investigating the electrophysiological effects of TMS on a one-dimensional neuronal culture grown in a circular pathway. Electrical currents are produced in the neuronal networks depending on the directionality of the applied field. This aids in understanding how neuronal networks react under TMS treatment.
Smith, Imogen; Silveirinha, Vasco; Stein, Jason L; de la Torre-Ubieta, Luis; Farrimond, Jonathan A; Williamson, Elizabeth M; Whalley, Benjamin J
2017-04-01
Differentiated human neural stem cells were cultured in an inert three-dimensional (3D) scaffold and, unlike two-dimensional (2D) but otherwise comparable monolayer cultures, formed spontaneously active, functional neuronal networks that responded reproducibly and predictably to conventional pharmacological treatments to reveal functional, glutamatergic synapses. Immunocytochemical and electron microscopy analysis revealed a neuronal and glial population, where markers of neuronal maturity were observed in the former. Oligonucleotide microarray analysis revealed substantial differences in gene expression conferred by culturing in a 3D vs a 2D environment. Notable and numerous differences were seen in genes coding for neuronal function, the extracellular matrix and cytoskeleton. In addition to producing functional networks, differentiated human neural stem cells grown in inert scaffolds offer several significant advantages over conventional 2D monolayers. These advantages include cost savings and improved physiological relevance, which make them better suited for use in the pharmacological and toxicological assays required for development of stem cell-based treatments and the reduction of animal use in medical research. Copyright © 2015 John Wiley & Sons, Ltd.
Reinforcement Learning of Two-Joint Virtual Arm Reaching in a Computer Model of Sensorimotor Cortex
Neymotin, Samuel A.; Chadderdon, George L.; Kerr, Cliff C.; Francis, Joseph T.; Lytton, William W.
2014-01-01
Neocortical mechanisms of learning sensorimotor control involve a complex series of interactions at multiple levels, from synaptic mechanisms to cellular dynamics to network connectomics. We developed a model of sensory and motor neocortex consisting of 704 spiking model neurons. Sensory and motor populations included excitatory cells and two types of interneurons. Neurons were interconnected with AMPA/NMDA and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a two-joint virtual arm to reach to a fixed target. For each of 125 trained networks, we used 200 training sessions, each involving 15 s reaches to the target from 16 starting positions. Learning altered network dynamics, with enhancements to neuronal synchrony and behaviorally relevant information flow between neurons. After learning, networks demonstrated retention of behaviorally relevant memories by using proprioceptive information to perform reach-to-target from multiple starting positions. Networks dynamically controlled which joint rotations to use to reach a target, depending on current arm position. Learning-dependent network reorganization was evident in both sensory and motor populations: learned synaptic weights showed target-specific patterning optimized for particular reach movements. Our model embodies an integrative hypothesis of sensorimotor cortical learning that could be used to interpret future electrophysiological data recorded in vivo from sensorimotor learning experiments. We used our model to make the following predictions: learning enhances synchrony in neuronal populations and behaviorally relevant information flow across neuronal populations, enhanced sensory processing aids task-relevant motor performance and the relative ease of a particular movement in vivo depends on the amount of sensory information required to complete the movement. PMID:24047323
A Functionally Conserved Gene Regulatory Network Module Governing Olfactory Neuron Diversity.
Li, Qingyun; Barish, Scott; Okuwa, Sumie; Maciejewski, Abigail; Brandt, Alicia T; Reinhold, Dominik; Jones, Corbin D; Volkan, Pelin Cayirlioglu
2016-01-01
Sensory neuron diversity is required for organisms to decipher complex environmental cues. In Drosophila, the olfactory environment is detected by 50 different olfactory receptor neuron (ORN) classes that are clustered in combinations within distinct sensilla subtypes. Each sensilla subtype houses 1-4 stereotypically clustered ORN identities that arise through asymmetric divisions from a single multipotent sensory organ precursor (SOP). How each class of SOPs acquires a unique differentiation potential that accounts for ORN diversity is unknown. Previously, we reported that a critical component of the SOP diversification program, Rotund (Rn), increases ORN diversity by generating novel developmental trajectories from existing precursors within each independent sensilla-type lineage. Here, we show that Rn, along with BarH1/H2 (Bar), Bric-à-brac (Bab), Apterous (Ap) and Dachshund (Dac), constitutes a transcription factor (TF) network that patterns the developing olfactory tissue. This network was previously shown to pattern the segmentation of the leg, which suggests that this network is functionally conserved. In antennal imaginal discs, precursors with diverse ORN differentiation potentials are selected from concentric rings defined by unique combinations of these TFs along the proximodistal axis of the developing antennal disc. The combinatorial code that demarcates each precursor field is set up by cross-regulatory interactions among different factors within the network. Modifications of this network lead to predictable changes in the diversity of sensilla subtypes and ORN pools. In light of our data, we propose a molecular map that defines each unique SOP fate. Our results highlight the importance of the early prepatterning gene regulatory network as a modulator of SOP and terminally differentiated ORN diversity. Finally, our model illustrates how conserved developmental strategies are used to generate neuronal diversity.
Brian: a simulator for spiking neural networks in python.
Goodman, Dan; Brette, Romain
2008-01-01
"Brian" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
Electrophysiologic studies of neuronal activities under ischemic conditions.
Huang, Shun-Ho; Wang, Ping-Hsien; Chen, Jia-Jin Jason
2008-01-01
Substrates with integrated microelectrode arrays (MEAs) provide an alternative electrophysiological method. With MEAs, one can measure impedance and deliver electrical stimulation at multiple sites to determine the electrophysiological condition of cells. The aim of this research was to construct an impedance and action potential measurement system for neurons cultured on MEAs, in order to observe electrophysiological signal transmission in a neuronal network during oxygen-glucose deprivation (OGD). An extracellular stimulator producing biphasic micro-current pulses for neuron stimulation was built in this study. Time-course impedance recordings showed that the OGD condition effectively induced damage in neurons in vitro. It is known that the results of cell stimulation are affected by electrode impedance; in addition, neurons covering an electrode contribute a sealing resistance that can itself be measured. For the extracellular stimulation study, cortical neuronal activity was recorded and a suitable stimulation window was determined. However, the stimulation results were affected by the electrode impedance as well as by the sealing impedance resulting from neurons covering the electrode. Further development of surface modification for cultured neuronal networks should provide a better basis for in vitro impedance and electrophysiological measurements.
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.
Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
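The quantity at the heart of the scheme, the spike-train power spectrum, can be estimated directly from spike times; the sketch below is a plain single-trial periodogram with an illustrative bin width, not the authors' iterative self-consistent method.

```python
# Sketch: periodogram estimate of a spike-train power spectrum from spike times
# (bin width and duration are illustrative; this is not the self-consistent scheme).
import numpy as np

def spike_train_spectrum(spike_times, duration, dt=1e-3):
    bins = np.arange(0.0, duration + dt, dt)
    x = np.histogram(spike_times, bins)[0] / dt      # spike train as a binned rate signal
    x = x - x.mean()                                 # remove the DC peak
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(X) ** 2 * dt / len(x)             # |x~(f)|^2 / T, with x~ ~ dt * FFT
    return freqs, power

# Example: a Poisson spike train gives an approximately flat spectrum at its rate.
rng = np.random.default_rng(0)
times = np.cumsum(rng.exponential(1 / 20.0, size=2000))   # roughly 20 Hz Poisson train
f, S = spike_train_spectrum(times[times < 50.0], duration=50.0)
```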
Multiplex Networks of Cortical and Hippocampal Neurons Revealed at Different Timescales
Timme, Nicholas; Ito, Shinya; Myroshnychenko, Maxym; Yeh, Fang-Chin; Hiolski, Emma; Hottowy, Pawel; Beggs, John M.
2014-01-01
Recent studies have emphasized the importance of multiplex networks – interdependent networks with shared nodes and different types of connections – in systems primarily outside of neuroscience. Though the multiplex properties of networks are frequently not considered, most networks are actually multiplex networks and the multiplex specific features of networks can greatly affect network behavior (e.g. fault tolerance). Thus, the study of networks of neurons could potentially be greatly enhanced using a multiplex perspective. Given the wide range of temporally dependent rhythms and phenomena present in neural systems, we chose to examine multiplex networks of individual neurons with time scale dependent connections. To study these networks, we used transfer entropy – an information theoretic quantity that can be used to measure linear and nonlinear interactions – to systematically measure the connectivity between individual neurons at different time scales in cortical and hippocampal slice cultures. We recorded the spiking activity of almost 12,000 neurons across 60 tissue samples using a 512-electrode array with 60 micrometer inter-electrode spacing and 50 microsecond temporal resolution. To the best of our knowledge, this preparation and recording method represents a superior combination of number of recorded neurons and temporal and spatial recording resolutions to any currently available in vivo system. We found that highly connected neurons (“hubs”) were localized to certain time scales, which, we hypothesize, increases the fault tolerance of the network. Conversely, a large proportion of non-hub neurons were not localized to certain time scales. In addition, we found that long and short time scale connectivity was uncorrelated. Finally, we found that long time scale networks were significantly less modular and more disassortative than short time scale networks in both tissue types. As far as we are aware, this analysis represents the first systematic study of temporally dependent multiplex networks among individual neurons. PMID:25536059
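For readers unfamiliar with the connectivity measure used above, the sketch below gives a plug-in estimator of transfer entropy between two binned, binary spike trains at a chosen delay (one-bin histories, no bias correction); it is only an illustration of the quantity, not the authors' analysis pipeline.

```python
# Sketch: plug-in transfer entropy (bits) from binary spike train x to y at a
# given delay, with one-bin histories; illustrative only (no bias correction).
import numpy as np

def transfer_entropy(x, y, delay=1):
    x, y = np.asarray(x, dtype=int), np.asarray(y, dtype=int)
    t = np.arange(max(1, delay), len(y))
    codes = y[t] * 4 + y[t - 1] * 2 + x[t - delay]              # encode (y_t, y_{t-1}, x_{t-delay})
    P = np.bincount(codes, minlength=8).astype(float).reshape(2, 2, 2) / len(t)
    te = 0.0
    for yt in (0, 1):
        for yp in (0, 1):
            for xp in (0, 1):
                p = P[yt, yp, xp]
                if p > 0:
                    p_full = p / P[:, yp, xp].sum()              # p(y_t | y_{t-1}, x_{t-delay})
                    p_past = P[yt, yp, :].sum() / P[:, yp, :].sum()   # p(y_t | y_{t-1})
                    te += p * np.log2(p_full / p_past)
    return te
```

Evaluating the same pair of spike trains at several delays gives a rough handle on the time-scale-dependent connections discussed above.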
Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.
2012-01-01
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480
Baker, Michael W; Macagno, Eduardo R
2014-04-17
Recent evidence indicates that gap junction (GJ) proteins can play a critical role in controlling neuronal connectivity as well as cell morphology in the developing nervous system. GJ proteins may function analogously to cell adhesion molecules, mediating cellular recognition and selective neurite adhesion. Moreover, during synaptogenesis electrical synapses often herald the later establishment of chemical synapses, and thus may help facilitate activity-dependent sculpting of synaptic terminals. Recent findings suggest that the morphology and connectivity of embryonic leech neurons are fundamentally organized by the type and perhaps location of the GJ proteins they express. For example, ectopic expression in embryonic leech neurons of certain innexins that define small GJ-linked networks of cells leads to the novel coupling of the expressing cell into that network. Moreover, gap junctions appear to mediate interactions among homologous neurons that modulate process outgrowth and stability. We propose that the selective formation of GJs between developing neurons and perhaps glial cells in the CNS helps orchestrate not only cellular synaptic connectivity but also can have a pronounced effect on the arborization and morphology of those cells involved. Copyright © 2014 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
Matsubara, Takashi; Torikai, Hiroyuki
2016-04-01
Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array (FPGA) implementations confirm that the presented network requires lower computational resources.
Simulating synchronization in neuronal networks
NASA Astrophysics Data System (ADS)
Fink, Christian G.
2016-06-01
We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
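A minimal sketch of the connectivity-generation step, using NetworkX, is shown below; the abstract does not name its two structural measures, so the clustering coefficient and characteristic path length (the standard small-world pair) are used here purely as illustrative stand-ins.

```python
# Sketch: generate Watts-Strogatz connectivity and report two standard small-world
# structure measures (the article's exact measures are assumptions here).
import networkx as nx

n, k = 100, 6                     # 100 neurons, each linked to 6 nearest neighbours before rewiring
for p in (0.0, 0.01, 0.1, 1.0):   # rewiring probability: lattice -> small world -> random
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
    print(f"p={p:<4}  clustering={nx.average_clustering(g):.3f}  "
          f"path length={nx.average_shortest_path_length(g):.2f}")
```

Even a handful of rewired long-range edges sharply reduces the path length while the clustering stays high, which is the structural regime the simulations above associate with a dramatic increase in synchronization.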
Bio-inspired spiking neural network for nonlinear systems control.
Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M
2018-08-01
Spiking neural networks (SNNs) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs make use of temporal spike trains to encode inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is unsatisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way in which SNNs encode information facilitates their hardware implementation compared to analog neurons. Biological neural networks often require fewer neurons than other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to control nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNNs exhibits superior performance compared to other approaches based on neural networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.
Impact of Partial Time Delay on Temporal Dynamics of Watts-Strogatz Small-World Neuronal Networks
NASA Astrophysics Data System (ADS)
Yan, Hao; Sun, Xiaojuan
2017-06-01
In this paper, we mainly discuss the effects of partial time delay on the temporal dynamics of Watts-Strogatz (WS) small-world neuronal networks by controlling two parameters. One is the time delay τ and the other is the probability of partial time delay pdelay. Temporal dynamics of WS small-world neuronal networks are discussed with the aid of temporal coherence and mean firing rate. With the obtained simulation results, it is revealed that for small time delay τ, the probability pdelay could weaken temporal coherence and increase the mean firing rate of the neuronal networks, which indicates that it can enhance neuronal firing while degrading firing regularity. For large time delay τ, temporal coherence and mean firing rate do not change greatly with respect to pdelay. Time delay τ always has a great influence on both temporal coherence and mean firing rate, regardless of the value of pdelay. Moreover, analysis of spike trains and histograms of interspike intervals of neurons inside the neuronal networks suggests that the effects of partial time delay on temporal coherence and mean firing rate could result from locking between the period of neuronal firing activity and the value of the time delay τ. In brief, partial time delay can have a great influence on the temporal dynamics of the neuronal networks.
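To make the parameter pdelay concrete, the short sketch below assigns the delay τ to each existing connection with probability pdelay and zero delay otherwise; the connectivity, delay value, and probabilities are placeholders rather than the article's parameters.

```python
# Sketch: "partial time delay" -- each existing connection is delayed by tau with
# probability p_delay, otherwise it acts instantaneously. Values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N, tau, p_delay = 100, 4.0, 0.3                                   # delay in ms
adj = (rng.random((N, N)) < 0.1) & ~np.eye(N, dtype=bool)         # placeholder connectivity
delays = np.where(rng.random((N, N)) < p_delay, tau, 0.0) * adj   # per-connection delays
print("fraction of delayed connections:", (delays > 0).sum() / adj.sum())
```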
Emergent Oscillations in Networks of Stochastic Spiking Neurons
van Drongelen, Wim; Cowan, Jack D.
2011-01-01
Networks of neurons produce diverse patterns of oscillations, arising from the network's global properties, the propensity of individual neurons to oscillate, or a mixture of the two. Here we describe noisy limit cycles and quasi-cycles, two related mechanisms underlying emergent oscillations in neuronal networks whose individual components, stochastic spiking neurons, do not themselves oscillate. Both mechanisms are shown to produce gamma band oscillations at the population level while individual neurons fire at a rate much lower than the population frequency. Spike trains in a network undergoing noisy limit cycles display a preferred period which is not found in the case of quasi-cycles, due to the even faster decay of phase information in quasi-cycles. These oscillations persist in sparsely connected networks, and variation of the network's connectivity results in variation of the oscillation frequency. A network of such neurons behaves as a stochastic perturbation of the deterministic Wilson-Cowan equations, and the network undergoes noisy limit cycles or quasi-cycles depending on whether these have limit cycles or a weakly stable focus. These mechanisms provide a new perspective on the emergence of rhythmic firing in neural networks, showing the coexistence of population-level oscillations with very irregular individual spike trains in a simple and general framework. PMID:21573105
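For reference, the deterministic Wilson-Cowan rate equations that the spiking network is said to perturb stochastically can be integrated in a few lines; the sketch below omits the refractory terms and uses placeholder parameters, so whether it oscillates or settles to a fixed point depends on those choices.

```python
# Sketch: forward-Euler integration of the deterministic Wilson-Cowan equations
# (no refractory terms; coupling strengths and drives are placeholders).
import numpy as np

def S(x):                                            # sigmoidal population response
    return 1.0 / (1.0 + np.exp(-x))

w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0       # placeholder coupling strengths
P_e, P_i, tau = 1.25, 0.0, 10.0                      # external drives, time constant (ms)

E, I, dt, trace = 0.1, 0.1, 0.1, []
for _ in range(20000):                               # 2 s of simulated time
    dE = (-E + S(w_ee * E - w_ei * I + P_e)) / tau
    dI = (-I + S(w_ie * E - w_ii * I + P_i)) / tau
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)
```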
γ-Aminobutyric Acid Type A Receptor Potentiation Inhibits Learning in a Computational Network Model.
Storer, Kingsley P; Reeke, George N
2018-04-17
Propofol produces memory impairment at concentrations well below those abolishing consciousness. Episodic memory, mediated by the hippocampus, is most sensitive. Two potentially overlapping scenarios may explain how γ-aminobutyric acid receptor type A (GABAA) potentiation by propofol disrupts episodic memory: the first is mediated by shifting the balance from excitation to inhibition, while the second involves disruption of rhythmic oscillations. We use a hippocampal network model to explore these scenarios. The basis for these experiments is the proposal that the brain represents memories as groups of anatomically dispersed, strongly connected neurons. A neuronal network with connections modified by synaptic plasticity was exposed to patterned stimuli, after which spiking output demonstrated evidence of stimulus-related neuronal group development analogous to memory formation. The effect of GABAA potentiation on this memory model was studied in 100 unique networks. GABAA potentiation consistent with moderate propofol effects reduced the size of neuronal groups formed in response to a patterned stimulus by around 70%. Concurrently, the accuracy of a Bayesian classifier in identifying learned patterns in the network output was reduced. Greater potentiation led to near total failure of group formation. Theta rhythm variations had no effect on group size or classifier accuracy. Memory formation is widely thought to depend on changes in neuronal connection strengths during learning that enable neuronal groups to respond with greater facility to familiar stimuli. This experiment suggests the ability to form such groups is sensitive to alteration in the balance between excitation and inhibition such as that resulting from administration of a γ-aminobutyric acid-mediated anesthetic agent.
Pancrazio, Joseph J; Gray, Samuel A; Shubin, Yura S; Kulagina, Nadezhda; Cuttino, David S; Shaffer, Kara M; Eisemann, Kevin; Curran, Anthony; Zim, Bret; Gross, Guenter W; O'Shaughnessy, Thomas J
2003-10-01
Cultured neuronal networks, which have the capacity to respond to a wide range of neuroactive compounds, have been suggested to be useful both for screening known analytes and for screening unknown compounds for acute neuropharmacologic effects. Extracellular recording from cultured neuronal networks provides a means for extracting physiologically relevant activity, i.e. action potential firing, in a noninvasive manner conducive to long-term measurements. Previous work from our laboratory described prototype portable systems capable of high signal-to-noise extracellular recordings from cardiac myocytes. The present work describes a portable system tailored to monitoring neuronal extracellular potentials that readily incorporates standardized microelectrode arrays developed by and in use at the University of North Texas. This system utilizes low-noise amplifier and filter boards, a two-stage thermal control system with integrated fluidics, and a graphical user interface for data acquisition and control implemented on a personal computer. Wherever possible, off-the-shelf components have been utilized for system design and fabrication. During use with cultured neuronal networks, the system typically exhibits input-referred noise levels of only 4-6 microV RMS, such that extracellular potentials exceeding 40 microV can be readily resolved. A flow rate of up to 1 ml/min was achieved while the cell recording chamber temperature was maintained within a range of 36-37 degrees C. To demonstrate the capability of this system to resolve small extracellular potentials, pharmacological experiments with cultured neuronal networks have been performed using the ion channel blockers tetrodotoxin and tityustoxin. The implications of the experiments for neurotoxin detection are discussed.
Coates, Kaylynn E; Majot, Adam T; Zhang, Xiaonan; Michael, Cole T; Spitzer, Stacy L; Gaudry, Quentin; Dacks, Andrew M
2017-08-02
Modulatory neurons project widely throughout the brain, dynamically altering network processing based on an animal's physiological state. The connectivity of individual modulatory neurons can be complex, as they often receive input from a variety of sources and are diverse in their physiology, structure, and gene expression profiles. To establish basic principles about the connectivity of individual modulatory neurons, we examined a pair of identified neurons, the "contralaterally projecting, serotonin-immunoreactive deutocerebral neurons" (CSDns), within the olfactory system of Drosophila. Specifically, we determined the neuronal classes providing synaptic input to the CSDns within the antennal lobe (AL), an olfactory network targeted by the CSDns, and the degree to which CSDn active zones are uniformly distributed across the AL. Using anatomical techniques, we found that the CSDns received glomerulus-specific input from olfactory receptor neurons (ORNs) and projection neurons (PNs), and networkwide input from local interneurons (LNs). Furthermore, we quantified the number of CSDn active zones in each glomerulus and found that CSDn output is not uniform, but rather heterogeneous, across glomeruli and stereotyped from animal to animal. Finally, we demonstrate that the CSDns synapse broadly onto LNs and PNs throughout the AL but do not synapse upon ORNs. Our results demonstrate that modulatory neurons do not necessarily provide purely top-down input but rather receive neuron class-specific input from the networks that they target, and that even a two-cell modulatory network has a highly heterogeneous, yet stereotyped, pattern of connectivity. SIGNIFICANCE STATEMENT Modulatory neurons often project broadly throughout the brain to alter processing based on physiological state. However, the connectivity of individual modulatory neurons to their target networks is not well understood, as modulatory neuron populations are heterogeneous in their physiology, morphology, and gene expression. In this study, we use a pair of identified serotonergic neurons within the Drosophila olfactory system as a model to establish a framework for modulatory neuron connectivity. We demonstrate that individual modulatory neurons can integrate neuron class-specific input from their target network, which is often nonreciprocal. Additionally, modulatory neuron output can be stereotyped, yet nonuniform, across network regions. Our results provide new insight into the synaptic relationships that underlie network function of modulatory neurons. Copyright © 2017 the authors.
Neuronal pathway finding: from neurons to initial neural networks.
Roscigno, Cecelia I
2004-10-01
Neuronal pathway finding is crucial for structured cellular organization and development of neural circuits within the nervous system. Neuronal pathway finding within the visual system has been extensively studied and therefore is used as a model to review existing knowledge regarding concepts of this developmental process. General principles of neuron pathway finding throughout the nervous system exist. Comprehension of these concepts guides neuroscience nurses in gaining an understanding of the developmental course of action, the implications of different anomalies, as well as the theoretical basis and nursing implications of some provocative new therapies being proposed to treat neurodegenerative diseases and neurologic injuries. These therapies have limitations in light of current ethical, developmental, and delivery modes and what is known about the development of neuronal pathways.
MicroRNA-181 promotes synaptogenesis and attenuates axonal outgrowth in cortical neurons
Kos, Aron; Olde Loohuis, Nikkie; Meinhardt, Julia; van Bokhoven, Hans; Kaplan, Barry B; Martens, Gerard; Aschrafi, Armaz
2016-01-01
MicroRNAs (miRs) are non-coding gene transcripts abundantly expressed in both the developing and adult mammalian brain. They act as important modulators of complex gene regulatory networks during neuronal development and plasticity. miR-181c is highly abundant in cerebellar cortex and its expression is increased in autism patients as well as in an animal model of autism. To systematically identify putative targets of miR-181c, we repressed this miR in growing cortical neurons and found over 70 differentially expressed target genes using transcriptome profiling. Pathway analysis showed that the miR-181c-modulated genes converge on signaling cascades relevant to neurite and synapse developmental processes. To experimentally examine the significance of these data, we inhibited miR-181c during rat cortical neuronal maturation in vitro; this loss of miR-181c function resulted in enhanced neurite sprouting and reduced synaptogenesis. Collectively, our findings suggest that miR-181c is a modulator of gene networks associated with cortical neuronal maturation. PMID:27017280
An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons
Li, Jing; Katori, Yuichi; Kohno, Takashi
2012-01-01
This paper presents a digital silicon neuronal network which emulates the nervous system of living creatures and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to handle data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases as the similarity between the input and the stored pattern increases. Synchronization of neurons is observed when successful retrieval of a stored pattern occurs. PMID:23269911
Lin, Mingyan; Pedrosa, Erika; Hrabovsky, Anastasia; Chen, Jian; Puliafito, Benjamin R; Gilbert, Stephanie R; Zheng, Deyou; Lachman, Herbert M
2016-11-15
Individuals with 22q11.2 Deletion Syndrome (22q11.2 DS) are a specific high-risk group for developing schizophrenia (SZ), schizoaffective disorder (SAD) and autism spectrum disorders (ASD). Several genes in the deleted region have been implicated in the development of SZ, e.g., PRODH and DGCR8. However, the mechanistic connection between these genes and the neuropsychiatric phenotype remains unclear. To elucidate the molecular consequences of 22q11.2 deletion in early neural development, we carried out RNA-seq analysis to investigate gene expression in early differentiating human neurons derived from induced pluripotent stem cells (iPSCs) of 22q11.2 DS SZ and SAD patients. Eight cases (ten iPSC-neuron samples in total including duplicate clones) and seven controls (nine in total including duplicate clones) were subjected to RNA sequencing. Using a systems-level analysis, differentially expressed genes/gene modules and pathways of interest were identified. Lastly, we related our findings from in vitro neuronal cultures to brain development by mapping differentially expressed genes to BrainSpan transcriptomes. We observed a ~2-fold reduction in expression of almost all genes in the 22q11.2 region in SZ (37 genes reached p-value < 0.05, 36 of which reached a false discovery rate < 0.05). Outside of the deleted region, 745 genes showed significant differences in expression between SZ and control neurons (p < 0.05). Functional enrichment and network analysis of the differentially expressed genes uncovered converging evidence of abnormal expression in key functional pathways, such as apoptosis, cell cycle and survival, and MAPK signaling, in the SZ and SAD samples. By leveraging transcriptome profiles of normal human brain tissues across development into adulthood, we showed that the differentially expressed genes converge on a sub-network mediated by CDC45 and the cell cycle, which would be disrupted by the 22q11.2 deletion during embryonic brain development, and another sub-network modulated by PRODH, which could contribute to disruption of brain function during adolescence. This study provides evidence for the disruption of potential molecular events in SZ patients with the 22q11.2 deletion and relates findings from in vitro neuronal cultures to functional perturbations that can occur during brain development in SZ.
Russ, Jeffrey B; Kaltschmidt, Julia A
2014-10-01
Every behaviour of an organism relies on an intricate and vastly diverse network of neurons whose identity and connectivity must be specified with extreme precision during development. Intrinsically, specification of neuronal identity depends heavily on the expression of powerful transcription factors that direct numerous features of neuronal identity, including especially properties of neuronal connectivity, such as dendritic morphology, axonal targeting or synaptic specificity, ultimately priming the neuron for incorporation into emerging circuitry. As the neuron's early connectivity is established, extrinsic signals from its pre- and postsynaptic partners feedback on the neuron to further refine its unique characteristics. As a result, disruption of one component of the circuitry during development can have vital consequences for the proper identity specification of its synaptic partners. Recent studies have begun to harness the power of various transcription factors that control neuronal cell fate, including those that specify a neuron's subtype-specific identity, seeking insight for future therapeutic strategies that aim to reconstitute damaged circuitry through neuronal reprogramming.
Kendrick, Keith M; Zhan, Yang; Fischer, Hanno; Nicol, Alister U; Zhang, Xuejuan; Feng, Jianfeng
2011-06-09
How oscillatory brain rhythms alone, or in combination, influence cortical information processing to support learning has yet to be fully established. Local field potential and multi-unit neuronal activity recordings were made from 64-electrode arrays in the inferotemporal cortex of conscious sheep during and after visual discrimination learning of face or object pairs. A neural network model has been developed to simulate and aid functional interpretation of learning-evoked changes. Following learning the amplitude of theta (4-8 Hz), but not gamma (30-70 Hz) oscillations was increased, as was the ratio of theta to gamma. Over 75% of electrodes showed significant coupling between theta phase and gamma amplitude (theta-nested gamma). The strength of this coupling was also increased following learning and this was not simply a consequence of increased theta amplitude. Actual discrimination performance was significantly correlated with theta and theta-gamma coupling changes. Neuronal activity was phase-locked with theta but learning had no effect on firing rates or the magnitude or latencies of visual evoked potentials during stimuli. The neural network model developed showed that a combination of fast and slow inhibitory interneurons could generate theta-nested gamma. By increasing N-methyl-D-aspartate receptor sensitivity in the model similar changes were produced as in inferotemporal cortex after learning. The model showed that these changes could potentiate the firing of downstream neurons by a temporal desynchronization of excitatory neuron output without increasing the firing frequencies of the latter. This desynchronization effect was confirmed in IT neuronal activity following learning and its magnitude was correlated with discrimination performance. Face discrimination learning produces significant increases in both theta amplitude and the strength of theta-gamma coupling in the inferotemporal cortex which are correlated with behavioral performance. A network model which can reproduce these changes suggests that a key function of such learning-evoked alterations in theta and theta-nested gamma activity may be increased temporal desynchronization in neuronal firing leading to optimal timing of inputs to downstream neural networks potentiating their responses. In this way learning can produce potentiation in neural networks simply through altering the temporal pattern of their inputs.
Cell diversity and network dynamics in photosensitive human brain organoids
Quadrato, Giorgia; Nguyen, Tuan; Macosko, Evan Z.; Sherwood, John L.; Yang, Sung Min; Berger, Daniel; Maria, Natalie; Scholvin, Jorg; Goldman, Melissa; Kinney, Justin; Boyden, Edward S.; Lichtman, Jeff; Williams, Ziv M.; McCarroll, Steven A.; Arlotta, Paola
2017-01-01
In vitro models of the developing brain such as 3D brain organoids offer an unprecedented opportunity to study aspects of human brain development and disease. However, it remains undefined what cells are generated within organoids and to what extent they recapitulate the regional complexity, cellular diversity, and circuit functionality of the brain. Here, we analyzed gene expression in over 80,000 individual cells isolated from 31 human brain organoids. We find that organoids can generate a broad diversity of cells, which are related to endogenous classes, including cells from the cerebral cortex and the retina. Organoids could be developed over extended periods (over 9 months) enabling unprecedented levels of maturity including the formation of dendritic spines and of spontaneously-active neuronal networks. Finally, neuronal activity within organoids could be controlled using light stimulation of photoreceptor-like cells, which may offer ways to probe the functionality of human neuronal circuits using physiological sensory stimuli. PMID:28445462
Cell diversity and network dynamics in photosensitive human brain organoids.
Quadrato, Giorgia; Nguyen, Tuan; Macosko, Evan Z; Sherwood, John L; Min Yang, Sung; Berger, Daniel R; Maria, Natalie; Scholvin, Jorg; Goldman, Melissa; Kinney, Justin P; Boyden, Edward S; Lichtman, Jeff W; Williams, Ziv M; McCarroll, Steven A; Arlotta, Paola
2017-05-04
In vitro models of the developing brain such as three-dimensional brain organoids offer an unprecedented opportunity to study aspects of human brain development and disease. However, the cells generated within organoids and the extent to which they recapitulate the regional complexity, cellular diversity and circuit functionality of the brain remain undefined. Here we analyse gene expression in over 80,000 individual cells isolated from 31 human brain organoids. We find that organoids can generate a broad diversity of cells, which are related to endogenous classes, including cells from the cerebral cortex and the retina. Organoids could be developed over extended periods (more than 9 months), allowing for the establishment of relatively mature features, including the formation of dendritic spines and spontaneously active neuronal networks. Finally, neuronal activity within organoids could be controlled using light stimulation of photosensitive cells, which may offer a way to probe the functionality of human neuronal circuits using physiological sensory stimuli.
NASA Astrophysics Data System (ADS)
Llinas, Rodolfo R.
1988-12-01
This article reviews the electroresponsive properties of single neurons in the mammalian central nervous system (CNS). In some of these cells the ionic conductances responsible for their excitability also endow them with autorhythmic electrical oscillatory properties. Chemical or electrical synaptic contacts between these neurons often result in network oscillations. In such networks, autorhythmic neurons may act as true oscillators (as pacemakers) or as resonators (responding preferentially to certain firing frequencies). Oscillations and resonance in the CNS are proposed to have diverse functional roles, such as (i) determining global functional states (for example, sleep-wakefulness or attention), (ii) timing in motor coordination, and (iii) specifying connectivity during development. Also, oscillation, especially in the thalamo-cortical circuits, may be related to certain neurological and psychiatric disorders. This review proposes that the autorhythmic electrical properties of central neurons and their connectivity form the basis for an intrinsic functional coordinate system that provides internal context to sensory input.
Foxp2 Regulates Gene Networks Implicated in Neurite Outgrowth in the Developing Brain
Vernes, Sonja C.; Oliver, Peter L.; Spiteri, Elizabeth; Lockstone, Helen E.; Puliyadi, Rathi; Taylor, Jennifer M.; Ho, Joses; Mombereau, Cedric; Brewer, Ariel; Lowy, Ernesto; Nicod, Jérôme; Groszer, Matthias; Baban, Dilair; Sahgal, Natasha; Cazier, Jean-Baptiste; Ragoussis, Jiannis; Davies, Kay E.; Geschwind, Daniel H.; Fisher, Simon E.
2011-01-01
Forkhead-box protein P2 is a transcription factor that has been associated with intriguing aspects of cognitive function in humans, non-human mammals, and song-learning birds. Heterozygous mutations of the human FOXP2 gene cause a monogenic speech and language disorder. Reduced functional dosage of the mouse version (Foxp2) causes deficient cortico-striatal synaptic plasticity and impairs motor-skill learning. Moreover, the songbird orthologue appears critically important for vocal learning. Across diverse vertebrate species, this well-conserved transcription factor is highly expressed in the developing and adult central nervous system. Very little is known about the mechanisms regulated by Foxp2 during brain development. We used an integrated functional genomics strategy to robustly define Foxp2-dependent pathways, both direct and indirect targets, in the embryonic brain. Specifically, we performed genome-wide in vivo ChIP–chip screens for Foxp2-binding and thereby identified a set of 264 high-confidence neural targets under strict, empirically derived significance thresholds. The findings, coupled to expression profiling and in situ hybridization of brain tissue from wild-type and mutant mouse embryos, strongly highlighted gene networks linked to neurite development. We followed up our genomics data with functional experiments, showing that Foxp2 impacts on neurite outgrowth in primary neurons and in neuronal cell models. Our data indicate that Foxp2 modulates neuronal network formation, by directly and indirectly regulating mRNAs involved in the development and plasticity of neuronal connections. PMID:21765815
Paraskevov, A V; Zendrikov, D K
2017-03-23
We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which the synchronous spiking activity starts propagating in the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under the random homogeneous distribution typical of in vitro experiments do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
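The model's key ingredient, a connection probability that decays exponentially with interneuronal distance, is easy to state concretely; the sketch below builds such a random directed connectivity matrix for neurons scattered uniformly on a square, with a decay length and density that are illustrative rather than the paper's values.

```python
# Sketch: directed connectivity whose connection probability decays exponentially
# with interneuronal distance (decay length and neuron count are illustrative).
import numpy as np

rng = np.random.default_rng(1)
N, side, lam = 500, 1.0, 0.1                             # neurons on a side x side square; decay length lam
pos = rng.random((N, 2)) * side
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p_connect = np.exp(-dist / lam)                          # exponential fall-off with distance
np.fill_diagonal(p_connect, 0.0)                         # no self-connections
adj = rng.random((N, N)) < p_connect                     # directed adjacency matrix
print("mean out-degree:", adj.sum(axis=1).mean())
```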
Barton, Alan J; Valdés, Julio J; Orchard, Robert
2009-01-01
Classical neural networks are composed of neurons whose nature is determined by a certain function (the neuron model), usually pre-specified. In this paper, a type of neural network (NN-GP) is presented in which: (i) each neuron may have its own neuron model in the form of a general function, (ii) any layout (i.e., network interconnection) is possible, and (iii) no bias nodes or weights are associated with the connections, neurons or layers. The general functions associated with a neuron are learned by searching a function space. They are not provided a priori, but are rather built as part of an Evolutionary Computation process based on Genetic Programming. The resulting network solutions are evaluated based on a fitness measure, which may, for example, be based on classification or regression errors. Two real-world examples are presented to illustrate the promising behaviour on classification problems via construction of a low-dimensional representation of a high-dimensional parameter space associated with the set of all network solutions.
Respiratory Network Stability and Modulatory Response to Substance P Require Nalcn.
Yeh, Szu-Ying; Huang, Wei-Hsiang; Wang, Wei; Ward, Christopher S; Chao, Eugene S; Wu, Zhenyu; Tang, Bin; Tang, Jianrong; Sun, Jenny J; Esther van der Heijden, Meike; Gray, Paul A; Xue, Mingshan; Ray, Russell S; Ren, Dejian; Zoghbi, Huda Y
2017-04-19
Respiration is a rhythmic activity as well as one that requires responsiveness to internal and external circumstances; both the rhythm and neuromodulatory responses of breathing are controlled by brainstem neurons in the preBötzinger complex (preBötC) and the retrotrapezoid nucleus (RTN), but the specific ion channels essential to these activities remain to be identified. Because deficiency of sodium leak channel, non-selective (Nalcn) causes lethal apnea in humans and mice, we investigated Nalcn function in these neuronal groups. We found that one-third of mice lacking Nalcn in excitatory preBötC neurons died soon after birth; surviving mice developed apneas in adulthood. Interestingly, in both preBötC and RTN neurons, the Nalcn current influences the resting membrane potential, contributes to maintenance of stable network activity, and mediates modulatory responses to the neuropeptide substance P. These findings reveal Nalcn's specific role in both rhythmic stability and responsiveness to neuropeptides within the respiratory network. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro
1995-02-01
We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
Bayati, Mehdi; Valizadeh, Alireza; Abbassian, Abdolhossein; Cheng, Sen
2015-01-01
Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity propagates synchronously through the network. This activity originates from the site of the local excitation and propagates through the network. The synchronous activity propagation persists, even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence. PMID:26089794
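The plasticity rule assumed in studies of this kind is typically the pair-based STDP rule, in which each pre/post spike pair changes the synaptic weight by an amount that decays exponentially with the spike-time difference. The following minimal Python sketch (illustrative parameters, not the authors' implementation) shows the update and how repeated pre-before-post pairings potentiate a synapse while the reverse ordering depresses it.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, A_plus=0.01, A_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                                  # pre before post -> LTP
        w += A_plus * np.exp(-dt / tau_plus)
    else:                                       # post before pre -> LTD
        w -= A_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, w_max))

# Repeated pre->post pairings strengthen the synapse ...
w = 0.5
for _ in range(100):
    w = stdp_update(w, t_pre=0.0, t_post=5.0)
# ... while post->pre pairings weaken it again.
for _ in range(100):
    w = stdp_update(w, t_pre=5.0, t_post=0.0)
print("final weight:", w)
```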
Synaptic Impairment and Robustness of Excitatory Neuronal Networks with Different Topologies
Mirzakhalili, Ehsan; Gourgou, Eleni; Booth, Victoria; Epureanu, Bogdan
2017-01-01
Synaptic deficiencies are a known hallmark of neurodegenerative diseases, but the diagnosis of impaired synapses on the cellular level is not an easy task. Nonetheless, changes in the system-level dynamics of neuronal networks with damaged synapses can be detected using techniques that do not require high spatial resolution. This paper investigates how the structure/topology of neuronal networks influences their dynamics when they suffer from synaptic loss. We study different neuronal network structures/topologies by specifying their degree distributions. The modes of the degree distribution can be used to construct networks that contain rich clubs and also resemble small-world networks. We define two dynamical metrics to compare the activity of networks with different structures: persistent activity (namely, the self-sustained activity of the network upon removal of the initial stimulus) and quality of activity (namely, the percentage of neurons that participate in the persistent activity of the network). Our results show that synaptic loss affects the persistent activity of networks with bimodal degree distributions less than it affects random networks. The robustness of neuronal networks is enhanced when the distance between the modes of the degree distribution increases, suggesting that the rich clubs of networks with distinct modes keep the whole network active. In addition, a tradeoff is observed between the quality of activity and the persistent activity. For a range of distributions, both of these dynamical metrics are considerably high for networks with bimodal degree distributions compared to random networks. We also propose three different scenarios of synaptic impairment, which may correspond to different pathological or biological conditions. Regardless of the network structure/topology, the results demonstrate that synaptic loss has more severe effects on the activity of the network when impairments are correlated with the activity of the neurons. PMID:28659765
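The bimodal-degree networks discussed above can be generated, for example, with a configuration model in which most nodes draw their degree from a low-degree mode and a small "rich club" fraction draws from a high-degree mode. The sketch below uses networkx for this purpose; node counts, mode locations and the hub fraction are illustrative only and are not taken from the paper.

```python
import numpy as np
import networkx as nx

def bimodal_network(n=1000, k_low=8, k_high=40, frac_hub=0.1, seed=0):
    """Build a network whose degree sequence is drawn from two modes:
    a majority of low-degree nodes and a minority of high-degree hubs."""
    rng = np.random.default_rng(seed)
    n_hub = int(frac_hub * n)
    degrees = np.concatenate([
        rng.poisson(k_low,  n - n_hub),   # low-degree mode
        rng.poisson(k_high, n_hub),       # high-degree mode (hubs)
    ])
    if degrees.sum() % 2:                 # configuration model needs an even sum
        degrees[0] += 1
    g = nx.configuration_model(degrees.tolist(), seed=seed)
    g = nx.Graph(g)                       # collapse parallel edges
    g.remove_edges_from(nx.selfloop_edges(g))
    return g

g = bimodal_network()
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```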
Sanchez, Karla R; Mersha, Mahlet D; Dhillon, Harbinder S; Temburni, Murali K
2018-04-26
Bis-phenols, such as bis-phenol A (BPA) and bis-phenol S (BPS), are polymerizing agents widely used in the production of plastics and numerous everyday products. They are classified as endocrine disrupting compounds (EDCs) with estradiol-like properties. Long-term exposure to EDCs, even at low doses, has been linked with various health defects including cancer, behavioral disorders, and infertility, with greater vulnerability during early developmental periods. To study the effects of BPA on the development of neuronal function, we used an in vitro neuronal network derived from the early chick embryonic brain as a model. We found that exposure to BPA affected the development of network activity, specifically spiking activity and synchronization. A change in network activity is the crucial link between the molecular target of a drug or compound and its effect on behavioral outcome. Multi-electrode arrays are increasingly becoming useful tools to study the effects of drugs on network activity in vitro. There are several systems available in the market and, although there are variations in the number of electrodes, the type and quality of the electrode array and the analysis software, the basic underlying principles and the data obtained are the same across the different systems. Although currently limited to analysis of two-dimensional in vitro cultures, these MEA systems are being improved to enable recording of network activity in brain slices and in vivo. Here, we provide a detailed protocol for embryonic exposure and recording neuronal network activity and synchrony, along with representative results.
PhotoMEA: an opto-electronic biosensor for monitoring in vitro neuronal network activity.
Ghezzi, Diego; Pedrocchi, Alessandra; Menegon, Andrea; Mantero, Sara; Valtorta, Flavia; Ferrigno, Giancarlo
2007-02-01
PhotoMEA is a biosensor useful for the analysis of an in vitro neuronal network, fully based on optical methods. Its function is based on the stimulation of neurons with caged glutamate and the recording of neuronal activity by Voltage-Sensitive fluorescent Dyes (VSD). The main advantage is that it will be possible to stimulate even at the sub-single-neuron level and to record the activity of the entire network in the culture at high resolution. A large-scale view of neuronal intercommunications offers a unique opportunity for testing the ability of drugs to affect neuronal properties as well as alterations in the behaviour of the entire network. The concept and a prototype for validation are described here in detail.
Synaptic dynamics regulation in response to high frequency stimulation in neuronal networks
NASA Astrophysics Data System (ADS)
Su, Fei; Wang, Jiang; Li, Huiyan; Wei, Xile; Yu, Haitao; Deng, Bin
2018-02-01
High frequency stimulation (HFS) has a confirmed ability to modulate pathological neural activities; however, its detailed mechanism remains unclear. This study aims to explore the effects of HFS on neuronal network dynamics. First, two-neuron FitzHugh-Nagumo (FHN) networks with static coupling strength and small-world FHN networks with spike-timing-dependent plasticity (STDP) modulated synaptic coupling strength are constructed. Then, the multi-scale method is used to transform the network models into equivalent averaged models, where the HFS intensity is modeled as the ratio between stimulation amplitude and frequency. Results show that in static two-neuron networks there is still synaptic current projected to the postsynaptic neuron even if the presynaptic neuron is blocked by the HFS. In the small-world networks, the effects of the STDP adjusting-rate parameter on the inactivation ratio and synchrony degree increase with increasing HFS intensity. However, only when the HFS intensity becomes very large can the STDP time-window parameter affect the inactivation ratio and synchrony index. Both simulation and numerical analysis demonstrate that the effects of HFS on neuronal network dynamics are realized through the adjustment of synaptic variables and conductance.
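As a rough illustration of the setting described above (not the authors' averaged model), the sketch below simulates a two-neuron FitzHugh-Nagumo motif in which the presynaptic neuron receives a high-frequency sinusoidal input; one common convention expresses the HFS intensity as the ratio of stimulation amplitude to angular frequency, A/(2*pi*f). All parameter values here are illustrative assumptions.

```python
import numpy as np

def fhn_pair_with_hfs(A=2.0, f=20.0, g_syn=0.5, T=50.0, dt=0.001):
    """Two FitzHugh-Nagumo neurons, unidirectionally coupled; neuron 0
    receives high-frequency stimulation A*sin(2*pi*f*t).  The averaging
    approach treats A/(2*pi*f) as the HFS intensity parameter."""
    a, b, eps = 0.7, 0.8, 0.08
    v = np.array([-1.2, -1.2])
    w = np.array([-0.6, -0.6])
    t_grid = np.arange(0.0, T, dt)
    v_trace = np.empty((len(t_grid), 2))
    for i, t in enumerate(t_grid):
        hfs = np.array([A * np.sin(2 * np.pi * f * t), 0.0])
        syn = np.array([0.0, g_syn * max(v[0], 0.0)])   # crude rectifying coupling
        dv = v - v**3 / 3 - w + 0.5 + hfs + syn
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        v_trace[i] = v
    return t_grid, v_trace

t, v = fhn_pair_with_hfs()
print("HFS intensity A/(2*pi*f) =", 2.0 / (2 * np.pi * 20.0))
```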
Dynamics of excitatory and inhibitory networks are differentially altered by selective attention.
Snyder, Adam C; Morais, Michael J; Smith, Matthew A
2016-10-01
Inhibition and excitation form two fundamental modes of neuronal interaction, yet we understand relatively little about their distinct roles in service of perceptual and cognitive processes. We developed a multidimensional waveform analysis to identify fast-spiking (putative inhibitory) and regular-spiking (putative excitatory) neurons in vivo and used this method to analyze how attention affects these two cell classes in visual area V4 of the extrastriate cortex of rhesus macaques. We found that putative inhibitory neurons had both greater increases in firing rate and decreases in correlated variability with attention compared with putative excitatory neurons. Moreover, the time course of attention effects for putative inhibitory neurons more closely tracked the temporal statistics of target probability in our task. Finally, the session-to-session variability in a behavioral measure of attention covaried with the magnitude of this effect. Together, these results suggest that selective targeting of inhibitory neurons and networks is a critical mechanism for attentional modulation. Copyright © 2016 the American Physiological Society.
Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig
2017-05-17
We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution. Copyright © 2017 Elsevier Inc. All rights reserved.
Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches
NASA Astrophysics Data System (ADS)
Michiels van Kessenich, L.; de Arcangelis, L.; Herrmann, H. J.
2016-08-01
Neuronal avalanches measured in vitro and in vivo in different cortical networks consistently exhibit power law behaviour for the size and duration distributions with exponents typical for a mean field self-organized branching process. These exponents are also recovered in neuronal network simulations implementing various neuronal dynamics on different network topologies. They can therefore be considered a very robust feature of spontaneous neuronal activity. Interestingly, this scaling behaviour is also observed on regular lattices in finite dimensions, which raises the question about the origin of the mean field behavior observed experimentally. In this study we provide an answer to this open question by investigating the effect of activity dependent plasticity in combination with the neuronal refractory time in a neuronal network. Results show that the refractory time hinders backward avalanches forcing a directed propagation. Hebbian plastic adaptation plays the role of sculpting these directed avalanche patterns into the topology of the network slowly changing it into a branched structure where loops are marginal.
Dynamic range in small-world networks of Hodgkin-Huxley neurons with chemical synapses
NASA Astrophysics Data System (ADS)
Batista, C. A. S.; Viana, R. L.; Lopes, S. R.; Batista, A. M.
2014-09-01
According to Stevens' law the relationship between stimulus and response is a power-law within an interval called the dynamic range. The dynamic range of sensory organs is found to be larger than that of a single neuron, suggesting that the network structure plays a key role in the behavior of both the scaling exponent and the dynamic range of neuron assemblies. In order to verify computationally the relationships between stimulus and response for spiking neurons, we investigate small-world networks of neurons described by the Hodgkin-Huxley equations connected by chemical synapses. We found that the dynamic range increases with the network size, suggesting that the enhancement of the dynamic range observed in sensory organs, with respect to single neurons, is an emergent property of complex network dynamics.
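The dynamic range referred to here is conventionally defined from the stimulus-response curve as Delta = 10*log10(S_0.9 / S_0.1), where S_0.1 and S_0.9 are the stimulus intensities evoking 10% and 90% of the saturated response. A short sketch of this computation, applied to a synthetic Hill-type response curve standing in for the network's response function (curve parameters are illustrative), follows.

```python
import numpy as np

def dynamic_range(stimulus, response):
    """Dynamic range Delta = 10*log10(S90/S10), with S10 and S90 the
    stimuli evoking 10% and 90% of the full response span."""
    r0, rmax = response.min(), response.max()
    r10 = r0 + 0.10 * (rmax - r0)
    r90 = r0 + 0.90 * (rmax - r0)
    s10 = np.interp(r10, response, stimulus)   # response assumed monotonic
    s90 = np.interp(r90, response, stimulus)
    return 10.0 * np.log10(s90 / s10)

# Synthetic Hill-type response curve standing in for the network's F(S).
S = np.logspace(-4, 1, 200)
F = S**0.5 / (S**0.5 + 0.05)
print("dynamic range (dB):", round(dynamic_range(S, F), 1))
```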
Topographical maps as complex networks
NASA Astrophysics Data System (ADS)
da Fontoura Costa, Luciano; Diambra, Luis
2005-02-01
The neuronal networks in the mammalian cortex are characterized by the coexistence of hierarchy, modularity, short and long range interactions, spatial correlations, and topographical connections. Particularly interesting, the latter type of organization implies special demands on developing systems in order to achieve precise maps preserving spatial adjacencies, even at the expense of isometry. Although the object of intensive biological research, the elucidation of the main anatomic-functional purposes of the ubiquitous topographical connections in the mammalian brain remains an elusive issue. The present work reports on how recent results from complex network formalism can be used to quantify and model the effect of topographical connections between neuronal cells over the connectivity of the network. While the topographical mapping between two cortical modules is achieved by connecting nearest cells from each module, four kinds of network models are adopted for implementing intramodular connections, including random, preferential-attachment, short-range, and long-range networks. It is shown that, though spatially uniform and simple, topographical connections between modules can lead to major changes in the network properties in some specific cases, depending on intramodular connections schemes, fostering more effective intercommunication between the involved neuronal cells and modules. The possible implications of such effects on cortical operation are discussed.
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-01-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
Synaptic Plasticity and Spike Synchronisation in Neuronal Networks
NASA Astrophysics Data System (ADS)
Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.
2017-12-01
Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or due to brain injury. In this review, we show our results on the effects of synaptic plasticity on neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. Excitation of the same order as inhibition reveals an evolved network that presents the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology, where neurons are sparsely connected to other neurons, also a typical topology of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both natures with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.
Using a hybrid neuron in physiologically inspired models of the basal ganglia.
Thibeault, Corey M; Srinivasa, Narayan
2013-01-01
Our current understanding of the basal ganglia (BG) has facilitated the creation of computational models that have contributed novel theories, explored new functional anatomy and demonstrated results complementing physiological experiments. However, the utility of these models extends beyond these applications, particularly in neuromorphic engineering, where the basal ganglia's role in computation is important for applications such as power-efficient autonomous agents and model-based control strategies. The neurons used in existing computational models of the BG, however, are not amenable to many low-power hardware implementations. Motivated by a need for more hardware-accessible networks, we replicate four published models of the BG, spanning single neurons and small networks, replacing the more computationally expensive neuron models with an Izhikevich hybrid neuron. This begins with a network modeling action-selection, where the basal activity levels and the ability to appropriately select the most salient input are reproduced. A Parkinson's disease model is then explored under normal conditions, Parkinsonian conditions and during subthalamic nucleus deep brain stimulation (DBS). The resulting network is capable of replicating the loss of thalamic relay capabilities in the Parkinsonian state and its return under DBS. This is also demonstrated using a network capable of action-selection. Finally, a study of correlation transfer under different patterns of Parkinsonian activity is presented. These networks successfully captured the significant results of the original studies. This not only creates a foundation for neuromorphic hardware implementations but may also support the development of large-scale biophysical models, with the former potentially providing a way of improving the efficacy of DBS and the latter allowing for the efficient simulation of larger, more comprehensive networks.
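The Izhikevich hybrid neuron used as the replacement model is the standard two-variable quadratic integrate-and-fire system with a discontinuous reset. A minimal sketch with regular-spiking parameters follows; it shows only the single-neuron dynamics, not the basal ganglia networks of the paper.

```python
import numpy as np

def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, T=500.0, dt=0.25):
    """Izhikevich hybrid neuron: quadratic membrane dynamics plus a slow
    recovery variable u, with a hybrid reset (v <- c, u <- u + d) at spikes.
    Time is in ms; (a, b, c, d) are regular-spiking parameters."""
    v, u = -65.0, b * -65.0
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike cut-off and reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return np.array(spike_times)

spikes = izhikevich()
print("spike count in 500 ms:", spikes.size, "-> rate ~", spikes.size / 0.5, "Hz")
```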
Homeostatic Scaling of Excitability in Recurrent Neural Networks
Remme, Michiel W. H.; Wadman, Wytse J.
2012-01-01
Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604
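A minimal sketch of the kind of mean-field picture described above, assuming a Wilson-Cowan-style excitatory/inhibitory rate model in which each population slowly scales its gain toward a target rate. The parameter values are illustrative; the point of the sketch is that the outcome depends on the two homeostatic time constants tau_hE and tau_hI, echoing the abstract's conclusion about adaptation time scales.

```python
import numpy as np

def ei_network_with_hse(tau_hE=50.0, tau_hI=500.0, T=2000.0, dt=0.1):
    """Wilson-Cowan-like E/I rates with slow homeostatic gain scaling.
    The gains gE and gI drift so that each population approaches its
    target rate; the ratio tau_hE/tau_hI shapes stability of adaptation."""
    def f(x):                                        # population gain function
        return 1.0 / (1.0 + np.exp(-x))
    rE, rI, gE, gI = 0.1, 0.1, 1.0, 1.0
    target_E, target_I = 0.3, 0.4
    wEE, wEI, wIE, wII, I_ext = 12.0, 10.0, 10.0, 2.0, 1.0
    trace = []
    for _ in range(int(T / dt)):
        rE += dt / 10.0 * (-rE + f(gE * (wEE * rE - wEI * rI + I_ext)))
        rI += dt / 5.0  * (-rI + f(gI * (wIE * rE - wII * rI)))
        gE += dt / tau_hE * (target_E - rE)          # homeostatic scaling (HSE)
        gI += dt / tau_hI * (target_I - rI)
        trace.append((rE, rI))
    return np.array(trace)

tr = ei_network_with_hse()
print("final E/I rates:", tr[-1].round(3))
```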
Jerome, Jason; Heck, Detlef H.
2011-01-01
Optical manipulation of neuronal activity has rapidly developed into the most powerful and widely used approach to study mechanisms related to neuronal connectivity over a range of scales. Since the early use of single site uncaging to map network connectivity, rapid technological development of light modulation techniques has added important new options, such as fast scanning photostimulation, massively parallel control of light stimuli, holographic uncaging, and two-photon stimulation techniques. Exciting new developments in optogenetics complement neurotransmitter uncaging techniques by providing cell-type specificity and in vivo usability, providing optical access to the neural substrates of behavior. Here we review the rapid evolution of methods for the optical manipulation of neuronal activity, emphasizing crucial recent developments. PMID:22275886
Deep learning and shapes similarity for joint segmentation and tracing single neurons in SEM images
NASA Astrophysics Data System (ADS)
Rao, Qiang; Xiao, Chi; Han, Hua; Chen, Xi; Shen, Lijun; Xie, Qiwei
2017-02-01
Extracting the structure of single neurons is critical for understanding how they function within neural circuits. Recent developments in microscopy techniques, and the widely recognized need for openness and standardization, provide a community resource for automated reconstruction of the dendritic and axonal morphology of single neurons. In order to look into the fine structure of neurons, we use Automated Tape-collecting Ultra Microtome Scanning Electron Microscopy (ATUM-SEM) to obtain image sequences of serial sections of animal brain tissue densely packed with neurons. Different from other neuron reconstruction methods, we propose a method that enhances the SEM images by detecting the neuronal membranes with a deep convolutional neural network (DCNN) and segments single neurons by active contours with group shape similarity. We couple segmentation and tracing so that they interact with each other through alternating iterations: tracing aids the selection of candidate region patches for active contour segmentation, while segmentation provides the neuron geometrical features that improve the robustness of tracing. The tracing model relies mainly on the neuron geometrical features and is updated after the neuron is segmented on each subsequent section. Our method enables the reconstruction of neurons of the Drosophila mushroom body, which is cut into serial sections and imaged under SEM. Our method provides an elementary step towards the whole reconstruction of neuronal networks.
Qualitative-Modeling-Based Silicon Neurons and Their Networks
Kohno, Takashi; Sekikawa, Munehisa; Li, Jing; Nanami, Takuya; Aihara, Kazuyuki
2016-01-01
The ionic conductance models of neuronal cells can finely reproduce a wide variety of complex neuronal activities. However, the complexity of these models has prompted the development of qualitative neuron models. They are described by differential equations with a reduced number of variables and their low-dimensional polynomials, which retain the core mathematical structures. Such simple models form the foundation of a bottom-up approach in computational and theoretical neuroscience. We proposed a qualitative-modeling-based approach for designing silicon neuron circuits, in which the mathematical structures in the polynomial-based qualitative models are reproduced by differential equations with silicon-native expressions. This approach can realize low-power-consuming circuits that can be configured to realize various classes of neuronal cells. In this article, our qualitative-modeling-based silicon neuron circuits for analog and digital implementations are quickly reviewed. One of our CMOS analog silicon neuron circuits can realize a variety of neuronal activities with a power consumption less than 72 nW. The square-wave bursting mode of this circuit is explained. Another circuit can realize Class I and II neuronal activities with about 3 nW. Our digital silicon neuron circuit can also realize these classes. An auto-associative memory realized on an all-to-all connected network of these silicon neurons is also reviewed, in which the neuron class plays important roles in its performance. PMID:27378842
Hu, Liang; Wang, Qin; Qin, Zhen; Su, Kaiqi; Huang, Liquan; Hu, Ning; Wang, Ping
2015-04-15
5-hydroxytryptamine (5-HT) is an important neurotransmitter regulating emotions and related behaviors in mammals. Effective and convenient methods are needed to detect and monitor 5-HT in investigations of neuronal networks. In this study, hippocampal neuronal networks (HNNs) endogenously expressing 5-HT receptors were employed as sensing elements to build an in vitro neuronal network-based biosensor. The electrophysiological characteristics were analyzed at both the neuron and network levels. The firing rates and amplitudes were derived from the signal to determine the biosensor response characteristics. The experimental results demonstrate a dose-dependent inhibitory effect of 5-HT on hippocampal neuron activities, indicating the effectiveness of this hybrid biosensor in detecting 5-HT with a response range from 0.01 μmol/L to 10 μmol/L. In addition, the cross-correlation analysis of HNN activities suggests that 5-HT could weaken HNN connectivity reversibly, providing more specificity of this biosensor in detecting 5-HT. Moreover, 5-HT-induced spatiotemporal firing pattern alterations could be monitored at the neuron and network levels simultaneously by this hybrid biosensor in a convenient and direct way. With these merits, this neuronal network-based biosensor promises to be a valuable and practical platform for the study of neurotransmitters in vitro. Copyright © 2014 Elsevier B.V. All rights reserved.
DeepNeuron: an open deep learning toolbox for neuron tracing.
Zhou, Zhi; Kuo, Hsien-Chi; Peng, Hanchuan; Long, Fuhui
2018-06-06
Reconstructing the three-dimensional (3D) morphology of neurons is essential for understanding brain structures and functions. Over the past decades, a number of neuron tracing tools including manual, semiautomatic, and fully automatic approaches have been developed to extract and analyze 3D neuronal structures. Nevertheless, most of them were developed based on coding certain rules to extract and connect structural components of a neuron, showing limited performance on complicated neuron morphology. Recently, deep learning has outperformed many other machine learning methods in a wide range of image analysis and computer vision tasks. Here we developed a new Open Source toolbox, DeepNeuron, which uses deep learning networks to learn features and rules from data and trace neuron morphology in light microscopy images. DeepNeuron provides a family of modules to solve basic yet challenging problems in neuron tracing. These problems include, but are not limited to: (1) detecting neuron signals under different image conditions, (2) connecting neuronal signals into tree(s), (3) pruning and refining tree morphology, (4) quantifying the quality of morphology, and (5) classifying dendrites and axons in real time. We have tested DeepNeuron using light microscopy images including bright-field and confocal images of human and mouse brain, on which DeepNeuron demonstrates robustness and accuracy in neuron tracing.
Emergence and robustness of target waves in a neuronal network
NASA Astrophysics Data System (ADS)
Xu, Ying; Jin, Wuyin; Ma, Jun
2015-08-01
Target waves in excitable media such as neuronal networks can regulate the spatial distribution and orderliness of activity like a continuous pacemaker. Three different schemes are used to develop stable target waves in the network, and the potential mechanism for the emergence of target waves in excitable media is investigated. For example, local pacing driven by external periodic forcing can generate a stable target wave in the excitable medium; furthermore, heterogeneity and local feedback under self-feedback coupling are also effective in generating continuous target waves. To discern the difference between these target waves, a statistical synchronization factor is defined by using mean-field theory, and artificial defects are introduced into the network to block the target waves, so that the robustness of these target waves can be detected. However, target waves developed with the above-mentioned schemes show different robustness to blocking by artificial defects. A regular network of Hindmarsh-Rose neurons is designed in a two-dimensional square array, target waves are induced in three different ways, and then artificial defects, which are associated with anatomical defects, are set in the network to detect the effect of defect blocking on the travelling waves. This confirms that the robustness of target waves to defect blocking depends on the intrinsic properties (the way the target wave is generated) of the target waves.
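The statistical synchronization factor mentioned above is commonly defined from the mean field F(t) = (1/N) Σ_i V_i(t) as R = Var(F) divided by the average variance of the individual membrane potentials, so that R approaches 1 for full synchrony and 0 for large asynchronous networks. The sketch below computes R for synthetic traces; it does not reproduce the Hindmarsh-Rose network itself.

```python
import numpy as np

def synchronization_factor(V):
    """R = Var(mean field) / mean(Var(V_i)); R -> 1 for full synchrony,
    R -> 0 for large asynchronous populations.  V has shape (neurons, time)."""
    F = V.mean(axis=0)                       # mean field F(t)
    return F.var() / V.var(axis=1).mean()

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
common = np.sin(2 * np.pi * t)               # shared oscillation
V_sync = common + 0.1 * rng.standard_normal((50, t.size))
V_async = rng.standard_normal((50, t.size))
print("synchronous:", round(synchronization_factor(V_sync), 2),
      " asynchronous:", round(synchronization_factor(V_async), 2))
```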
Enhanced polychronization in a spiking network with metaplasticity.
Guise, Mira; Knott, Alistair; Benuskova, Lubica
2015-01-01
Computational models of metaplasticity have usually focused on the modeling of single synapses (Shouval et al., 2002). In this paper we study the effect of metaplasticity on network behavior. Our guiding assumption is that the primary purpose of metaplasticity is to regulate synaptic plasticity, by increasing it when input is low and decreasing it when input is high. For our experiments we adopt a model of metaplasticity that demonstrably has this effect for a single synapse; our primary interest is in how metaplasticity thus defined affects network-level phenomena. We focus on a network-level phenomenon called polychronicity, that has a potential role in representation and memory. A network with polychronicity has the ability to produce non-synchronous but precisely timed sequences of neural firing events that can arise from strongly connected groups of neurons called polychronous neural groups (Izhikevich et al., 2004). Polychronous groups (PNGs) develop readily when spiking networks are exposed to repeated spatio-temporal stimuli under the influence of spike-timing-dependent plasticity (STDP), but are sensitive to changes in synaptic weight distribution. We use a technique we have recently developed called Response Fingerprinting to show that PNGs formed in the presence of metaplasticity are significantly larger than those with no metaplasticity. A potential mechanism for this enhancement is proposed that links an inherent property of integrator type neurons called spike latency to an increase in the tolerance of PNG neurons to jitter in their inputs.
Ma, Xiaofeng; Kohashi, Tsunehiko; Carlson, Bruce A
2013-07-01
Many sensory brain regions are characterized by extensive local network interactions. However, we know relatively little about the contribution of this microcircuitry to sensory coding. Detailed analyses of neuronal microcircuitry are usually performed in vitro, whereas sensory processing is typically studied by recording from individual neurons in vivo. The electrosensory pathway of mormyrid fish provides a unique opportunity to link in vitro studies of synaptic physiology with in vivo studies of sensory processing. These fish communicate by actively varying the intervals between pulses of electricity. Within the midbrain posterior exterolateral nucleus (ELp), the temporal filtering of afferent spike trains establishes interval tuning by single neurons. We characterized pairwise neuronal connectivity among ELp neurons with dual whole cell recording in an in vitro whole brain preparation. We found a densely connected network in which single neurons influenced the responses of other neurons throughout the network. Similarly tuned neurons were more likely to share an excitatory synaptic connection than differently tuned neurons, and synaptic connections between similarly tuned neurons were stronger than connections between differently tuned neurons. We propose a general model for excitatory network interactions in which strong excitatory connections both reinforce and adjust tuning and weak excitatory connections make smaller modifications to tuning. The diversity of interval tuning observed among this population of neurons can be explained, in part, by each individual neuron receiving a different complement of local excitatory inputs.
Activity of cardiorespiratory networks revealed by transsynaptic virus expressing GFP.
Irnaten, M; Neff, R A; Wang, J; Loewy, A D; Mettenleiter, T C; Mendelowitz, D
2001-01-01
A fluorescent transneuronal marker capable of labeling individual neurons in a central network while maintaining their normal physiology would permit functional studies of neurons within entire networks responsible for complex behaviors such as cardiorespiratory reflexes. The Bartha strain of pseudorabies virus (PRV), an attenuated swine alpha herpesvirus, can be used as a transsynaptic marker of neural circuits. Bartha PRV invades neuronal networks in the CNS through peripherally projecting axons, replicates in these parent neurons, and then travels transsynaptically to continue labeling the second- and higher-order neurons in a time-dependent manner. A Bartha PRV mutant that expresses green fluorescent protein (GFP) was used to visualize and record from neurons that determine the vagal motor outflow to the heart. Here we show that Bartha PRV-GFP-labeled neurons retain their normal electrophysiological properties and that the labeled baroreflex pathways that control heart rate are unaltered by the virus. This novel transsynaptic virus permits in vitro studies of identified neurons within functionally defined neuronal systems, including networks that mediate cardiovascular and respiratory function and their interactions. We also demonstrate that superior laryngeal motoneurons fire spontaneously and synapse on cardiac vagal neurons in the nucleus ambiguus. This cardiorespiratory pathway provides a neural basis for respiratory sinus arrhythmia.
De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego
2013-01-01
Gene expression profiles can be used to infer previously unknown transcriptional regulatory interaction among thousands of genes, via systems biology 'reverse engineering' approaches. We 'reverse engineered' an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression ('hubs'). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central 'hub' of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neurons marker Vglut2 and a strong down-regulation of the GABAergic neurons marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13, involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells possibly by epigenetic-mediated transcriptional regulation.
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks
Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks. PMID:29551968
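The basic quantity the scheme iterates on is the spike-train power spectrum. As a hedged illustration (not the self-consistent network iteration itself), the sketch below bins a spike train and estimates its spectrum with scipy.signal.welch; for a Poisson train the spectrum is flat away from zero frequency.

```python
import numpy as np
from scipy.signal import welch

def spike_train_spectrum(spike_times, T, dt=1e-3):
    """Bin a spike train at resolution dt and estimate its power spectrum."""
    bins = np.zeros(int(np.ceil(T / dt)))
    idx = (np.asarray(spike_times) / dt).astype(int)
    bins[idx[idx < bins.size]] += 1.0 / dt        # delta-like spike train
    f, S = welch(bins - bins.mean(), fs=1.0 / dt, nperseg=4096)
    return f, S

# Poisson spike train: its power spectrum should be flat (white) at high f.
rng = np.random.default_rng(0)
rate, T = 20.0, 200.0
spikes = np.cumsum(rng.exponential(1.0 / rate, int(2 * rate * T)))
spikes = spikes[spikes < T]
f, S = spike_train_spectrum(spikes, T)
print("rate:", len(spikes) / T, " high-frequency spectrum ~", S[-50:].mean())
```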
Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons
NASA Astrophysics Data System (ADS)
Costa, Ariadne; Brochini, Ludmila; Kinouchi, Osame
2017-08-01
Networks of stochastic spiking neurons are interesting models in the area of Theoretical Neuroscience, presenting both continuous and discontinuous phase transitions. Here we study fully connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary slightly supercritical state (self-organized supercriticality or SOSC) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss the possible applications of the idea of SOSC to biological phenomena like epilepsy and dragon king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches, with frequencies compatible with characteristic brain rhythms.
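A rough sketch in the spirit of this model class: fully connected stochastic neurons that fire with probability proportional to their membrane potential, with gains that drop after each spike and recover slowly, which is the mechanism that drives the network toward a slightly supercritical regime. The parameters and exact update rules below are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def stochastic_net_with_gains(N=1000, W=0.02, T=20000, tau=1000.0, u=0.1, seed=0):
    """Fully connected stochastic spiking neurons with adaptive gains.
    Each step a neuron fires with probability min(1, Gamma_i * V_i); firing
    resets V, drops the gain by a fraction u, and the gain recovers slowly."""
    rng = np.random.default_rng(seed)
    V = rng.random(N)
    Gamma = np.full(N, 1.0)
    activity = np.zeros(T, dtype=int)
    for t in range(T):
        fired = rng.random(N) < np.minimum(1.0, Gamma * V)
        n_f = int(fired.sum())
        activity[t] = n_f
        V = np.where(fired, 0.0, V + W * n_f)      # spikes excite the others
        if n_f == 0:                               # keep activity alive
            V[rng.integers(N)] += 1.0
        Gamma += (1.0 - Gamma) / tau               # slow gain recovery ...
        Gamma[fired] *= (1.0 - u)                  # ... and drop after a spike
    return activity

act = stochastic_net_with_gains()
print("mean fraction of active neurons per step:", act.mean() / 1000)
```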
Raghavan, Mohan; Amrutur, Bharadwaj; Narayanan, Rishikesh; Sikdar, Sujit Kumar
2013-01-01
Synfire waves are propagating spike packets in synfire chains, which are feedforward chains embedded in random networks. Although synfire waves have proved to be effective quantification for network activity with clear relations to network structure, their utilities are largely limited to feedforward networks with low background activity. To overcome these shortcomings, we describe a novel generalisation of synfire waves, and define ‘synconset wave’ as a cascade of first spikes within a synchronisation event. Synconset waves would occur in ‘synconset chains’, which are feedforward chains embedded in possibly heavily recurrent networks with heavy background activity. We probed the utility of synconset waves using simulation of single compartment neuron network models with biophysically realistic conductances, and demonstrated that the spread of synconset waves directly follows from the network connectivity matrix and is modulated by top-down inputs and the resultant oscillations. Such synconset profiles lend intuitive insights into network organisation in terms of connection probabilities between various network regions rather than an adjacency matrix. To test this intuition, we develop a Bayesian likelihood function that quantifies the probability that an observed synfire wave was caused by a given network. Further, we demonstrate its utility in the inverse problem of identifying the network that caused a given synfire wave. This method was effective even in highly subsampled networks where only a small subset of neurons were accessible, thus showing its utility in experimental estimation of connectomes in real neuronal networks. Together, we propose synconset chains/waves as an effective framework for understanding the impact of network structure on function, and as a step towards developing physiology-driven network identification methods. Finally, as synconset chains extend the utilities of synfire chains to arbitrary networks, we suggest utilities of our framework to several aspects of network physiology including cell assemblies, population codes, and oscillatory synchrony. PMID:24116018
Goal-seeking neural net for recall and recognition
NASA Astrophysics Data System (ADS)
Omidvar, Omid M.
1990-07-01
Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. The synaptic reinforcements create a proper condition for adaptation, which results in memorization, formation of perception, and higher order information processing activities. In this research a model of a goal seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses recall is defined as retrieval of stored information where little or no matching is involved. On the other hand recognition is recall with matching; therefore it involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement in which all the signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal seeking nature of the neurons as network components. In the proposed model all the synaptic strengths are reinforced in parallel while the reinforcement among the layers is done in a distributed fashion and pipeline mode from the last layer inward. A model of complex neuron with varying threshold is developed to account for inhibitory and excitatory behavior of real neuron. A goal seeking model of a neural network is presented. This network is utilized to perform recall and recognition tasks. The performance of the model with regard to the assigned tasks is presented.
Hamaguchi, Kosuke; Riehle, Alexa; Brunel, Nicolas
2011-01-01
High firing irregularity is a hallmark of cortical neurons in vivo, and modeling studies suggest a balance of excitation and inhibition is necessary to explain this high irregularity. Such a balance must be generated, at least partly, from local interconnected networks of excitatory and inhibitory neurons, but the details of the local network structure are largely unknown. The dynamics of the neural activity depends on the local network structure; this in turn suggests the possibility of estimating network structure from the dynamics of the firing statistics. Here we report a new method to estimate properties of the local cortical network from the instantaneous firing rate and irregularity (CV(2)) under the assumption that recorded neurons are a part of a randomly connected sparse network. The firing irregularity, measured in monkey motor cortex, exhibits two features; many neurons show relatively stable firing irregularity in time and across different task conditions; the time-averaged CV(2) is widely distributed from quasi-regular to irregular (CV(2) = 0.3-1.0). For each recorded neuron, we estimate the three parameters of a local network [balance of local excitation-inhibition, number of recurrent connections per neuron, and excitatory postsynaptic potential (EPSP) size] that best describe the dynamics of the measured firing rates and irregularities. Our analysis shows that optimal parameter sets form a two-dimensional manifold in the three-dimensional parameter space that is confined for most of the neurons to the inhibition-dominated region. High irregularity neurons tend to be more strongly connected to the local network, either in terms of larger EPSP and inhibitory PSP size or larger number of recurrent connections, compared with the low irregularity neurons, for a given excitatory/inhibitory balance. Incorporating either synaptic short-term depression or conductance-based synapses leads many low CV(2) neurons to move to the excitation-dominated region as well as to an increase of EPSP size.
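The irregularity measure CV2 used here is, for consecutive interspike intervals I_k and I_{k+1}, CV2_k = 2|I_{k+1} - I_k| / (I_{k+1} + I_k), averaged over the spike train; it is close to 1 for Poisson-like firing and close to 0 for regular firing. A short sketch:

```python
import numpy as np

def cv2(spike_times):
    """Local irregularity CV2: 2|I_{k+1}-I_k| / (I_{k+1}+I_k), averaged over
    consecutive interspike intervals.  ~1 for Poisson, ~0 for regular firing."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    if isi.size < 2:
        return np.nan
    return np.mean(2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1]))

rng = np.random.default_rng(2)
poisson_train = np.cumsum(rng.exponential(0.05, 1000))   # irregular firing
regular_train = np.arange(0, 50, 0.05)                    # quasi-regular firing
print("CV2 Poisson :", round(cv2(poisson_train), 2))
print("CV2 regular :", round(cv2(regular_train), 2))
```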
High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels†
Müller, Jan; Ballini, Marco; Livi, Paolo; Chen, Yihui; Radivojevic, Milos; Shadmani, Amir; Viswam, Vijay; Jones, Ian L.; Fiscella, Michele; Diggelmann, Roland; Stettler, Alexander; Frey, Urs; Bakkum, Douglas J.; Hierlemann, Andreas
2017-01-01
Studies on information processing and learning properties of neuronal networks would benefit from simultaneous and parallel access to the activity of a large fraction of all neurons in such networks. Here, we present a CMOS-based device, capable of simultaneously recording the electrical activity of over a thousand cells in in vitro neuronal networks. The device provides sufficiently high spatiotemporal resolution to enable, at the same time, access to neuronal preparations on subcellular, cellular, and network level. The key feature is a rapidly reconfigurable array of 26 400 microelectrodes arranged at low pitch (17.5 μm) within a large overall sensing area (3.85 × 2.10 mm²). An arbitrary subset of the electrodes can be simultaneously connected to 1024 low-noise readout channels as well as 32 stimulation units. Each electrode or electrode subset can be used to electrically stimulate or record the signals of virtually any neuron on the array. We demonstrate the applicability and potential of this device for various different experimental paradigms: large-scale recordings from whole networks of neurons as well as investigations of axonal properties of individual neurons. PMID:25973786
A neural network technique for remeshing of bone microstructure.
Fischer, Anath; Holdstein, Yaron
2012-01-01
Today, there is major interest within the biomedical community in developing accurate noninvasive means for the evaluation of bone microstructure and bone quality. Recent improvements in 3D imaging technology, among them development of micro-CT and micro-MRI scanners, allow in-vivo 3D high-resolution scanning and reconstruction of large specimens or even whole bone models. Thus, the tendency today is to evaluate bone features using 3D assessment techniques rather than traditional 2D methods. For this purpose, high-quality meshing methods are required. However, the 3D meshes produced from current commercial systems usually are of low quality with respect to analysis and rapid prototyping. 3D model reconstruction of bone is difficult due to the complexity of bone microstructure. The small bone features lead to a great deal of neighborhood ambiguity near each vertex. The relatively new neural network method for mesh reconstruction has the potential to create or remesh 3D models accurately and quickly. A neural network (NN), which resembles an artificial intelligence (AI) algorithm, is a set of interconnected neurons, where each neuron is capable of making an autonomous arithmetic calculation. Moreover, each neuron is affected by its surrounding neurons through the structure of the network. This paper proposes an extension of the growing neural gas (GNN) neural network technique for remeshing a triangular manifold mesh that represents bone microstructure. This method has the advantage of reconstructing the surface of a genus-n freeform object without a priori knowledge regarding the original object, its topology, or its shape.
Phase synchronization of bursting neurons in clustered small-world networks
NASA Astrophysics Data System (ADS)
Batista, C. A. S.; Lameu, E. L.; Batista, A. M.; Lopes, S. R.; Pereira, T.; Zamora-López, G.; Kurths, J.; Viana, R. L.
2012-07-01
We investigate the collective dynamics of bursting neurons on clustered networks. The clustered network model is composed of subnetworks, each of them presenting the so-called small-world property. This model can also be regarded as a network of networks. In each subnetwork a neuron is connected to other ones with regular as well as random connections, the latter with a given intracluster probability. Moreover, in a given subnetwork each neuron has an intercluster probability to be connected to the other subnetworks. The local neuron dynamics has two time scales (fast and slow) and is modeled by a two-dimensional map. In such small-world network the neuron parameters are chosen to be slightly different such that, if the coupling strength is large enough, there may be synchronization of the bursting (slow) activity. We give bounds for the critical coupling strength to obtain global burst synchronization in terms of the network structure, that is, the probabilities of intracluster and intercluster connections. We find that, as the heterogeneity in the network is reduced, the network global synchronizability is improved. We show that the transitions to global synchrony may be abrupt or smooth depending on the intercluster probability.
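The clustered ("network of networks") topology described above can be assembled, for instance, from several Watts-Strogatz small-world subnetworks joined by sparse random intercluster edges. The networkx sketch below illustrates the construction; cluster sizes and the intracluster/intercluster probabilities are illustrative, not the values used in the paper.

```python
import itertools
import random
import networkx as nx

def clustered_small_world(n_clusters=4, n=100, k=6, p_intra=0.1,
                          p_inter=0.01, seed=0):
    """Network of small-world subnetworks: Watts-Strogatz clusters joined
    by sparse random intercluster edges."""
    random.seed(seed)
    g = nx.Graph()
    for c in range(n_clusters):
        sub = nx.watts_strogatz_graph(n, k, p_intra, seed=seed + c)
        g.update(nx.relabel_nodes(sub, {i: (c, i) for i in sub.nodes()}))
    for c1, c2 in itertools.combinations(range(n_clusters), 2):
        for i in range(n):
            for j in range(n):
                if random.random() < p_inter:
                    g.add_edge((c1, i), (c2, j))
    return g

g = clustered_small_world()
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```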
NASA Astrophysics Data System (ADS)
Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen
2017-05-01
In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay p_delay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.
The Rich Club of the C. elegans Neuronal Connectome
Vértes, Petra E.; Ahnert, Sebastian E.; Schafer, William R.; Bullmore, Edward T.
2013-01-01
There is increasing interest in topological analysis of brain networks as complex systems, with researchers often using neuroimaging to represent the large-scale organization of nervous systems without precise cellular resolution. Here we used graph theory to investigate the neuronal connectome of the nematode worm Caenorhabditis elegans, which is defined anatomically at a cellular scale as 2287 synaptic connections between 279 neurons. We identified a small number of highly connected neurons as a rich club (N = 11) interconnected with high efficiency and high connection distance. Rich club neurons comprise almost exclusively the interneurons of the locomotor circuits, with known functional importance for coordinated movement. The rich club neurons are connector hubs, with high betweenness centrality, and many intermodular connections to nodes in different modules. On identifying the shortest topological paths (motifs) between pairs of peripheral neurons, the motifs that are found most frequently traverse the rich club. The rich club neurons are born early in development, before visible movement of the animal and before the main phase of developmental elongation of its body. We conclude that the high wiring cost of the globally integrative rich club of neurons in the C. elegans connectome is justified by the adaptive value of coordinated movement of the animal. The economical trade-off between physical cost and behavioral value of rich club organization in a cellular connectome confirms theoretical expectations and recapitulates comparable results from human neuroimaging on much larger scale networks, suggesting that this may be a general and scale-invariant principle of brain network organization. PMID:23575836
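As an aside, the rich-club structure described here can be probed with standard graph-theory tooling. The sketch below (networkx assumed) computes the unnormalized rich-club coefficient on a toy scale-free graph; the published analysis works on the actual 279-neuron connectome and normalizes against degree-preserving random rewirings, which the sketch only notes in a comment.

```python
import networkx as nx

# Toy surrogate network (the paper analyses the 279-neuron C. elegans connectome itself).
G = nx.barabasi_albert_graph(279, 3, seed=42)

# Unnormalized rich-club coefficient phi(k): density of connections among nodes with
# degree > k.  Before claiming a rich club, phi(k) is normally normalized against
# degree-preserving random rewirings, e.g. nx.rich_club_coefficient(G, normalized=True).
rc = nx.rich_club_coefficient(G, normalized=False)

for k in sorted(rc)[-5:]:
    print(f"degree threshold k={k}: phi = {rc[k]:.3f}")
```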
Compact holographic optical neural network system for real-time pattern recognition
NASA Astrophysics Data System (ADS)
Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.
1996-08-01
One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.
Spiking Neural Network Decoder for Brain-Machine Interfaces.
Dethier, Julie; Gilja, Vikash; Nuyujukian, Paul; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena
2011-01-01
We used a spiking neural network (SNN) to decode neural data recorded from a 96-electrode array in premotor/motor cortex while a rhesus monkey performed a point-to-point reaching arm movement task. We mapped a Kalman-filter neural prosthetic decode algorithm developed to predict the arm's velocity onto the SNN using the Neural Engineering Framework and simulated it using Nengo, a freely available software package. A 20,000-neuron network matched the standard decoder's prediction to within 0.03% (normalized by maximum arm velocity). A 1,600-neuron version of this network was within 0.27% and ran in real time on a 3 GHz PC. These results demonstrate that an SNN can implement a statistical signal processing algorithm widely used as the decoder in high-performance neural prostheses (the Kalman filter), and achieve similar results with just a few thousand neurons. Hardware SNN implementations (neuromorphic chips) may offer power savings, essential for realizing fully implantable cortically controlled prostheses.
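For readers unfamiliar with the decoder being mapped onto the SNN, the sketch below implements a generic Kalman-filter velocity decode in plain numpy on synthetic data. The 96-channel count mirrors the electrode array, but the matrices, noise levels, and linear tuning model are illustrative assumptions and do not reproduce the Nengo/Neural Engineering Framework implementation described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, T = 96, 500

# Synthetic 2D arm velocity and linearly tuned channel activity: z_t = H v_t + noise.
v_true = np.cumsum(rng.normal(0, 0.05, (T, 2)), axis=0)
H = rng.normal(0, 1.0, (n_channels, 2))
obs = v_true @ H.T + rng.normal(0, 0.5, (T, n_channels))

# Kalman filter with a random-walk state model: v_t = A v_{t-1} + w,  z_t = H v_t + q.
A, W = np.eye(2), 0.01 * np.eye(2)       # state transition and process noise
Q = 0.25 * np.eye(n_channels)            # observation noise covariance
v_hat, P = np.zeros(2), np.eye(2)
decoded = np.zeros((T, 2))

for t in range(T):
    # Predict.
    v_pred = A @ v_hat
    P_pred = A @ P @ A.T + W
    # Update with this time step's neural observation.
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.inv(S)
    v_hat = v_pred + K @ (obs[t] - H @ v_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    decoded[t] = v_hat

rmse = np.sqrt(np.mean((decoded - v_true) ** 2))
print(f"velocity RMSE on synthetic data: {rmse:.3f}")
```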
Kinjo, Erika Reime; Rodríguez, Pedro Xavier Royero; Dos Santos, Bianca Araújo; Higa, Guilherme Shigueto Vilar; Ferraz, Mariana Sacrini Ayres; Schmeltzer, Christian; Rüdiger, Sten; Kihara, Alexandre Hiroaki
2018-05-01
Epilepsy is a disorder of the brain characterized by the predisposition to generate recurrent unprovoked seizures, a process that involves reshaping of neuronal circuitries driven by intense neuronal activity. In this review, we first detailed the regulation of plasticity-associated genes, such as ARC, GAP-43, PSD-95, synapsin, and synaptophysin. Indeed, reshaping of neuronal connectivity after the primary, acute epileptogenesis event increases the excitability of the temporal lobe. Herein, we also discussed the heterogeneity of neuronal populations regarding the number of synaptic connections, which in the theoretical field is commonly referred to as degree. Employing an integrate-and-fire neuronal model, we determined that in addition to increased synaptic strength, degree correlations might play essential and unsuspected roles in the control of network activity. Indeed, assortativity, which can be described as a condition in which positive degree-degree correlations are observed, increases the excitability of neural networks. In this review, we summarized recent topics in the field, and data were discussed according to newly developed or unusual tools, as provided by mathematical graph analysis and high-order statistics. With this, we were able to present new foundations for the pathological activity observed in temporal lobe epilepsy.
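As a small illustration of the degree-correlation (assortativity) measure discussed in the review, the sketch below (networkx assumed) compares the degree assortativity coefficient of two generic toy graphs; positive values indicate that high-degree nodes preferentially connect to other high-degree nodes. The graphs are not the integrate-and-fire networks analysed by the authors.

```python
import networkx as nx

er = nx.gnp_random_graph(500, 0.02, seed=1)      # Erdos-Renyi: assortativity near zero
ba = nx.barabasi_albert_graph(500, 5, seed=1)    # preferential attachment: disassortative

for name, G in [("Erdos-Renyi", er), ("Barabasi-Albert", ba)]:
    r = nx.degree_assortativity_coefficient(G)
    print(f"{name}: degree assortativity r = {r:+.3f}")
```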
Developmental changes of neuronal networks associated with strategic social decision-making.
Steinmann, Elisabeth; Schmalor, Antonia; Prehn-Kristensen, Alexander; Wolff, Stephan; Galka, Andreas; Möhring, Jan; Gerber, Wolf-Dieter; Petermann, Franz; Stephani, Ulrich; Siniatchkin, Michael
2014-04-01
One of the important prerequisites for successful social interaction is the willingness of each individual to cooperate socially. Using the ultimatum game, several studies have demonstrated that the process of deciding to cooperate or to defect in interaction with a partner is associated with activation of the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex (ACC), anterior insula (AI), and inferior frontal cortex (IFC). This study investigates developmental changes in this neuronal network. Fifteen healthy children (8-12 years), 15 adolescents (13-18 years) and 15 young adults (19-28 years) were investigated using the ultimatum game. Neuronal networks representing decision-making based on strategic thinking were characterized using functional MRI. In all age groups, the process of decision-making in reaction to unfair offers was associated with hemodynamic changes in similar regions. Compared with children, however, healthy adults and adolescents revealed greater activation in the IFC and the fusiform gyrus, as well as the nucleus accumbens. In contrast, healthy children displayed more activation in the AI, the dorsal part of the ACC, and the DLPFC. There were no differences in brain activations between adults and adolescents. The neuronal mechanisms underlying strategic social decision making are already developed by the age of eight. Decision-making based on strategic thinking is associated with age-dependent involvement of different brain regions. Neuronal networks underlying theory of mind and reward anticipation are more activated in adults and adolescents, consistent with the increase in perspective-taking with age. Reflecting the greater emotional reactivity and associated compensatory coping at younger ages, children show higher activation in a neuronal network associated with emotional processing and executive control. Copyright © 2014 Elsevier Ltd. All rights reserved.
Karbasi, Amin; Salavati, Amir Hesam; Vetterli, Martin
2018-04-01
The connectivity of a neuronal network has a major effect on its functionality and role. It is generally believed that the complex network structure of the brain provides a physiological basis for information processing. Therefore, identifying the network's topology has received a lot of attention in neuroscience and has been the center of many research initiatives such as the Human Connectome Project. Nevertheless, direct and invasive approaches that slice and observe the neural tissue have proven to be time consuming, complex and costly. As a result, inverse methods that utilize the firing activity of neurons in order to identify the (functional) connections have gained momentum recently, especially in light of rapid advances in recording technologies; it will soon be possible to simultaneously monitor the activities of tens of thousands of neurons in real time. While there are a number of excellent approaches that aim to identify the functional connections from firing activities, the scalability of the proposed techniques poses a major challenge in applying them to large-scale datasets of recorded firing activities. In exceptional cases where scalability has not been an issue, the theoretical performance guarantees are usually limited to a specific family of neurons or type of firing activity. In this paper, we formulate neural network reconstruction as an instance of a graph learning problem, where we observe the behavior of nodes/neurons (i.e., firing activities) and aim to find the links/connections. We develop a scalable learning mechanism and derive the conditions under which the estimated graph for a network of leaky integrate-and-fire (LIF) neurons matches the true underlying synaptic connections. We then validate the performance of the algorithm using artificially generated data (for benchmarking) and real data recorded from multiple hippocampal areas in rats.
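The inverse problem sketched in this abstract can be illustrated, far more crudely than the scalable algorithm the authors develop, by thresholding lagged correlations of binned firing activity. In the sketch below (numpy assumed), the ground-truth network, the probabilistic firing surrogate used in place of LIF dynamics, and the threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 30, 20000

# Ground-truth sparse directed connectivity; W[i, j] is the weight of i -> j.
W = (rng.random((N, N)) < 0.1) * 0.8
np.fill_diagonal(W, 0.0)

# Crude probabilistic firing surrogate: spiking probability depends on last step's inputs.
spikes = np.zeros((T, N))
spikes[0] = rng.random(N) < 0.1
for t in range(1, T):
    drive = -2.0 + spikes[t - 1] @ W              # baseline plus synaptic drive
    spikes[t] = rng.random(N) < 1.0 / (1.0 + np.exp(-drive))

# Infer directed links from lag-1 cross-correlations, then threshold.
x = spikes - spikes.mean(axis=0)
lagged = x[:-1].T @ x[1:] / (T - 1)               # lagged[i, j]: i at t predicting j at t+1
score = lagged / (spikes.std(axis=0)[:, None] * spikes.std(axis=0)[None, :])
estimate = score > 0.05
np.fill_diagonal(estimate, False)

accuracy = (estimate == (W > 0)).mean()
print(f"fraction of correctly classified potential connections: {accuracy:.2f}")
```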
The Electrophysiological MEMS Device with Micro Channel Array for Cellular Network Analysis
NASA Astrophysics Data System (ADS)
Tonomura, Wataru; Kurashima, Toshiaki; Takayama, Yuzo; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Konishi, Satoshi
This paper describes a new type of MCA (Micro Channel Array) for simultaneous multipoint measurement of cellular networks. The presented MCA, which employs the measurement principles of the patch-clamp technique, is designed for advanced neural network analysis of the kind previously studied by the co-authors using a 64-channel MEA (Micro Electrode Array) system. First, suction and clamping of cells through the channels of the developed MCA are expected to improve electrophysiological signal detection. Electrophysiological sensing electrodes, integrated around the individual channels of the MCA using MEMS (Micro Electro Mechanical System) technologies, are electrically isolated to allow simultaneous multipoint measurement. In this study, we tested the developed MCA using acute (non-cultured) rat cerebral cortical slices and hippocampal neurons. We could measure the spontaneous action potentials of the slice simultaneously at multiple points and culture the neurons on the developed MCA. Herein, we describe the experimental results together with the design and fabrication of the electrophysiological MEMS device with MCA for cellular network analysis.
Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks
NASA Astrophysics Data System (ADS)
Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy
2014-06-01
Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.
Improved Autoassociative Neural Networks
NASA Technical Reports Server (NTRS)
Hand, Charles
2003-01-01
Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
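A conventional autoassociative network of the kind described above can be sketched in a few lines of Python (numpy assumed). The sketch shows only the baseline Hopfield-style behaviour, storing binary patterns in the weights and iterating a state vector toward a stored pattern; the bit-weight, CPU-free "nexus" variant proposed in the report is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64

# Store a few random bipolar (+1/-1) patterns with a Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(3, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of pattern 0 and update the whole row of neurons
# synchronously at each discrete time step, as in the description above.
state = patterns[0].copy()
flipped = rng.choice(n, size=10, replace=False)
state[flipped] *= -1

for step in range(10):
    state = np.where(W @ state >= 0, 1, -1)

overlap = np.mean(state == patterns[0])
print(f"fraction of bits matching the stored pattern after recall: {overlap:.2f}")
```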
Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks
Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.
2017-01-01
Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370
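A minimal sketch of the idea of coupling spiking to a regenerating local energy pool is given below (numpy assumed). The membrane, energy, and cost parameters are illustrative, and this is not the authors' SNEP model or GUI: a leaky integrate-and-fire neuron may only spike if its pool holds enough energy, each spike depletes the pool, and the pool recovers at a fixed rate.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1e-3, 20.0
steps = int(T / dt)

tau_m, v_thresh, v_reset = 0.02, 1.0, 0.0    # membrane time constant, threshold, reset
e_max, e_cost, e_rate = 1.0, 0.15, 0.3       # pool capacity, cost per spike, recovery per second

v, energy = 0.0, e_max
spike_times = []

for i in range(steps):
    drive = 2.0 + 0.5 * rng.normal()                 # noisy suprathreshold input
    v += dt / tau_m * (-v + drive)
    energy = min(e_max, energy + e_rate * dt)        # the pool regenerates over time
    if v >= v_thresh and energy >= e_cost:           # spiking requires available energy
        spike_times.append(i * dt)
        v = v_reset
        energy -= e_cost
    elif v >= v_thresh:                              # threshold reached but pool depleted:
        v = v_thresh                                 # clamp; no spike is emitted

print(f"mean firing rate with the energy constraint: {len(spike_times) / T:.1f} Hz")
```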
A Physiological Neural Controller of a Muscle Fiber Oculomotor Plant in Horizontal Monkey Saccades
Enderle, John D.
2014-01-01
A neural network model of biophysical neurons in the midbrain is presented to drive a muscle fiber oculomotor plant during horizontal monkey saccades. Neural circuitry, including omnipause neuron, premotor excitatory and inhibitory burst neurons, long lead burst neuron, tonic neuron, interneuron, abducens nucleus, and oculomotor nucleus, is developed to examine saccade dynamics. The time-optimal control strategy by realization of agonist and antagonist controller models is investigated. In consequence, each agonist muscle fiber is stimulated by an agonist neuron, while an antagonist muscle fiber is unstimulated by a pause and step from the antagonist neuron. It is concluded that the neural network is constrained by a minimum duration of the agonist pulse and that the most dominant factor in determining the saccade magnitude is the number of active neurons for the small saccades. For the large saccades, however, the duration of agonist burst firing significantly affects the control of saccades. The proposed saccadic circuitry establishes a complete model of saccade generation since it not only includes the neural circuits at both the premotor and motor stages of the saccade generator, but also uses a time-optimal controller to yield the desired saccade magnitude. PMID:24944832
Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred
2016-01-01
Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.
Cultured neuronal networks as environmental biosensors.
O'Shaughnessy, Thomas J; Gray, Samuel A; Pancrazio, Joseph J
2004-01-01
Contamination of water by toxins, either intentionally or unintentionally, is a growing concern for both military and civilian agencies and thus there is a need for systems capable of monitoring a wide range of natural and industrial toxicants. The EILATox-Oregon Workshop held in September 2002 provided an opportunity to test the capabilities of a prototype neuronal network-based biosensor with unknown contaminants in water samples. The biosensor is a portable device capable of recording the action potential activity from a network of mammalian neurons grown on glass microelectrode arrays. Changes in the action potential firing rate across the network are monitored to determine exposure to toxicants. A series of three neuronal networks derived from mice was used to test seven unknown samples. Two of these unknowns later were revealed to be blanks, to which the neuronal networks did not respond. Of the five remaining unknowns, a significant change in network activity was detected for four of the compounds at concentrations below a lethal level for humans: mercuric chloride, sodium arsenite, phosdrin and chlordimeform. These compounds (two heavy metals, an organophosphate and an insecticide) demonstrate the breadth of detection possible with neuronal networks. The results generated at the workshop show the promise of the neuronal network biosensor as an environmental detector but there is still considerable effort needed to produce a device suitable for routine environmental threat monitoring.
Developmental implications of children's brain networks and learning.
Chan, John S Y; Wang, Yifeng; Yan, Jin H; Chen, Huafu
2016-10-01
The human brain works as a synergistic system in which information is exchanged between functional neuronal networks. Rudimentary networks are observed in the brain during infancy. In recent years, the question of how functional networks develop and mature in children has been a hotly discussed topic. In this review, we examined the developmental characteristics of functional networks and the impacts of skill training on children's brains. We first focused on the general rules of brain network development and on the typical and atypical development of children's brain networks. After that, we highlighted the essentials of neural plasticity and the effects of learning on brain network development. We also discussed two important theoretical and practical concerns in brain network training. Finally, we concluded by presenting the significance of network training in typically and atypically developed brains.
Neurons from the adult human dentate nucleus: neural networks in the neuron classification.
Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T
2015-04-07
The aim was the topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% of central and 14.1% of border neurons are classified correctly, with a total of 32.8% of neurons misclassified. The most important result is the 62.2% of misclassified neurons in the border neuron group, which exceeds the number of correctly classified neurons (37.8%) in that group and shows a clear failure of the network to classify these neurons correctly based on the computational parameters used in our study. On the virtual sample, 97.3% of border neurons are misclassified, far more than the 2.7% classified correctly in that group, again confirming the failure of the network to classify them correctly. Statistical analysis shows no statistically significant difference between central and border neurons for any measured parameter (p>0.05). In total, 96.74% of neurons are morphologically classified correctly by neural networks, and each belongs to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05). Human dentate nucleus neurons can thus be classified into four types according to their quantitative histomorphological properties. These types comprise two sets, neurons with small and with large perikarya, each with subtypes differing in dendrite length, i.e. neurons with short vs. long dendrites. Besides confirming the classification into small and large neurons already reported in the literature, we found two new subtypes, i.e. neurons with small soma and long dendrites and neurons with large soma and short dendrites. These neurons are most probably distributed evenly throughout the dentate nucleus, as no significant difference in their topological distribution is observed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Connectomic constraints on computation in feedforward networks of spiking neurons.
Ramaswamy, Venkatakrishnan; Banerjee, Arunava
2014-10-01
Several efforts are currently underway to decipher the connectome or parts thereof in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is out of the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints, which arise by virtue of the connectome, connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model which satisfies a small number of properties, we ask if just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network, before we can discern such results for other classes of networks.
NASA Astrophysics Data System (ADS)
Rojas, Camilo; Tedesco, Mariateresa; Massobrio, Paolo; Marino, Attilio; Ciofani, Gianni; Martinoia, Sergio; Raiteri, Roberto
2018-06-01
Objective. We aim to develop a novel non-invasive or minimally invasive method for neural stimulation to be applied in the study and treatment of brain (dys)functions and neurological disorders. Approach. We investigate the electrophysiological response of in vitro neuronal networks when subjected to low-intensity pulsed acoustic stimulation, mediated by piezoelectric nanoparticles adsorbed on the neuronal membrane. Main results. We show that the presence of piezoelectric barium titanate nanoparticles induces, in a reproducible way, an increase in network activity when excited by stationary ultrasound waves in the MHz regime. Such a response can be fully recovered when switching the ultrasound pulse off, depending on the generated pressure field amplitude, whilst it is insensitive to the duration of the ultrasound pulse in the range 0.5 s–1.5 s. We demonstrate that the presence of piezoelectric nanoparticles is necessary, and when applying the same acoustic stimulation to neuronal cultures without nanoparticles or with non-piezoelectric nanoparticles with the same size distribution, no network response is observed. Significance. We believe that our results open up an extremely interesting approach when coupled with suitable functionalization strategies of the nanoparticles in order to address specific neurons and/or brain areas and applied in vivo, thus enabling remote, non-invasive, and highly selective modulation of the activity of neuronal subpopulations of the central nervous system of mammalians.
Revealing degree distribution of bursting neuron networks.
Shen, Yu; Hou, Zhonghuai; Xin, Houwen
2010-03-01
We present a method to infer the degree distribution of a bursting neuron network from its dynamics. Burst synchronization (BS) of coupled Morris-Lecar neurons has been studied under the weak coupling condition. In the BS state, all the neurons start and end bursting almost simultaneously, while the spikes inside the burst are incoherent among the neurons. Interestingly, we find that the spike amplitude of a given neuron shows an excellent linear relationship with its degree, which makes it possible to estimate the degree distribution of the network by simple statistics of the spike amplitudes. We demonstrate the validity of this scheme on scale-free as well as small-world networks. The underlying mechanism of such a method is also briefly discussed.
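The estimation step described here is simple enough to sketch directly (numpy assumed). Since reproducing the coupled Morris-Lecar simulations is beyond a short example, the spike amplitudes and the linear amplitude-degree relation below are synthetic stand-ins: the line is fitted on a few reference neurons and then inverted to turn measured amplitudes into degree estimates, which are histogrammed.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic ground truth: a heavy-tailed degree sequence and a noisy linear
# amplitude-degree relation (slope and intercept treated as unknown a priori).
true_deg = rng.zipf(2.5, size=500).clip(max=60)
amplitude = 1.8 - 0.01 * true_deg + 0.005 * rng.normal(size=true_deg.size)

# Calibrate the line on a small subset of neurons with known degree
# (e.g. from a partially reconstructed subnetwork), then invert it for all neurons.
calib = rng.choice(true_deg.size, 30, replace=False)
slope, intercept = np.polyfit(true_deg[calib], amplitude[calib], 1)
deg_est = np.round((amplitude - intercept) / slope).astype(int)

bins = np.arange(1, 62)
hist_true, _ = np.histogram(true_deg, bins=bins)
hist_est, _ = np.histogram(deg_est, bins=bins)
print(f"correlation of estimated and true degree histograms: "
      f"{np.corrcoef(hist_true, hist_est)[0, 1]:.3f}")
```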
Sternfeld, Matthew J; Hinckley, Christopher A; Moore, Niall J; Pankratz, Matthew T; Hilde, Kathryn L; Driscoll, Shawn P; Hayashi, Marito; Amin, Neal D; Bonanomi, Dario; Gifford, Wesley D; Sharma, Kamal; Goulding, Martyn; Pfaff, Samuel L
2017-01-01
Flexible neural networks, such as the interconnected spinal neurons that control distinct motor actions, can switch their activity to produce different behaviors. Both excitatory (E) and inhibitory (I) spinal neurons are necessary for motor behavior, but the influence of recruiting different ratios of E-to-I cells remains unclear. We constructed synthetic microphysical neural networks, called circuitoids, using precise combinations of spinal neuron subtypes derived from mouse stem cells. Circuitoids of purified excitatory interneurons were sufficient to generate oscillatory bursts with properties similar to in vivo central pattern generators. Inhibitory V1 neurons provided dual layers of regulation within excitatory rhythmogenic networks - they increased the rhythmic burst frequency of excitatory V3 neurons, and segmented excitatory motor neuron activity into sub-networks. Accordingly, the speed and pattern of spinal circuits that underlie complex motor behaviors may be regulated by quantitatively gating the intra-network cellular activity ratio of E-to-I neurons. DOI: http://dx.doi.org/10.7554/eLife.21540.001 PMID:28195039
Sadeh, Sadra; Rotter, Stefan
2015-01-01
The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not depend on the state of network dynamics, and hold equally well for mean-driven and fluctuation-driven regimes of activity. PMID:25569445
Shafer, Orie T; Kim, Dong Jo; Dunbar-Yaffe, Richard; Nikolaev, Viacheslav O; Lohse, Martin J; Taghert, Paul H
2008-04-24
The neuropeptide PDF is released by sixteen clock neurons in Drosophila and helps maintain circadian activity rhythms by coordinating a network of approximately 150 neuronal clocks. Whether PDF acts directly on elements of this neural network remains unknown. We address this question by adapting Epac1-camps, a genetically encoded cAMP FRET sensor, for use in the living brain. We find that a subset of the PDF-expressing neurons respond to PDF with long-lasting cAMP increases and confirm that such responses require the PDF receptor. In contrast, an unrelated Drosophila neuropeptide, DH31, stimulates large cAMP increases in all PDF-expressing clock neurons. Thus, the network of approximately 150 clock neurons displays widespread, though not uniform, PDF receptivity. This work introduces a sensitive means of measuring cAMP changes in a living brain with subcellular resolution. Specifically, it experimentally confirms the longstanding hypothesis that PDF is a direct modulator of most neurons in the Drosophila clock network.
Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.
2012-01-01
Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
Dunmyre, Justin R
2011-06-01
The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10⁻³ seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10⁻⁵ seconds or 10⁻⁹ seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
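A minimal sketch of the recalibrated second-order scheme the abstract describes, for a single leaky integrate-and-fire neuron with constant drive (parameter values are illustrative): take an RK2 step, and if the voltage crosses threshold, locate the spike time by linear interpolation and restart the post-spike integration from the reset value at that interpolated time.

```python
tau_m, v_thresh, v_reset = 0.02, 1.0, 0.0     # membrane time constant, threshold, reset
I0, dt, T = 2.5, 1e-4, 1.0                    # constant drive, step size, total time

def dvdt(v, t):
    # Constant drive for clarity; time-dependent synaptic input would enter here via t.
    return (-v + I0) / tau_m

def rk2_step(v, t, h):
    k1 = dvdt(v, t)
    k2 = dvdt(v + h * k1, t + h)
    return v + 0.5 * h * (k1 + k2)

v, t, spikes = 0.0, 0.0, []
while t < T:
    v_new = rk2_step(v, t, dt)
    if v_new >= v_thresh:
        # Locate the in-step spike time t* by linear interpolation of the voltage.
        frac = (v_thresh - v) / (v_new - v)
        t_spike = t + frac * dt
        spikes.append(t_spike)
        # Recalibrate: integrate the remainder of the step from the reset potential.
        v = rk2_step(v_reset, t_spike, dt - frac * dt)
    else:
        v = v_new
    t += dt

print(f"{len(spikes)} spikes; first interspike interval = {spikes[1] - spikes[0]:.4f} s")
```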
Structure-function analysis of genetically defined neuronal populations.
Groh, Alexander; Krieger, Patrik
2013-10-01
Morphological and functional classification of individual neurons is a crucial aspect of the characterization of neuronal networks. Systematic structural and functional analysis of individual neurons is now possible using transgenic mice with genetically defined neurons that can be visualized in vivo or in brain slice preparations. Genetically defined neurons are useful for studying a particular class of neurons and also for more comprehensive studies of the neuronal content of a network. Specific subsets of neurons can be identified by fluorescence imaging of enhanced green fluorescent protein (eGFP) or another fluorophore expressed under the control of a cell-type-specific promoter. The advantages of such genetically defined neurons are not only their homogeneity and suitability for systematic descriptions of networks, but also their tremendous potential for cell-type-specific manipulation of neuronal networks in vivo. This article describes a selection of procedures for visualizing and studying the anatomy and physiology of genetically defined neurons in transgenic mice. We provide information about basic equipment, reagents, procedures, and analytical approaches for obtaining three-dimensional (3D) cell morphologies and determining the axonal input and output of genetically defined neurons. We exemplify with genetically labeled cortical neurons, but the procedures are applicable to other brain regions with little or no alterations.
Spiking Neurons for Analysis of Patterns
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2008-01-01
Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological neurons). These features enable the neurons to adapt their responses to high-rate inputs from sensors, and to adapt their firing thresholds to mitigate noise or effects of potential sensor failure. The mathematical derivation of the SVM starts from a prior model, known in the art as the point soma model, which captures all of the salient properties of neuronal response while keeping the computational cost low. The point-soma latency time is modified to be an exponentially decaying function of the strength of the applied potential. Choosing computational efficiency over biological fidelity, the dendrites surrounding a neuron are represented by simplified compartmental submodels and there are no dendritic spines. Updates to the dendritic potential, calcium-ion concentrations and conductances, and potassium-ion conductances are done by use of equations similar to those of the point soma. Diffusion processes in dendrites are modeled by averaging among nearest-neighbor compartments. Inputs to each of the dendritic compartments come from sensors. Alternatively or in addition, when an affected neuron is part of a pool, inputs can come from other spiking neurons. At present, SVM neural networks are implemented by computational simulation, using algorithms that encode the SVM and its submodels. 
However, it should be possible to implement these neural networks in hardware: The differential equations for the dendritic and cellular processes in the SVM model of spiking neurons map to equivalent circuits that can be implemented directly in analog very-large-scale integrated (VLSI) circuits.
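The compartmental bookkeeping described above can be sketched only very loosely (numpy assumed). In the toy model below, a ring of dendritic compartments leaks, diffuses potential by nearest-neighbour averaging, receives sparse sensor input, and drives a point soma with an adaptive threshold; the time constants, input statistics, and diffusion coefficient are assumptions, and this is not the SVM itself.

```python
import numpy as np

rng = np.random.default_rng(6)
n_comp, steps, dt = 10, 2000, 1e-3
tau_d, tau_s, diff = 0.03, 0.02, 0.2        # dendrite/soma time constants, diffusion rate

dend = np.zeros(n_comp)                     # potentials of a ring of dendritic compartments
v_soma, theta, spikes = 0.0, 1.0, []

for i in range(steps):
    sensor = (rng.random(n_comp) < 0.02) * 1.0                 # sparse sensor input events
    neighbours = 0.5 * (np.roll(dend, 1) + np.roll(dend, -1))  # nearest-neighbour average
    dend += dt / tau_d * (-dend) + diff * dt * (neighbours - dend) + sensor
    v_soma += dt / tau_s * (-v_soma + 5.0 * dend.mean())
    theta += 0.5 * dt * (1.0 - theta)                          # adaptive threshold relaxes to 1
    if v_soma >= theta:
        spikes.append(i * dt)
        v_soma = 0.0
        theta += 0.3                                           # raise the threshold after a spike

print(f"{len(spikes)} output spikes from the point soma in {steps * dt:.1f} s")
```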
Reliability and synchronization in a delay-coupled neuronal network with synaptic plasticity
NASA Astrophysics Data System (ADS)
Pérez, Toni; Uchida, Atsushi
2011-06-01
We investigate the characteristics of reliability and synchronization of a neuronal network of delay-coupled integrate-and-fire neurons. Reliability and synchronization appear in separate regions of the phase space of the parameters considered. The effects of including synaptic plasticity and of different delay values between the connections are also considered. We found that plasticity strongly changes the characteristics of reliability and synchronization in the parameter space of the coupling strength and the drive amplitude for the neuronal network. We also found that delay does not affect the reliability of the network but has a decisive influence on the synchronization of the neurons.
Reducing Neuronal Networks to Discrete Dynamics
Terman, David; Ahn, Sungwoo; Wang, Xueying; Just, Winfried
2008-01-01
We consider a general class of purely inhibitory and excitatory-inhibitory neuronal networks, with a general class of network architectures, and characterize the complex firing patterns that emerge. Our strategy for studying these networks is to first reduce them to a discrete model. In the discrete model, each neuron is represented as a finite number of states and there are rules for how a neuron transitions from one state to another. In this paper, we rigorously demonstrate that the continuous neuronal model can be reduced to the discrete model if the intrinsic and synaptic properties of the cells are chosen appropriately. In a companion paper [1], we analyze the discrete model. PMID:18443649
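The flavour of such a discrete reduction can be conveyed with a toy update rule (numpy assumed; the state set, transition rules, and random network are illustrative and do not reproduce the rigorous reduction of the paper): each neuron is in one of a few states, a ready neuron fires when enough of its presynaptic neighbours fired in the previous step, and fired neurons pass through a refractory state.

```python
import numpy as np

rng = np.random.default_rng(7)
N, p_conn, threshold, steps = 40, 0.15, 2, 60

A = (rng.random((N, N)) < p_conn).astype(int)     # directed connectivity, A[i, j]: i -> j
np.fill_diagonal(A, 0)

READY, FIRING, REFRACTORY = 0, 1, 2
state = np.where(rng.random(N) < 0.2, FIRING, READY)   # seed some initial firing

active_counts = []
for t in range(steps):
    fired = (state == FIRING).astype(int)
    drive = fired @ A                                  # input from cells that just fired
    new_state = np.empty_like(state)
    new_state[state == FIRING] = REFRACTORY            # firing cells become refractory
    new_state[state == REFRACTORY] = READY             # refractory cells recover
    ready = state == READY
    new_state[ready] = np.where(drive[ready] >= threshold, FIRING, READY)
    state = new_state
    active_counts.append(int(fired.sum()))

print("number of firing neurons per step:", active_counts[:20])
```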
Neuron-Inspired Fe3O4/Conductive Carbon Filament Network for High-Speed and Stable Lithium Storage.
Hao, Shu-Meng; Li, Qian-Jie; Qu, Jin; An, Fei; Zhang, Yu-Jiao; Yu, Zhong-Zhen
2018-05-17
Construction of a continuous conductance network with a high electron-transfer rate is extremely important for high-performance energy storage. Owing to their highly efficient mass transport and information transmission, neurons are an ideal model for electron transport, inspiring us to design a neuron-like reaction network for high-performance lithium-ion batteries (LIBs) with Fe3O4 as an example. The reactive cores (Fe3O4) are protected by carbon shells and linked by carbon filaments, constituting an integrated conductance network. Thus, once the reaction starts, the electrons released from every Fe3O4 core are capable of being transferred rapidly through the whole network directly to the external circuit, endowing the nanocomposite with tremendous rate performance and ultralong cycle life. After 1000 cycles at current densities as high as 1 and 2 A g⁻¹, the charge capacities of the as-synthesized nanocomposite remain at 971 and 715 mA h g⁻¹, respectively, much higher than those of reported Fe3O4-based anode materials. The Fe3O4-based conductive network provides a new idea for future developments of high-rate-performance LIBs.
Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim
2015-08-01
Prediction of hearing loss in noisy workplaces is considered an important aspect of hearing conservation programs. Artificial intelligence, as a new approach, can be used to predict complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for the prediction of the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, an audiometric test was carried out using a calibrated audiometer. Personal noise exposure was also measured with a noise dosimeter at the workers' workstations. Finally, the obtained data on five variables that can influence hearing loss were used for the development of the prediction model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. The network structures had one hidden layer with between 5 and 15 neurons. The best-performing network, with one hidden layer and ten neurons, could accurately predict the hearing loss threshold with RMSE = 2.6 dB and R² = 0.89. The results also confirmed that neural networks could provide more accurate predictions than multiple regression. Since occupational hearing loss is frequently incurable, accurate predictions can be used by occupational health experts to modify and improve noise exposure conditions.
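As a rough analogue of the modelling pipeline described (the original work used MATLAB; the sketch below uses scikit-learn on synthetic data, and the five input variables, noise level, and train/test split are assumptions): fit a feed-forward network with one hidden layer of ten neurons to predict a hearing-loss threshold from exposure-related inputs, and report RMSE and R².

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(8)
n = 210                                    # same sample size as the study, synthetic values

# Five synthetic exposure-related predictors (stand-ins for age, exposure duration, dose, ...).
X = rng.normal(size=(n, 5))
y = 25 + 4 * X[:, 0] + 3 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 2.5, n)   # threshold in dB

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"RMSE = {rmse:.1f} dB, R^2 = {r2_score(y_test, pred):.2f}")
```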
Effects of channel noise on firing coherence of small-world Hodgkin-Huxley neuronal networks
NASA Astrophysics Data System (ADS)
Sun, X. J.; Lei, J. Z.; Perc, M.; Lu, Q. S.; Lv, S. J.
2011-01-01
We investigate the effects of channel noise on the firing coherence of Watts-Strogatz small-world networks consisting of biophysically realistic HH neurons in which a fraction of the voltage-gated sodium and potassium ion channels embedded in the neuronal membranes is blocked. The intensity of channel noise is determined by the number of non-blocked ion channels, which depends on the fraction of working ion channels and the membrane patch size under the assumption of homogeneous ion channel density. We find that the firing coherence of the neuronal network can be either enhanced or reduced depending on the source of channel noise. As shown in this paper, sodium channel noise reduces the firing coherence of neuronal networks; in contrast, potassium channel noise enhances it. Furthermore, compared with potassium channel noise, sodium channel noise plays a dominant role in affecting the firing coherence of the neuronal network. Moreover, we find that the observed phenomena are independent of the rewiring probability.
Study on algorithm of process neural network for soft sensing in sewage disposal system
NASA Astrophysics Data System (ADS)
Liu, Zaiwen; Xue, Hong; Wang, Xiaoyi; Yang, Bin; Lu, Siying
2006-11-01
A new method of soft sensing based on a process neural network (PNN) for a sewage disposal system is presented in this paper. The PNN is an extension of the traditional neural network in which the inputs and outputs are time-varying. An aggregation operator is introduced into the process neuron, giving the network the ability to handle information in the two dimensions of space and time simultaneously, so that the data-processing machinery of the biological neuron is imitated better than by traditional neurons. A three-layer process neural network for soft sensing, in which the hidden layer consists of process neurons and the input and output layers of common neurons, is discussed. The intelligent soft sensing based on the PNN may be used to measure the effluent BOD (Biochemical Oxygen Demand) of a sewage disposal system, and good training results for soft sensing were obtained with the method.
Kapucu, Fikret E.; Välkki, Inkeri; Mikkonen, Jarno E.; Leone, Chiara; Lenk, Kerstin; Tanskanen, Jarno M. A.; Hyttinen, Jari A. K.
2016-01-01
Synchrony and asynchrony are essential aspects of the functioning of interconnected neuronal cells and networks. New information on neuronal synchronization can be expected to aid in understanding these systems. Synchronization provides insight in the functional connectivity and the spatial distribution of the information processing in the networks. Synchronization is generally studied with time domain analysis of neuronal events, or using direct frequency spectrum analysis, e.g., in specific frequency bands. However, these methods have their pitfalls. Thus, we have previously proposed a method to analyze temporal changes in the complexity of the frequency of signals originating from different network regions. The method is based on the correlation of time varying spectral entropies (SEs). SE assesses the regularity, or complexity, of a time series by quantifying the uniformity of the frequency spectrum distribution. It has been previously employed, e.g., in electroencephalogram analysis. Here, we revisit our correlated spectral entropy method (CorSE), providing evidence of its justification, usability, and benefits. Here, CorSE is assessed with simulations and in vitro microelectrode array (MEA) data. CorSE is first demonstrated with a specifically tailored toy simulation to illustrate how it can identify synchronized populations. To provide a form of validation, the method was tested with simulated data from integrate-and-fire model based computational neuronal networks. To demonstrate the analysis of real data, CorSE was applied on in vitro MEA data measured from rat cortical cell cultures, and the results were compared with three known event based synchronization measures. Finally, we show the usability by tracking the development of networks in dissociated mouse cortical cell cultures. The results show that temporal correlations in frequency spectrum distributions reflect the network relations of neuronal populations. In the simulated data, CorSE unraveled the synchronizations. With the real in vitro MEA data, CorSE produced biologically plausible results. Since CorSE analyses continuous data, it is not affected by possibly poor spike or other event detection quality. We conclude that CorSE can reveal neuronal network synchronization based on in vitro MEA field potential measurements. CorSE is expected to be equally applicable also in the analysis of corresponding in vivo and ex vivo data analysis. PMID:27803660
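The core computation behind CorSE is straightforward to sketch (numpy and scipy assumed; the window length, synthetic channels, and Welch settings are illustrative and do not reproduce the full pipeline of the paper): compute the spectral entropy of each channel in sliding windows and correlate the resulting entropy time series between channels.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(9)
fs, dur = 1000, 60                                   # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)

# Two synthetic "channels": a shared, slowly modulated oscillation plus independent noise.
shared = np.sin(2 * np.pi * (5 + 3 * np.sin(2 * np.pi * 0.05 * t)) * t)
ch1 = shared + 0.5 * rng.normal(size=t.size)
ch2 = shared + 0.5 * rng.normal(size=t.size)

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectral density."""
    f, psd = welch(x, fs=fs, nperseg=256)
    p = psd / psd.sum()
    return -np.sum(p * np.log(p + 1e-12))

win = fs                                             # 1-second sliding windows
se1 = [spectral_entropy(ch1[i:i + win]) for i in range(0, ch1.size - win, win)]
se2 = [spectral_entropy(ch2[i:i + win]) for i in range(0, ch2.size - win, win)]

corse = np.corrcoef(se1, se2)[0, 1]
print(f"correlation of the two spectral-entropy time series: {corse:.2f}")
```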
Cocas, Laura A.; Fernandez, Gloria; Barch, Mariya; Doll, Jason; Zamora Diaz, Ivan
2016-01-01
The mammalian cerebral cortex is a dense network composed of local, subcortical, and intercortical synaptic connections. As a result, mapping cell type-specific neuronal connectivity in the cerebral cortex in vivo has long been a challenge for neurobiologists. In particular, the development of excitatory and inhibitory interneuron presynaptic input has been hard to capture. We set out to analyze the development of this connectivity in the first postnatal month using a murine model. First, we surveyed the connectivity of one of the earliest populations of neurons in the brain, the Cajal-Retzius (CR) cells in the neocortex, which are known to be critical for cortical layer formation and are hypothesized to be important in the establishment of early cortical networks. We found that CR cells receive inputs from deeper-layer excitatory neurons and inhibitory interneurons in the first postnatal week. We also found that both excitatory pyramidal neurons and inhibitory interneurons received broad inputs in the first postnatal week, including inputs from CR cells. Expanding our analysis into the more mature brain, we assessed the inputs onto inhibitory interneurons and excitatory projection neurons, labeling neuronal progenitors with Cre drivers to study discrete populations of neurons in older cortex, and found that excitatory cortical and subcortical inputs are refined by the fourth week of development, whereas local inhibitory inputs increase during this postnatal period. Cell type-specific circuit mapping is specific, reliable, and effective, and can be used on molecularly defined subtypes to determine connectivity in the cortex. SIGNIFICANCE STATEMENT Mapping cortical connectivity in the developing mammalian brain has been an intractable problem, in part because it has not been possible to analyze connectivity with cell subtype precision. Our study systematically targets the presynaptic connections of discrete neuronal subtypes in both the mature and developing cerebral cortex. We analyzed the connections that Cajal-Retzius cells make and receive, and found that these cells receive inputs from deeper-layer excitatory neurons and inhibitory interneurons in the first postnatal week. We assessed the inputs onto inhibitory interneurons and excitatory projection neurons, the major two types of neurons in the cortex, and found that excitatory inputs are refined by the fourth week of development, whereas local inhibitory inputs increase during this postnatal period. PMID:26985044
A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.
Hightower, M; Gross, G W
1985-11-01
Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.
Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons
Setareh, Hesam; Deger, Moritz; Petersen, Carl C. H.; Gerstner, Wulfram
2017-01-01
Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both wiring and synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons which we call weight-hub neurons, that are characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contain assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as it likely is present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly. PMID:28690508
The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network
Malvaut, Sarah; Saghatelyan, Armen
2016-01-01
The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or synapse formation and elimination. In a few regions, however, plasticity is brought by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB) where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour. PMID:26839709
The frequency preference of neurons and synapses in a recurrent oscillatory network.
Tseng, Hua-an; Martinez, Diana; Nadim, Farzan
2014-09-17
A variety of neurons and synapses shows a maximal response at a preferred frequency, generally considered to be important in shaping network activity. We are interested in whether all neurons and synapses in a recurrent oscillatory network can have preferred frequencies and, if so, whether these frequencies are the same or correlated, and whether they influence the network activity. We address this question using identified neurons in the pyloric network of the crab Cancer borealis. Previous work has shown that the pyloric pacemaker neurons exhibit membrane potential resonance whose resonance frequency is correlated with the network frequency. The follower lateral pyloric (LP) neuron makes reciprocally inhibitory synapses with the pacemakers. We find that LP shows resonance at a higher frequency than the pacemakers and the network frequency falls between the two. We also find that the reciprocal synapses between the pacemakers and LP have preferred frequencies but at significantly lower values. The preferred frequency of the LP to pacemaker synapse is correlated with the presynaptic preferred frequency, which is most pronounced when the peak voltage of the LP waveform is within the dynamic range of the synaptic activation curve and a shift in the activation curve by the modulatory neuropeptide proctolin shifts the frequency preference. Proctolin also changes the power of the LP neuron resonance without significantly changing the resonance frequency. These results indicate that different neuron types and synapses in a network may have distinct preferred frequencies, which are subject to neuromodulation and may interact to shape network oscillations. Copyright © 2014 the authors 0270-6474/14/3412933-13$15.00/0.
On the Dynamics of the Spontaneous Activity in Neuronal Networks
Bonifazi, Paolo; Ruaro, Maria Elisabetta; Torre, Vincent
2007-01-01
Most neuronal networks, even in the absence of external stimuli, produce spontaneous bursts of spikes separated by periods of reduced activity. The origin and functional role of these neuronal events are still unclear. The present work shows that the spontaneous activity of two very different networks, intact leech ganglia and dissociated cultures of rat hippocampal neurons, share several features. Indeed, in both networks: i) the inter-spike intervals distribution of the spontaneous firing of single neurons is either regular or periodic or bursting, with the fraction of bursting neurons depending on the network activity; ii) bursts of spontaneous spikes have the same broad distributions of size and duration; iii) the degree of correlated activity increases with the bin width, and the power spectrum of the network firing rate has a 1/f behavior at low frequencies, indicating the existence of long-range temporal correlations; iv) the activity of excitatory synaptic pathways mediated by NMDA receptors is necessary for the onset of the long-range correlations and for the presence of large bursts; v) blockage of inhibitory synaptic pathways mediated by GABAA receptors causes instead an increase in the correlation among neurons and leads to a burst distribution composed only of very small and very large bursts. These results suggest that the spontaneous electrical activity in neuronal networks with different architectures and functions can have very similar properties and common dynamics. PMID:17502919
Kemp, Paul J; Rushton, David J; Yarova, Polina L; Schnell, Christian; Geater, Charlene; Hancock, Jane M; Wieland, Annalena; Hughes, Alis; Badder, Luned; Cope, Emma; Riccardi, Daniela; Randall, Andrew D; Brown, Jonathan T; Allen, Nicholas D; Telezhkin, Vsevolod
2016-11-15
Neurons differentiated from pluripotent stem cells using established neural culture conditions often exhibit functional deficits. Recently, we have developed enhanced media which both synchronize the neurogenesis of pluripotent stem cell-derived neural progenitors and accelerate their functional maturation; together these media are termed SynaptoJuice. This pair of media are pro-synaptogenic and generate authentic, mature synaptic networks of connected forebrain neurons from a variety of induced pluripotent and embryonic stem cell lines. Such enhanced rate and extent of synchronized maturation of pluripotent stem cell-derived neural progenitor cells generates neurons which are characterized by a relatively hyperpolarized resting membrane potential, higher spontaneous and induced action potential activity, enhanced synaptic activity, more complete development of a mature inhibitory GABA A receptor phenotype and faster production of electrical network activity when compared to standard differentiation media. This entire process - from pre-patterned neural progenitor to active neuron - takes 3 weeks or less, making it an ideal platform for drug discovery and disease modelling in the fields of human neurodegenerative and neuropsychiatric disorders, such as Huntington's disease, Parkinson's disease, Alzheimer's disease and Schizophrenia. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Neuronal plasticity and thalamocortical sleep and waking oscillations
Timofeev, Igor
2011-01-01
Throughout life, the thalamocortical (TC) network alternates between activated states (wake or rapid eye movement sleep) and a slow oscillatory state that dominates slow-wave sleep. The patterns of neuronal firing are different during these distinct states. I propose that, due to relatively regular firing, the activated states preset some steady-state synaptic plasticity and that the silent periods of slow-wave sleep contribute to a release from this steady-state synaptic plasticity. In this respect, I discuss how states of vigilance affect short-, mid-, and long-term synaptic plasticity, intrinsic neuronal plasticity, as well as homeostatic plasticity. Finally, I suggest that the slow oscillation is an intrinsic property of the cortical network and that brain homeostatic mechanisms are tuned to use all forms of plasticity to bring the cortical network to the state of slow oscillation. However, a prolonged and profound shift from this homeostatic balance could lead to the development of paroxysmal hyperexcitability and seizures, as in the case of brain trauma. PMID:21854960
Numbers And Gains Of Neurons In Winner-Take-All Networks
NASA Technical Reports Server (NTRS)
Brown, Timothy X.
1993-01-01
Report presents theoretical study of gains required in neurons to implement winner-take-all electronic neural network of given size and related question of maximum size of winner-take-all network in which neurons have specified sigmoid transfer or response function with specified gain.
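A toy illustration of the winner-take-all setting discussed in the report, assuming continuous-valued neurons with mutual inhibition and a sigmoid response whose gain is a free parameter; the dynamics, parameter values, and inputs below are illustrative, not the report's specific formulation.

```python
import numpy as np

def sigmoid(x, gain):
    return 1.0 / (1.0 + np.exp(-gain * x))

def winner_take_all(inputs, gain=8.0, inhibition=1.5, steps=400, dt=0.05):
    """Relax a mutually inhibitory network; with sufficient gain only the
    neuron receiving the largest input remains active."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        y = sigmoid(x, gain)
        # each unit is driven by its own input and inhibited by every other unit's output
        drive = inputs - inhibition * (y.sum() - y)
        x += dt * (-x + drive)
    return sigmoid(x, gain)

inputs = np.array([0.30, 0.55, 0.50, 0.10])
print(np.round(winner_take_all(inputs), 3))            # approaches [0, 1, 0, 0] when the gain suffices
print(np.round(winner_take_all(inputs, gain=1.0), 3))  # too little gain: the outputs stay graded
```

The second call illustrates the report's theme: below some gain the competition never resolves into a single winner, and the required gain grows with the number of competing units.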
Vasquez, Juan C.; Houweling, Arthur R.; Tiesinga, Paul
2013-01-01
Neuronal networks in rodent barrel cortex are characterized by stable low baseline firing rates. However, they are sensitive to the action potentials of single neurons as suggested by recent single-cell stimulation experiments that reported quantifiable behavioral responses in response to short spike trains elicited in single neurons. Hence, these networks are stable against internally generated fluctuations in firing rate but at the same time remain sensitive to similarly-sized externally induced perturbations. We investigated stability and sensitivity in a simple recurrent network of stochastic binary neurons and determined numerically the effects of correlation between the number of afferent (“in-degree”) and efferent (“out-degree”) connections in neurons. The key advance reported in this work is that anti-correlation between in-/out-degree distributions increased the stability of the network in comparison to networks with no correlation or positive correlations, while being able to achieve the same level of sensitivity. The experimental characterization of degree distributions is difficult because all pre-synaptic and post-synaptic neurons have to be identified and counted. We explored whether the statistics of network motifs, which requires the characterization of connections between small subsets of neurons, could be used to detect evidence for degree anti-correlations. We find that the sample frequency of the 3-neuron “ring” motif (1→2→3→1), can be used to detect degree anti-correlation for sub-networks of size 30 using about 50 samples, which is of significance because the necessary measurements are achievable experimentally in the near future. Taken together, we hypothesize that barrel cortex networks exhibit degree anti-correlations and specific network motif statistics. PMID:24223550
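To illustrate the kind of motif statistic mentioned above, here is a small sketch that counts directed 3-cycles (the 1→2→3→1 "ring") in sampled sub-networks of size 30. Note that trace(A³)/3 counts all closed directed triads rather than strictly induced motifs, and the generic Erdős-Rényi generator is only a stand-in for the authors' degree-correlated networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_directed_net(n=30, p=0.2):
    """Directed Erdos-Renyi adjacency matrix without self-connections."""
    A = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(A, 0)
    return A

def ring_motif_count(A):
    """Number of directed 3-cycles i->j->k->i (each cycle counted once)."""
    return int(np.trace(A @ A @ A) // 3)

samples = [ring_motif_count(random_directed_net()) for _ in range(50)]
print("mean ring-motif count over 50 sampled sub-networks:", np.mean(samples))
```

Comparing such sample counts between networks built with and without in-/out-degree anti-correlation is the kind of test the abstract proposes as experimentally feasible.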
Munsell, B C; Wu, G; Fridriksson, J; Thayer, K; Mofrad, N; Desisto, N; Shen, D; Bonilha, L
2017-09-09
Impaired confrontation naming is a common symptom of temporal lobe epilepsy (TLE). The neurobiological mechanisms underlying this impairment are poorly understood but may indicate a structural disorganization of broadly distributed neuronal networks that support naming ability. Importantly, naming is frequently impaired in other neurological disorders, and by contrasting the neuronal structures supporting naming in TLE with other diseases, it will become possible to elucidate the common systems supporting naming. We aimed to evaluate the neuronal networks that support naming in TLE by using a machine learning algorithm intended to predict naming performance in subjects with medication refractory TLE using only the structural brain connectome reconstructed from diffusion tensor imaging. A connectome-based prediction framework was developed using network properties from anatomically defined brain regions across the entire brain, which were used in a multi-task machine learning algorithm followed by support vector regression. Nodal eigenvector centrality, a measure of regional network integration, predicted approximately 60% of the variance in naming. The nodes with the highest regression weight were bilaterally distributed among perilimbic sub-networks involving mainly the medial and lateral temporal lobe regions. In the context of emerging evidence regarding the role of large structural networks that support language processing, our results suggest intact naming relies on the integration of sub-networks, as opposed to being dependent on isolated brain areas. In the case of TLE, these sub-networks may be disproportionately indicative of naming processes that depend on semantic integration from memory and lexical retrieval, as opposed to multi-modal perception or motor speech production. Copyright © 2017. Published by Elsevier Inc.
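To make the network measure concrete, here is a small sketch computing nodal eigenvector centrality from a structural connectivity matrix by power iteration. The random symmetric matrix stands in for a DTI-derived connectome; the prediction pipeline itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

# Stand-in for a DTI-derived structural connectome: symmetric, non-negative weights
n_regions = 90
A = rng.random((n_regions, n_regions))
A = np.triu(A, 1) * (rng.random((n_regions, n_regions)) < 0.2)
A = A + A.T

def eigenvector_centrality(W, iters=200, tol=1e-10):
    """Power iteration for the leading eigenvector of a non-negative matrix."""
    x = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    for _ in range(iters):
        x_new = W @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

cent = eigenvector_centrality(A)
print("five most network-integrated regions:", np.argsort(cent)[-5:][::-1])
```

In a prediction framework such as the one described, these per-region centrality values would then serve as features for the regression step.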
Functional Interactions between Mammalian Respiratory Rhythmogenic and Premotor Circuitry
Song, Hanbing; Hayes, John A.; Vann, Nikolas C.; Wang, Xueying; LaMar, M. Drew
2016-01-01
Breathing in mammals depends on rhythms that originate from the preBötzinger complex (preBötC) of the ventral medulla and a network of brainstem and spinal premotor neurons. The rhythm-generating core of the preBötC, as well as some premotor circuits, consist of interneurons derived from Dbx1-expressing precursors (Dbx1 neurons), but the structure and function of these networks remain incompletely understood. We previously developed a cell-specific detection and laser ablation system to interrogate respiratory network structure and function in a slice model of breathing that retains the preBötC, the respiratory-related hypoglossal (XII) motor nucleus and XII premotor circuits. In spontaneously rhythmic slices, cumulative ablation of Dbx1 preBötC neurons decreased XII motor output by ∼50% after ∼15 cell deletions, and then decelerated and terminated rhythmic function altogether as the tally increased to ∼85 neurons. In contrast, cumulatively deleting Dbx1 XII premotor neurons decreased motor output monotonically but did not affect frequency nor stop XII output regardless of the ablation tally. Here, we couple an existing preBötC model with a premotor population in several topological configurations to investigate which one may replicate the laser ablation experiments best. If the XII premotor population is a “small-world” network (rich in local connections with sparse long-range connections among constituent premotor neurons) and connected with the preBötC such that the total number of incoming synapses remains fixed, then the in silico system successfully replicates the in vitro laser ablation experiments. This study proposes a feasible configuration for circuits consisting of Dbx1-derived interneurons that generate inspiratory rhythm and motor pattern. SIGNIFICANCE STATEMENT To produce a breathing-related motor pattern, a brainstem core oscillator circuit projects to a population of premotor interneurons, but the assemblage of this network remains incompletely understood. Here we applied network modeling and numerical simulation to discover respiratory circuit configurations that successfully replicate photonic cell ablation experiments targeting either the core oscillator or premotor network, respectively. If premotor neurons are interconnected in a so-called “small-world” network with a fixed number of incoming synapses balanced between premotor and rhythmogenic neurons, then our simulations match their experimental benchmarks. These results provide a framework of experimentally testable predictions regarding the rudimentary structure and function of respiratory rhythm- and pattern-generating circuits in the brainstem of mammals. PMID:27383596
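A minimal sketch of the "small-world premotor population with a fixed number of incoming synapses" configuration described above, using networkx. The population sizes, degrees, and rewiring probability are illustrative assumptions, and no neuronal dynamics are simulated here.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)

n_core, n_premotor = 300, 100   # rhythmogenic (preBotC-like) and premotor pool sizes (assumed)
k_local, p_rewire = 6, 0.1      # small-world parameters for the premotor pool (assumed)
total_in_degree = 20            # fixed number of incoming synapses per premotor cell (assumed)

# "Small-world" premotor-premotor wiring: mostly local connections plus sparse shortcuts
premotor_graph = nx.connected_watts_strogatz_graph(n_premotor, k_local, p_rewire, seed=2)

# Balance each premotor cell's inputs so its total in-degree stays fixed:
# whatever is not provided locally comes from the rhythmogenic core population.
core_inputs = {}
for i in premotor_graph.nodes:
    n_local = premotor_graph.degree[i]
    n_from_core = max(total_in_degree - n_local, 0)
    core_inputs[i] = rng.choice(n_core, size=n_from_core, replace=False)

print("premotor clustering coefficient:", round(nx.average_clustering(premotor_graph), 2))
print("premotor mean shortest path:", round(nx.average_shortest_path_length(premotor_graph), 2))
print("premotor cell 0 receives", premotor_graph.degree[0], "local and",
      len(core_inputs[0]), "core inputs")
```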
Sustained synchronized neuronal network activity in a human astrocyte co-culture system
Kuijlaars, Jacobine; Oyelami, Tutu; Diels, Annick; Rohrbacher, Jutta; Versweyveld, Sofie; Meneghello, Giulia; Tuefferd, Marianne; Verstraelen, Peter; Detrez, Jan R.; Verschuuren, Marlies; De Vos, Winnok H.; Meert, Theo; Peeters, Pieter J.; Cik, Miroslav; Nuydens, Rony; Brône, Bert; Verheyen, An
2016-01-01
Impaired neuronal network function is a hallmark of neurodevelopmental and neurodegenerative disorders such as autism, schizophrenia, and Alzheimer’s disease and is typically studied using genetically modified cellular and animal models. Weak predictive capacity and poor translational value of these models urge for better human derived in vitro models. The implementation of human induced pluripotent stem cells (hiPSCs) allows studying pathologies in differentiated disease-relevant and patient-derived neuronal cells. However, the differentiation process and growth conditions of hiPSC-derived neurons are non-trivial. In order to study neuronal network formation and (mal)function in a fully humanized system, we have established an in vitro co-culture model of hiPSC-derived cortical neurons and human primary astrocytes that recapitulates neuronal network synchronization and connectivity within three to four weeks after final plating. Live cell calcium imaging, electrophysiology and high content image analyses revealed an increased maturation of network functionality and synchronicity over time for co-cultures compared to neuronal monocultures. The cells express GABAergic and glutamatergic markers and respond to inhibitors of both neurotransmitter pathways in a functional assay. The combination of this co-culture model with quantitative imaging of network morphofunction is amenable to high throughput screening for lead discovery and drug optimization for neurological diseases. PMID:27819315
Leader neurons in leaky integrate and fire neural network simulations.
Zbinden, Cyrille
2011-10-01
In this paper, we highlight the topological properties of leader neurons whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively network spike) more often than we expect by chance considering its mean firing rate. This means that leader neurons have some burst triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate 2D neural networks. To build our simulations, we choose the leaky integrate and fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model produces stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Except for these first two properties, the conditions required for a neuron to be a leader neuron are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend in the properties required for a neuron to be a leader neuron. Our main finding is that a leader neuron sends signals to many excitatory neurons but to only a few inhibitory neurons, and receives signals from only a few other excitatory neurons. Our linear analysis identifies five essential properties of leader neurons, each with a different relative importance. This means that, considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of predicting which neuron is a good leader neuron and which is not. Our prediction formula correctly assesses leadership for at least ninety percent of neurons.
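A compact sketch in the spirit of the simulations described above: a small network of leaky integrate-and-fire neurons with random excitatory/inhibitory connectivity, in which we record which neuron fires first before each detected network burst. All parameter values and the burst-onset criterion are illustrative assumptions and may need tuning before robust bursting appears.

```python
import numpy as np

rng = np.random.default_rng(3)

n, n_exc = 100, 80                       # 80 excitatory, 20 inhibitory neurons (assumed)
tau, v_th, dt = 20.0, 1.0, 1.0           # membrane time constant (ms), threshold, time step
W = (rng.random((n, n)) < 0.1) * 0.12    # sparse random connectivity
W[:, n_exc:] *= -2.0                     # columns of inhibitory presynaptic neurons
np.fill_diagonal(W, 0.0)

v = rng.random(n)
quiet = 0                                # consecutive silent time steps
candidate, window, accum = None, 0, 0    # tentative leader and burst-size accumulator
burst_leaders = []

for _ in range(50000):
    spiked = v >= v_th
    v[spiked] = 0.0
    v += dt * (-v / tau + 0.06 + 0.05 * rng.standard_normal(n)) + W @ spiked
    k = int(spiked.sum())

    if candidate is None and k > 0 and quiet >= 10:
        # first spike after a quiet period: remember it and watch the next 15 steps
        candidate, window, accum = int(np.flatnonzero(spiked)[0]), 15, 0
    if candidate is not None:
        accum += k
        window -= 1
        if window == 0:
            if accum >= 30:              # the first spike was followed by a network burst
                burst_leaders.append(candidate)
            candidate = None
    quiet = 0 if k > 0 else quiet + 1

counts = np.bincount(np.array(burst_leaders, dtype=int), minlength=n)
print("putative leaders (fire first in bursts most often):", np.argsort(counts)[-5:][::-1])
```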
The interplay between neurons and glia in synapse development and plasticity.
Stogsdill, Jeff A; Eroglu, Cagla
2017-02-01
In the brain, the formation of complex neuronal networks amenable to experience-dependent remodeling is complicated by the diversity of neurons and synapse types. The establishment of a functional brain depends not only on neurons, but also non-neuronal glial cells. Glia are in continuous bi-directional communication with neurons to direct the formation and refinement of synaptic connectivity. This article reviews important findings, which uncovered cellular and molecular aspects of the neuron-glia cross-talk that govern the formation and remodeling of synapses and circuits. In vivo evidence demonstrating the critical interplay between neurons and glia will be the major focus. Additional attention will be given to how aberrant communication between neurons and glia may contribute to neural pathologies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Adaptive Neurons For Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1990-01-01
Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.
Recent Developments in VSD Imaging of Small Neuronal Networks
ERIC Educational Resources Information Center
Hill, Evan S.; Bruno, Angela M.; Frost, William N.
2014-01-01
Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit…
NASA Astrophysics Data System (ADS)
Sikora, R.; Chady, T.; Baniukiewicz, P.; Caryk, M.; Piekarczyk, B.
2010-02-01
Nondestructive testing and evaluation are under continuous development. Current research is concentrated on three main topics: advancement of existing methods, introduction of novel methods, and development of artificial intelligence systems for automatic defect recognition (ADR). An automatic defect classification algorithm comprises two main tasks: creating a defect database and preparing a defect classifier. Here, the database was built using defect features that describe all geometrical and texture properties of the defect. Almost twenty carefully selected features calculated for flaws extracted from real radiograms were used. The radiograms were obtained from the shipbuilding industry and were verified by a qualified operator. Two weld-defect classifiers based on artificial neural networks were proposed and compared. The first model consisted of a single neural network in which each output neuron corresponded to a different defect group. The second model contained five neural networks; each of these networks had a single output neuron and was responsible for detecting defects from one group. In order to evaluate the effectiveness of the neural network classifiers, the mean square errors were calculated for test radiograms and compared.
Temporal coding in a silicon network of integrate-and-fire neurons.
Liu, Shih-Chii; Douglas, Rodney
2004-09-01
Spatio-temporal processing of spike trains by neuronal networks depends on a variety of mechanisms distributed across synapses, dendrites, and somata. In natural systems, the spike trains and the processing mechanisms cohere through their common physical instantiation. This coherence is lost when the natural system is encoded for simulation on a general purpose computer. By contrast, analog VLSI circuits are, like neurons, inherently related by their real-time physics, and so could provide a useful substrate for exploring neuronlike event-based processing. Here, we describe a hybrid analog-digital VLSI chip comprising a set of integrate-and-fire neurons and short-term dynamical synapses that can be configured into simple network architectures with some properties of neocortical neuronal circuits. We show that, despite considerable fabrication variance in the properties of individual neurons, the chip offers a viable substrate for exploring real-time spike-based processing in networks of neurons.
Obenhaus, Horst A; Rozov, Andrei; Bertocchi, Ilaria; Tang, Wannan; Kirsch, Joachim; Betz, Heinrich; Sprengel, Rolf
2016-01-01
The causal interrogation of neuronal networks involved in specific behaviors requires the spatially and temporally controlled modulation of neuronal activity. For long-term manipulation of neuronal activity, chemogenetic tools provide a reasonable alternative to short-term optogenetic approaches. Here we show that virus mediated gene transfer of the ivermectin (IVM) activated glycine receptor mutant GlyRα1 (AG) can be used for the selective and reversible silencing of specific neuronal networks in mice. In the striatum, dorsal hippocampus, and olfactory bulb, GlyRα1 (AG) promoted IVM dependent effects in representative behavioral assays. Moreover, GlyRα1 (AG) mediated silencing had a strong and reversible impact on neuronal ensemble activity and c-Fos activation in the olfactory bulb. Together our results demonstrate that long-term, reversible and re-inducible neuronal silencing via GlyRα1 (AG) is a promising tool for the interrogation of network mechanisms underlying the control of behavior and memory formation.
Dlx1/2 and Otp coordinate the production of hypothalamic GHRH- and AgRP-neurons.
Lee, Bora; Kim, Janghyun; An, Taekyeong; Kim, Sangsoo; Patel, Esha M; Raber, Jacob; Lee, Soo-Kyung; Lee, Seunghee; Lee, Jae W
2018-05-23
Despite the critical roles of the hypothalamic arcuate neurons in controlling growth and energy homeostasis, the gene regulatory network directing their development remains unclear. Here we report that the transcription factors Dlx1/2 and Otp coordinate the balanced generation of two functionally related neuron types in the hypothalamic arcuate nucleus: GHRH-neurons, which promote growth, and AgRP-neurons, which control feeding and energy expenditure. Dlx1/2-deficient mice show a loss of GHRH-neurons and an increase of AgRP-neurons, and consistently develop dwarfism and consume less energy. These results indicate that Dlx1/2 are crucial for specifying the GHRH-neuronal identity and, simultaneously, for suppressing AgRP-neuronal fate. We further show that Otp is required for the generation of AgRP-neurons and that Dlx1/2 repress the expression of Otp by directly binding the Otp gene. Together, our study demonstrates that the identity of GHRH- and AgRP-neurons is synchronously specified and segregated by the Dlx1/2-Otp gene regulatory axis.
Loohuis, Nikkie FM Olde; Kasri, Nael Nadif; Glennon, Jeffrey C; van Bokhoven, Hans; Hébert, Sébastien S; Kaplan, Barry B.; Martens, Gerard JM; Aschrafi, Armaz
2016-01-01
MicroRNAs (miRs) are small regulatory molecules, which orchestrate neuronal development and plasticity through modulation of complex gene networks. microRNA-137 (miR-137) is a brain-enriched RNA with a critical role in regulating brain development and in mediating synaptic plasticity. Importantly, mutations in this miR are associated with the pathoetiology of schizophrenia (SZ), and there is a widespread assumption that disruptions in miR-137 expression lead to aberrant expression of gene regulatory networks associated with SZ. To systematically identify the mRNA targets for this miR, we performed miR-137 gain- and loss-of-function experiments in primary rat hippocampal neurons and profiled differentially expressed mRNAs through next-generation sequencing. We identified 500 genes that were bidirectionally activated or repressed in their expression by the modulation of miR-137 levels. Gene ontology analysis using two independent software resources suggested functions for these miR-137-regulated genes in neurodevelopmental processes, neuronal maturation processes and cell maintenance, all of which known to be critical for proper brain circuitry formation. Since many of the putative miR-137 targets identified here also have been previously shown to be associated with SZ, we propose that this miR acts as a critical gene network hub contributing to the pathophysiology of this neurodevelopmental disorder. PMID:26925706
Defects formation and wave emitting from defects in excitable media
NASA Astrophysics Data System (ADS)
Ma, Jun; Xu, Ying; Tang, Jun; Wang, Chunni
2016-05-01
Abnormal electrical activities in the neuronal system can be associated with some neuronal diseases. Indeed, external forcing can cause breakdown, or even collapse, of the nervous system under appropriate conditions. Excitable media can sometimes be described by neuronal networks with different topologies. The collective behavior of neurons can show complex spatiotemporal dynamics and spatial distributions of electrical activity due to self-organization, and also due to regulation from the central nervous system. Defects in the nervous system can emit continuous waves or pulses, generating pacemaker-like sources that perturb normal signal propagation. How do these defects develop? In this paper, a network of neurons is designed as a two-dimensional square array with nearest-neighbor connections, and the formation mechanism of defects is investigated by detecting the wave propagation induced by external forcing. It is found that defects can be induced by external periodic forcing applied at the boundary, and the waves emitted from these defects can then keep balance with the waves excited by the external forcing.
McConnell, Michael J; Moran, John V; Abyzov, Alexej; Akbarian, Schahram; Bae, Taejeong; Cortes-Ciriano, Isidro; Erwin, Jennifer A; Fasching, Liana; Flasch, Diane A; Freed, Donald; Ganz, Javier; Jaffe, Andrew E; Kwan, Kenneth Y; Kwon, Minseok; Lodato, Michael A; Mills, Ryan E; Paquola, Apua C M; Rodin, Rachel E; Rosenbluh, Chaggai; Sestan, Nenad; Sherman, Maxwell A; Shin, Joo Heon; Song, Saera; Straub, Richard E; Thorpe, Jeremy; Weinberger, Daniel R; Urban, Alexander E; Zhou, Bo; Gage, Fred H; Lehner, Thomas; Senthil, Geetha; Walsh, Christopher A; Chess, Andrew; Courchesne, Eric; Gleeson, Joseph G; Kidd, Jeffrey M; Park, Peter J; Pevsner, Jonathan; Vaccarino, Flora M
2017-04-28
Neuropsychiatric disorders have a complex genetic architecture. Human genetic population-based studies have identified numerous heritable sequence and structural genomic variants associated with susceptibility to neuropsychiatric disease. However, these germline variants do not fully account for disease risk. During brain development, progenitor cells undergo billions of cell divisions to generate the ~80 billion neurons in the brain. The failure to accurately repair DNA damage arising during replication, transcription, and cellular metabolism amid this dramatic cellular expansion can lead to somatic mutations. Somatic mutations that alter subsets of neuronal transcriptomes and proteomes can, in turn, affect cell proliferation and survival and lead to neurodevelopmental disorders. The long life span of individual neurons and the direct relationship between neural circuits and behavior suggest that somatic mutations in small populations of neurons can significantly affect individual neurodevelopment. The Brain Somatic Mosaicism Network has been founded to study somatic mosaicism both in neurotypical human brains and in the context of complex neuropsychiatric disorders. Copyright © 2017, American Association for the Advancement of Science.
Schönweiler, R; Kaese, S; Möller, S; Rinscheid, A; Ptok, M
1996-12-05
Neuronal networks are computer-based techniques for the evaluation and control of complex information systems and processes. So far, they have been used in engineering, telecommunications, artificial speech and speech recognition. A newer approach in neuronal networks is the self-organizing map (Kohonen map). In the phase of 'learning', the map adapts to the patterns of the primary signals. In the phase of 'using the map', the map unit whose field of primary signals most resembles the input signal is activated and is called the 'winner'. In our study, we recorded the cries of newborns and young infants using digital audio tape (DAT) and a high quality microphone. The cries were elicited by tactile stimuli while the children were wearing headphones. In 27 cases, delayed auditory feedback was presented to the children using a headphone and an additional three-head tape-recorder. Spectrographic characteristics of the cries were classified by 20-step Bark spectra and then applied to the neuronal networks. It was possible to recognize similarities between different cries of the same children as well as interindividual differences, which are also audible to experienced listeners. Differences were obvious in profound hearing loss. We know much about the cries of both healthy and sick infants, but a reliable investigation regimen that can be used for clinical routine purposes has not yet been developed. If, in the future, it becomes possible to classify spectrographic characteristics automatically, even if they are not audible, neuronal networks may be helpful in the early diagnosis of infant diseases.
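A minimal self-organizing (Kohonen) map sketch in the spirit of the approach described, assuming 20-dimensional Bark-like spectral vectors as input; the map size, learning schedule, and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n_features, map_h, map_w = 20, 8, 8          # 20-step spectra mapped onto an 8x8 grid
weights = rng.random((map_h, map_w, n_features))
grid = np.stack(np.meshgrid(np.arange(map_h), np.arange(map_w), indexing="ij"), axis=-1)

def best_matching_unit(x):
    """Return the (row, col) of the map unit closest to input x: the 'winner'."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def train(data, epochs=50, lr0=0.5, sigma0=3.0):
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 0.5   # shrinking neighbourhood radius
        for x in rng.permutation(data):
            bmu = np.array(best_matching_unit(x))
            dist2 = np.sum((grid - bmu) ** 2, axis=-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood function
            weights[:] = weights + lr * h * (x - weights)

# Synthetic "cry spectra": two clusters standing in for two cry types
data = np.vstack([rng.normal(0.3, 0.05, (100, n_features)),
                  rng.normal(0.7, 0.05, (100, n_features))])
train(data)
print("winners for one sample of each cluster:",
      best_matching_unit(data[0]), best_matching_unit(data[150]))
```

After training, samples from the two synthetic clusters land on distinct map regions, which is the property the study exploits to compare cries within and between infants.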
Rybak, I A; O'Connor, R; Ross, A; Shevtsova, N A; Nuding, S C; Segers, L S; Shannon, R; Dick, T E; Dunin-Barkowski, W L; Orem, J M; Solomon, I C; Morris, K F; Lindsey, B G
2008-10-01
A large body of data suggests that the pontine respiratory group (PRG) is involved in respiratory phase-switching and the reconfiguration of the brain stem respiratory network. However, connectivity between the PRG and ventral respiratory column (VRC) in computational models has been largely ad hoc. We developed a network model with PRG-VRC connectivity inferred from coordinated in vivo experiments. Neurons were modeled in the "integrate-and-fire" style; some neurons had pacemaker properties derived from the model of Breen et al. We recapitulated earlier modeling results, including reproduction of activity profiles of different respiratory neurons and motor outputs, and their changes under different conditions (vagotomy, pontine lesions, etc.). The model also reproduced characteristic changes in neuronal and motor patterns observed in vivo during fictive cough and during hypoxia in non-rapid eye movement sleep. Our simulations suggested possible mechanisms for respiratory pattern reorganization during these behaviors. The model predicted that network- and pacemaker-generated rhythms could be co-expressed during the transition from gasping to eupnea, producing a combined "burst-ramp" pattern of phrenic discharges. To test this prediction, phrenic activity and multiple single neuron spike trains were monitored in vagotomized, decerebrate, immobilized, thoracotomized, and artificially ventilated cats during hypoxia and recovery. In most experiments, phrenic discharge patterns during recovery from hypoxia were similar to those predicted by the model. We conclude that under certain conditions, e.g., during recovery from severe brain hypoxia, components of a distributed network activity present during eupnea can be co-expressed with gasp patterns generated by a distinct, functionally "simplified" mechanism.
Kerr, Robert R.; Burkitt, Anthony N.; Thomas, Doreen A.; Gilson, Matthieu; Grayden, David B.
2013-01-01
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem. PMID:23408878
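To make the delay-dependence concrete, here is a small sketch of an additive STDP update in which the effective timing at the synapse includes an axonal conduction delay. The time constants, amplitudes, and spike trains are illustrative and do not reproduce the authors' full recurrent-network analysis.

```python
import numpy as np

# Additive STDP window: potentiation if the (delayed) pre spike precedes the post spike
A_plus, A_minus = 0.01, 0.012        # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0     # ms (assumed)

def stdp_dw(pre_times, post_times, axonal_delay):
    """Total additive weight change over all pre/post pairings, with pre spikes
    arriving at the synapse after an axonal conduction delay."""
    dw = 0.0
    for t_pre in pre_times + axonal_delay:          # arrival times at the synapse
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += A_plus * np.exp(-dt / tau_plus)
            elif dt < 0:
                dw -= A_minus * np.exp(dt / tau_minus)
    return dw

# Oscillatory (periodic) firing at 10 Hz: both neurons phase-locked to the same input
period = 100.0                                      # ms
pre = np.arange(0.0, 1000.0, period)
post = pre + 5.0                                    # post fires 5 ms after pre each cycle

for delay in (1.0, 4.0, 8.0, 20.0):
    print(f"axonal delay {delay:4.1f} ms -> total dw = {stdp_dw(pre, post, delay):+.4f}")
```

The sign flip of the total weight change across delays illustrates why, under oscillatory drive, only connections with particular axonal delays end up potentiated.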
Tessadori, Jacopo; Ghirardi, Mirella
2015-01-01
Brain functions are strictly dependent on neural connections formed during development and modified during life. The cellular and molecular mechanisms underlying synaptogenesis and plastic changes involved in learning and memory have been analyzed in detail in simple animals such as invertebrates and in circuits of mammalian brains mainly by intracellular recordings of neuronal activity. In the last decades, the evolution of techniques such as microelectrode arrays (MEAs) that allow simultaneous, long-lasting, noninvasive, extracellular recordings from a large number of neurons has proven very useful to study long-term processes in neuronal networks in vivo and in vitro. In this work, we start off by briefly reviewing the microelectrode array technology and the optimization of the coupling between neurons and microtransducers to detect subthreshold synaptic signals. Then, we report MEA studies of circuit formation and activity in invertebrate models such as Lymnaea, Aplysia, and Helix. In the following sections, we analyze plasticity and connectivity in cultures of mammalian dissociated neurons, focusing on spontaneous activity and electrical stimulation. We conclude by discussing plasticity in closed-loop experiments. PMID:25866681
Synchronization in neural nets
NASA Technical Reports Server (NTRS)
Vidal, Jacques J.; Haggerty, John
1988-01-01
The paper presents an artificial neural network concept (the Synchronizable Oscillator Networks) where the instants of individual firings in the form of point processes constitute the only form of information transmitted between joining neurons. In the model, neurons fire spontaneously and regularly in the absence of perturbation. When interaction is present, the scheduled firings are advanced or delayed by the firing of neighboring neurons. Networks of such neurons become global oscillators which exhibit multiple synchronizing attractors. From arbitrary initial states, energy minimization learning procedures can make the network converge to oscillatory modes that satisfy multi-dimensional constraints. Such networks can directly represent routing and scheduling problems that consist of ordering sequences of events.
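A toy sketch of the pulse-coupled idea in the paper above: each oscillator fires regularly on its own, and a neighbour's firing advances or delays its next scheduled firing through a phase-response term. The coupling form, network, and parameters are illustrative assumptions rather than the original Synchronizable Oscillator Network equations.

```python
import numpy as np

rng = np.random.default_rng(5)

n, steps, dt, eps = 10, 4000, 0.01, 0.05
period = 1.0 + 0.02 * rng.standard_normal(n)       # slightly detuned natural periods
phase = rng.random(n)                              # unit fires when its phase reaches 1, then wraps
adj = (rng.random((n, n)) < 0.5).astype(float)     # who listens to whom
np.fill_diagonal(adj, 0.0)

def order_parameter(p):
    return np.abs(np.mean(np.exp(2j * np.pi * p)))  # 1 = perfect synchrony

r_start = order_parameter(phase)
for _ in range(steps):
    phase += dt / period                           # spontaneous, regular advance
    fired = phase >= 1.0
    phase[fired] -= 1.0
    if fired.any():
        # each firing neighbour delays units early in their cycle and advances units
        # late in their cycle (a simple sinusoidal phase-response curve)
        phase -= eps * (adj @ fired.astype(float)) * np.sin(2 * np.pi * phase)
        phase %= 1.0

print(f"synchrony: start {r_start:.2f} -> end {order_parameter(phase):.2f}")
```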
Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi
2012-10-01
We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the output of these simple-cell-like neurons is input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators.
Xu, Kesheng; Maidana, Jean Paul; Castro, Samy; Orio, Patricio
2018-05-30
Chaotic dynamics has been observed in neurons and neural networks, both in experimental data and in numerical simulations. Theoretical studies have proposed an underlying role of chaos in neural systems. Nevertheless, whether chaotic neural oscillators make a significant contribution to network behaviour and whether the dynamical richness of neural networks is sensitive to the dynamics of isolated neurons still remain open questions. We investigated synchronization transitions in heterogeneous neural networks of neurons connected by electrical coupling in a small-world topology. The nodes in our model are oscillatory neurons that - when isolated - can exhibit either chaotic or non-chaotic behaviour, depending on conductance parameters. We found that the heterogeneity of firing rates and firing patterns makes a greater contribution than chaos to the steepness of the synchronization transition curve. We also show that the chaotic dynamics of the isolated neurons does not always make a visible difference in the transition to full synchrony. Moreover, macroscopic chaos is observed regardless of the dynamical nature of the neurons. However, performing a Functional Connectivity Dynamics analysis, we show that chaotic nodes can promote what is known as multi-stable behaviour, where the network dynamically switches between a number of different semi-synchronized, metastable states.
Biological conservation law as an emerging functionality in dynamical neuronal networks.
Podobnik, Boris; Jusup, Marko; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M; Stanley, H Eugene
2017-11-07
Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law-the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective.
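A highly simplified sketch of the stylized dynamics described above: neurons fire either at random or when enough neighbours fired on the previous step, and each synapse strengthens when both of its neurons are excited and weakens otherwise, while the total synaptic strength is tracked over time. The update rules and parameter values are illustrative stand-ins for the paper's model, not its exact equations.

```python
import numpy as np

rng = np.random.default_rng(6)

n, steps = 200, 2000
p_spont, threshold = 0.01, 1.0          # spontaneous firing probability; input needed to fire
gain, decay = 0.02, 0.001               # synaptic strengthening / weakening rates (assumed)

adj = (rng.random((n, n)) < 0.05).astype(float)   # fixed wiring (who can talk to whom)
np.fill_diagonal(adj, 0.0)
w = adj * rng.uniform(0.1, 0.3, (n, n))           # initial synaptic strengths

fired = rng.random(n) < p_spont
total_strength = []

for _ in range(steps):
    drive = w @ fired.astype(float)               # input from neighbours that just fired
    fired = (drive >= threshold) | (rng.random(n) < p_spont)

    # Hebbian-like rule: strengthen synapses between co-excited neurons, weaken the rest
    co_excited = np.outer(fired, fired) * adj
    w += gain * co_excited - decay * w
    w = np.clip(w, 0.0, None)

    total_strength.append(w.sum())

print("total synaptic strength: start %.1f, end %.1f" %
      (total_strength[0], total_strength[-1]))
```

Whether the total strength settles near a conserved value depends on where the parameters place the network relative to its critical point, which is the regime the paper analyses.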
Aravamuthan, Bhooma R; Shoykhet, Michael
2015-10-01
The basal ganglia are vulnerable to injury during cardiac arrest. Movement disorders are a common morbidity in survivors. Yet, neuronal motor network changes post-arrest remain poorly understood. We compared function of the motor network in adult rats that, during postnatal week 3, underwent 9.5 min of asphyxial cardiac arrest (n = 9) or sham intervention (n = 8). Six months after injury, we simultaneously recorded local field potentials (LFP) from the primary motor cortex (MCx) and single neuron firing and LFP from the rat entopeduncular nucleus (EPN), which corresponds to the primate globus pallidus pars interna. Data were analyzed for firing rates, power, and coherence between MCx and EPN spike and LFP activity. Cardiac arrest survivors display chronic motor deficits. EPN firing rate is lower in cardiac arrest survivors (19.5 ± 2.4 Hz) compared with controls (27.4 ± 2.7 Hz; P < 0.05). Cardiac arrest survivors also demonstrate greater coherence between EPN single neurons and MCx LFP (3-100 Hz; P < 0.001). This increased coherence indicates abnormal synchrony in the neuronal motor network after cardiac arrest. Increased motor network synchrony is thought to be antikinetic in primary movement disorders. Characterization of motor network synchrony after cardiac arrest may help guide management of post-hypoxic movement disorders.
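A brief sketch of the kind of spike-LFP coherence computation reported above, assuming a binned spike train and an LFP sampled at the same rate; it uses scipy.signal.coherence with illustrative parameters and synthetic data, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(7)

fs, dur = 1000, 60.0                         # sampling rate (Hz) and duration (s), assumed
t = np.arange(0, dur, 1 / fs)
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)   # MCx-like LFP (toy)

# EPN-like unit whose firing probability is modulated by the LFP rhythm
rate_per_bin = 20 * (1 + 0.6 * np.sin(2 * np.pi * 8 * t)) / fs
spikes = (rng.random(t.size) < rate_per_bin).astype(float)            # binned spike train

f, coh = coherence(spikes, lfp, fs=fs, nperseg=2048)
band = (f >= 3) & (f <= 100)
print("peak spike-LFP coherence %.2f at %.1f Hz" % (coh[band].max(), f[band][coh[band].argmax()]))
```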
Gleichmann, Marc; Zhang, Yongqing; Wood, William H.; Becker, Kevin G.; Mughal, Mohamed R.; Pazin, Michael J.; van Praag, Henriette; Kobilo, Tali; Zonderman, Alan B.; Troncoso, Juan C.; Markesbery, William R.; Mattson, Mark P.
2010-01-01
Activity-dependent modulation of neuronal gene expression promotes neuronal survival and plasticity, and neuronal network activity is perturbed in aging and Alzheimer’s disease (AD). Here we show that cerebral cortical neurons respond to chronic suppression of excitability by downregulating the expression of genes and their encoded proteins involved in inhibitory transmission (GABAergic and somatostatin) and Ca2+ signaling; alterations in pathways involved in lipid metabolism and energy management are also features of silenced neuronal networks. A molecular fingerprint strikingly similar to that of diminished network activity occurs in the human brain during aging and in AD, and opposite changes occur in response to activation of N-methyl-D-aspartate (NMDA) and brain-derived neurotrophic factor (BDNF) receptors in cultured cortical neurons and in mice in response to an enriched environment or electroconvulsive shock. Our findings suggest that reduced inhibitory neurotransmission during aging and in AD may be the result of compensatory responses that, paradoxically, render the neurons vulnerable to Ca2+-mediated degeneration. PMID:20947216
Stochastic multiresonance in coupled excitable FHN neurons
NASA Astrophysics Data System (ADS)
Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua
2018-04-01
In this paper, the effects of noise on Watts-Strogatz small-world neuronal networks stimulated by a subthreshold signal are investigated. Numerical simulations surprisingly reveal that there exist several optimal noise intensities at which the subthreshold signal can be detected efficiently, indicating the occurrence of stochastic multiresonance in the studied neuronal networks. Moreover, the occurrence of stochastic multiresonance is closely related to the period of the subthreshold signal, Te, and the noise-induced mean period of the neuronal networks, T0. In detail, we find that noise can induce the neuronal networks to exhibit stochastic resonance M times if Te is not too large and falls into the interval (M × T0, (M + 1) × T0), with M a positive integer. Since subthreshold signal detection is highly relevant in real neuronal systems, the results obtained here have important implications for detecting subthreshold signals and propagating neuronal information in neuronal systems.
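A single-neuron sketch of the ingredients discussed above: a FitzHugh-Nagumo unit driven by a subthreshold periodic signal plus noise, with the spike count used as a crude detection measure across noise intensities. The equations are the standard FHN form, but the parameter values and the detection measure are illustrative assumptions, and the network aspect of the study is omitted.

```python
import numpy as np

def fhn_spike_count(noise_sigma, amp=0.05, f_sig=0.1, T=2000.0, dt=0.05, seed=0):
    """Euler-Maruyama simulation of a noisy FitzHugh-Nagumo neuron with a
    subthreshold periodic drive; returns the number of spikes (upward crossings)."""
    rng = np.random.default_rng(seed)
    a, b, eps, I0 = 0.7, 0.8, 0.08, 0.25         # excitable regime (assumed parameters)
    v, w = -1.2, -0.6
    spikes, above = 0, False
    for k in range(int(T / dt)):
        t = k * dt
        drive = I0 + amp * np.sin(2 * np.pi * f_sig * t)   # subthreshold on its own
        dv = v - v**3 / 3 - w + drive
        dw = eps * (v + a - b * w)
        v += dt * dv + noise_sigma * np.sqrt(dt) * rng.standard_normal()
        w += dt * dw
        if v > 1.0 and not above:                # count upward threshold crossings once
            spikes, above = spikes + 1, True
        elif v < 0.0:
            above = False
    return spikes

for sigma in (0.01, 0.05, 0.1, 0.2, 0.4):
    print(f"noise {sigma:4.2f}: {fhn_spike_count(sigma):4d} spikes")
```

In a full multiresonance analysis one would replace the raw spike count with a measure of how well the spike train encodes the signal frequency, evaluated over a finer grid of noise intensities and over the whole small-world network.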
Planar patch clamp for neuronal networks--considerations and future perspectives.
Bosca, Alessandro; Martina, Marzia; Py, Christophe
2014-01-01
The patch-clamp technique is generally accepted as the gold standard for studying ion channel activity allowing investigators to either "clamp" membrane voltage and directly measure transmembrane currents through ion channels, or to passively monitor spontaneously occurring intracellular voltage oscillations. However, this resulting high information content comes at a price. The technique is labor-intensive and requires highly trained personnel and expensive equipment. This seriously limits its application as an interrogation tool for drug development. Patch-clamp chips have been developed in the last decade to overcome the tedious manipulations associated with the use of glass pipettes in conventional patch-clamp experiments. In this chapter, we describe some of the main materials and fabrication protocols that have been developed to date for the production of patch-clamp chips. We also present the concept of a patch-clamp chip array providing high resolution patch-clamp recordings from individual cells at multiple sites in a network of communicating neurons. On this chip, the neurons are aligned with the aperture-probes using chemical patterning. In the discussion we review the potential use of this technology for pharmaceutical assays, neuronal physiology and synaptic plasticity studies.
The circadian rhythm induced by the heterogeneous network structure of the suprachiasmatic nucleus
NASA Astrophysics Data System (ADS)
Gu, Changgui; Yang, Huijie
2016-05-01
In mammals, the master clock is located in the suprachiasmatic nucleus (SCN), which is composed of about 20 000 nonidentical neuronal oscillators expressing different intrinsic periods. These neurons are coupled through neurotransmitters to form a network consisting of two subgroups, i.e., a ventrolateral (VL) subgroup and a dorsomedial (DM) subgroup. The VL contains about 25% of SCN neurons, which receive photic input from the retina, and the DM comprises the remaining 75% of SCN neurons, which are coupled to the VL. The synapses from the VL to the DM are evidently denser than those from the DM to the VL, so that the VL dominates the DM. Therefore, the SCN is a heterogeneous network in which the neurons of the VL are linked with a large number of SCN neurons. In the present study, we mimicked the SCN network with the Goodwin model, considering four types of networks: an all-to-all network, a Newman-Watts (NW) small-world network, an Erdös-Rényi (ER) random network, and a Barabási-Albert (BA) scale-free network. We found that a circadian rhythm was induced in the BA, ER, and NW networks, whereas it was absent in the all-to-all network with weak cellular coupling; the amplitude of the circadian rhythm was largest in the BA network, which is the most heterogeneous in its structure. Our finding provides an alternative explanation for the induction or enhancement of circadian rhythm by the heterogeneity of the network structure.
Artificial astrocytes improve neural network performance.
Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso
2011-04-19
Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analyses of the performance of NNs with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.
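To make the neuron-glia idea concrete, here is a minimal Python sketch of one plausible way to attach "astrocyte" units to an artificial layer: each astrocyte tracks its neuron's recent activations and transiently scales that neuron's incoming weights up after persistent activity and down after persistent silence. The class name, window length, and modulation factors are hypothetical illustrations, not the rule or parameters used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuronGliaLayer:
    """Dense layer of threshold units, each paired with an 'astrocyte' that counts the
    unit's recent activations and transiently rescales its incoming weights
    (hypothetical rule for illustration)."""

    def __init__(self, n_in, n_out, window=4, boost=1.25, decay=0.8):
        self.W = rng.standard_normal((n_in, n_out)) * 0.5
        self.window = window        # how many consecutive steps the astrocyte remembers
        self.boost = boost          # scaling applied after persistent activity
        self.decay = decay          # scaling applied after persistent silence
        self.history = np.zeros((window, n_out))

    def forward(self, x, step):
        y = (x @ self.W > 0.0).astype(float)   # binary threshold units
        self.history[step % self.window] = y
        active = self.history.sum(axis=0)
        # Astrocyte rule: persistent activity potentiates, persistent silence depresses.
        scale = np.where(active == self.window, self.boost,
                         np.where(active == 0, self.decay, 1.0))
        self.W *= scale                        # column-wise modulation of incoming weights
        return y

layer = NeuronGliaLayer(n_in=8, n_out=4)
for step in range(20):
    pattern = rng.random(8)
    print(step, layer.forward(pattern, step))
```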
De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego
2013-01-01
Gene expression profiles can be used to infer previously unknown transcriptional regulatory interaction among thousands of genes, via systems biology ‘reverse engineering’ approaches. We ‘reverse engineered’ an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression (‘hubs’). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central ‘hub’ of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neurons marker Vglut2 and a strong down-regulation of the GABAergic neurons marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13, involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells possibly by epigenetic-mediated transcriptional regulation. PMID:23180766
A Hox regulatory network establishes motor neuron pool identity and target-muscle connectivity.
Dasen, Jeremy S; Tice, Bonnie C; Brenner-Morton, Susan; Jessell, Thomas M
2005-11-04
Spinal motor neurons acquire specialized "pool" identities that determine their ability to form selective connections with target muscles in the limb, but the molecular basis of this striking example of neuronal specificity has remained unclear. We show here that a Hox transcriptional regulatory network specifies motor neuron pool identity and connectivity. Two interdependent sets of Hox regulatory interactions operate within motor neurons, one assigning rostrocaudal motor pool position and a second directing motor pool diversity at a single segmental level. This Hox regulatory network directs the downstream transcriptional identity of motor neuron pools and defines the pattern of target-muscle connectivity.
Iida, Shoko; Shimba, Kenta; Sakai, Koji; Kotani, Kiyoshi; Jimbo, Yasuhiko
2018-06-18
The balance between glutamate-mediated excitation and GABA-mediated inhibition is critical to cortical functioning. However, the contribution of network structure, composed of both neuron types, to cortical functioning has not been elucidated. We aimed to evaluate the relationship between network structure and functional activity patterns in vitro. We used mouse induced pluripotent stem cells (iPSCs) to construct three types of neuronal populations: excitatory-rich (Exc), inhibitory-rich (Inh), and control (Cont). We then analyzed the activity patterns of these neuronal populations using microelectrode arrays (MEAs). Inhibitory synaptic densities differed between the three types of iPSC-derived neuronal populations, and the neurons showed spontaneously synchronized bursting activity with functional maturation over one month. Moreover, different firing patterns were observed between the three populations: Exc demonstrated the highest firing rates, including frequent, long, and dominant bursts, whereas Inh demonstrated the lowest firing rates and the least dominant bursts. Synchronized bursts were enhanced by disinhibition via GABAA receptor blockade. The present study, using iPSC-derived neurons and MEAs, shows for the first time that synchronized bursting of cortical networks in vitro depends on the network structure formed by excitatory and inhibitory neurons.
A Spiking Neural Network in sEMG Feature Extraction.
Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor
2015-11-03
We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons, in which a spiking neuron layer with mutual inhibition serves as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, we estimated the algorithm's sensitivity to the characteristics of different sEMG acquisition systems; the results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control.
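The feature-extraction idea can be sketched as follows: one leaky integrate-and-fire neuron per sEMG channel, coupled by mutual (lateral) inhibition, so that the neuron receiving the strongest rectified envelope wins and its spike count acts as the feature. This Python sketch uses synthetic envelopes and generic constants; it is not the authors' network or parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "sEMG" envelopes: 4 channels, 2 s at 1 kHz, with one channel dominant per segment
fs, T, n_ch = 1000, 2.0, 4
t = np.arange(0, T, 1 / fs)
emg = 0.2 * np.abs(rng.standard_normal((t.size, n_ch)))
emg[: t.size // 2, 0] += 1.0          # channel 0 dominates the first second
emg[t.size // 2 :, 2] += 1.0          # channel 2 dominates the second second

# One LIF neuron per channel, coupled by mutual (lateral) inhibition
tau, v_th, v_reset = 0.02, 1.0, 0.0   # s, threshold, reset (illustrative)
w_in, w_inh = 3.0, 0.6                # input gain and inhibition strength (illustrative)
dt = 1 / fs
v = np.zeros(n_ch)
spike_counts = np.zeros((2, n_ch))    # spike counts per half of the recording

inhibition = np.zeros(n_ch)
for i, x in enumerate(emg):
    v = v + (-v + w_in * x - inhibition) * (dt / tau)
    spiked = v >= v_th
    v[spiked] = v_reset
    # each spike transiently inhibits all *other* neurons
    inhibition = w_inh * (spiked.sum() - spiked.astype(float))
    spike_counts[i * 2 // t.size] += spiked

print("spike counts (first half, second half):")
print(spike_counts)   # the dominant channel's neuron should win in each half
```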
Parihar, Abhinav; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit
2018-01-01
Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Boltzmann machines and other stochastic neural networks have been shown to outperform their deterministic counterparts by allowing dynamical systems to escape local energy minima. Electronic implementation of such stochastic networks is currently limited to the addition of algorithmic noise to digital machines, which is inherently inefficient, although recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks we need experimental evidence of devices with measurable and controllable stochasticity, complemented by the development of reliable statistical models of such observed stochasticity. Current research literature has sparse evidence of the former and a complete lack of the latter. This motivates the current article, in which we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on an electrically induced phase transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise linear FitzHugh-Nagumo (FHN) neuron and incorporates all characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using Vanadium Dioxide (VO2) based IMT neurons which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources - thermal noise and threshold fluctuations, which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary resulting in transfer curves that closely match experiments. The moments of interspike intervals are calculated analytically by extending the first-passage-time (FPT) models for the OU process to include a fluctuating boundary. We find that the coefficient of variation of interspike intervals depends on the relative proportion of thermal and threshold noise, where threshold noise is the dominant source in the current experimental demonstrations. As one of the first comprehensive studies of stochastic neuron hardware and its statistical properties, this article should enable efficient implementation of a large class of neuro-mimetic networks and algorithms. PMID:29670508
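A simulation counterpart of the statistical model described above can be sketched in a few lines of Python: an Ornstein-Uhlenbeck membrane potential, a threshold redrawn after each spike to mimic the fluctuating boundary, and the coefficient of variation computed from the collected interspike intervals. All constants are illustrative; the paper's analytical first-passage-time treatment is replaced here by brute-force simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck "membrane potential": dV = -(V - mu)/tau dt + sigma dW
mu, tau, sigma = 0.8, 1.0, 0.2          # drift target, time constant, thermal noise (illustrative)
theta0, theta_sigma = 1.0, 0.05         # mean threshold and threshold fluctuation (illustrative)
dt, n_spikes = 1e-2, 1000

isis = []
v, t, t_last = 0.0, 0.0, 0.0
theta = theta0 + theta_sigma * rng.standard_normal()
while len(isis) < n_spikes:
    v += -(v - mu) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    t += dt
    if v >= theta:                       # first passage across the fluctuating boundary
        isis.append(t - t_last)
        t_last = t
        v = 0.0                          # reset the potential
        theta = theta0 + theta_sigma * rng.standard_normal()   # redraw the boundary

isis = np.array(isis)
print(f"mean ISI = {isis.mean():.3f}, CV = {isis.std() / isis.mean():.3f}")
```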
ERIC Educational Resources Information Center
Tort, Adriano B. L.; Komorowski, Robert; Kopell, Nancy; Eichenbaum, Howard
2011-01-01
The association of specific events with the context in which they occur is a fundamental feature of episodic memory. However, the underlying network mechanisms generating what-where associations are poorly understood. Recently we reported that some hippocampal principal neurons develop representations of specific events occurring in particular…
Selection of Multiarmed Spiral Waves in a Regular Network of Neurons
Hu, Bolin; Ma, Jun; Tang, Jun
2013-01-01
Formation and selection of multiarmed spiral waves due to spontaneous symmetry breaking are investigated in a regular network of Hodgkin-Huxley neurons by changing the excitability and imposing spatial forcing currents on the neurons in the network. The arm number of the multiarmed spiral wave depends on the distribution of spatial forcing currents and the excitability diversity in the network, and the selection criterion for supporting multiarmed spiral waves is discussed. A broken spiral segment is measured by a short polygonal line connecting three adjacent points (controlled nodes), and a double-spiral wave can develop from the spiral segment. A multiarmed spiral wave is formed when a group of double-spiral waves rotate in the same direction in the network. In the numerical studies, a group of controlled nodes is selected and spatial forcing currents are imposed on these nodes; our results show that stable l-arm spiral waves (l = 2, 3, 4, ..., 8) can be induced to occupy the network completely. It is also confirmed that low excitability is critical for inducing multiarmed spiral waves, while high excitability is important for propagating the multiarmed spiral wave outward so that a distinct multiarmed spiral wave can occupy the network completely. Our results confirm that symmetry breaking of the target wave in the media accounts for the emergence of the multiarmed spiral wave, which can develop from a group of single-arm spiral waves under appropriate conditions; this explains the potential formation mechanism of multiarmed spiral waves in the media. PMID:23935966
Ronzitti, Emiliano; Conti, Rossella; Zampini, Valeria; Tanese, Dimitrii; Klapoetke, Nathan; Boyden, Edward S.; Papagiakoumou, Eirini
2017-01-01
Optogenetic neuronal network manipulation promises to unravel a long-standing mystery in neuroscience: how does microcircuit activity relate causally to behavioral and pathological states? The challenge to evoke spikes with high spatial and temporal complexity necessitates further joint development of light-delivery approaches and custom opsins. Two-photon (2P) light-targeting strategies demonstrated in-depth generation of action potentials in photosensitive neurons both in vitro and in vivo, but thus far lack the temporal precision necessary to induce precisely timed spiking events. Here, we show that efficient current integration enabled by 2P holographic amplified laser illumination of Chronos, a highly light-sensitive and fast opsin, can evoke spikes with submillisecond precision and repeated firing up to 100 Hz in brain slices from Swiss male mice. These results pave the way for optogenetic manipulation with the spatial and temporal sophistication necessary to mimic natural microcircuit activity. SIGNIFICANCE STATEMENT To reveal causal links between neuronal activity and behavior, it is necessary to develop experimental strategies to induce spatially and temporally sophisticated perturbation of network microcircuits. Two-photon computer generated holography (2P-CGH) recently demonstrated 3D optogenetic control of selected pools of neurons with single-cell accuracy in depth in the brain. Here, we show that exciting the fast opsin Chronos with amplified laser 2P-CGH enables cellular-resolution targeting with unprecedented temporal control, driving spiking up to 100 Hz with submillisecond onset precision using low laser power densities. This system achieves a unique combination of spatial flexibility and temporal precision needed to pattern optogenetically inputs that mimic natural neuronal network activity patterns. PMID:28972125
Inferring Single Neuron Properties in Conductance Based Balanced Networks
Pool, Román Rossi; Mato, Germán
2011-01-01
Balanced states in large networks are a common hypothesis for explaining the variability of neural activity in cortical systems. In this regime the statistics of the inputs is characterized by static and dynamic fluctuations. The dynamic fluctuations have a Gaussian distribution. Such statistics allow the use of reverse correlation methods, by recording synaptic inputs and the spike trains of ongoing spontaneous activity without any additional input. By using this method, properties of the single neuron dynamics that are masked by the balanced state can be quantified. To show the feasibility of this approach we apply it to large networks of conductance-based neurons. The networks are classified as Type I or Type II according to the bifurcations which neurons of the different populations undergo near the firing onset. We also analyze mixed networks, in which each population has a mixture of different neuronal types. We determine under which conditions the intrinsic noise generated by the network can be used to apply reverse correlation methods. We find that under realistic conditions we can ascertain with low error the types of neurons present in the network. We also find that data from neurons with similar firing rates can be combined to perform covariance analysis. We compare the results of these methods (that do not require any external input) to the standard procedure (that requires the injection of Gaussian noise into a single neuron). We find a good agreement between the two procedures. PMID:22016730
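For readers unfamiliar with the procedure, the following Python sketch illustrates reverse correlation from spontaneous activity: given a recorded input current and the neuron's spike times, the spike-triggered average of the preceding input recovers the feature the neuron responds to. The synthetic data, kernel, and spike-generation rule are placeholders, not the conductance-based networks analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for recorded data: total synaptic input current and spike times
fs = 1000.0                               # Hz
n = 200_000                               # 200 s of "recording"
current = rng.standard_normal(n)          # fluctuating balanced-state input (placeholder)

# Fake neuron: spikes preferentially after a brief depolarizing transient in the input
kernel = np.exp(-np.arange(30) / 10.0)    # the "true" feature we hope to recover
drive = np.convolve(current, kernel, mode="full")[:n]
p_spike = 1.0 / (1.0 + np.exp(-(drive - 4.0)))
spike_idx = np.flatnonzero(rng.random(n) < p_spike * 0.05)

# Reverse correlation: spike-triggered average of the input over a 50 ms window
win = 50
valid = spike_idx[spike_idx >= win]
sta = np.mean([current[i - win:i] for i in valid], axis=0)

print("number of spikes used:", valid.size)
print("STA peak lag (ms before spike):", win - np.argmax(sta))
```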
NASA Astrophysics Data System (ADS)
Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok
2013-09-01
The combined effects of the information transmission delay and the ratio of the electrical and chemical synapses on the synchronization transitions in the hybrid modular neuronal network are investigated in this paper. Numerical results show that the synchronization of neuron activities can be either promoted or destroyed as the information transmission delay increases, irrespective of the probability of electrical synapses in the hybrid-synaptic network. Interestingly, when the number of electrical synapses exceeds a certain level, further increasing their proportion can obviously enhance the spatiotemporal synchronization transitions. Moreover, the coupling strength has a significant effect on the synchronization transition. The dominant synapse type always has a more profound effect on the emergence of synchronous behaviors. Furthermore, the results of the modular neuronal network structures demonstrate that excessive partitioning of the modular network may dramatically impair neuronal synchronization. Considering that information transmission delays are inevitable in intra- and inter-neuronal network communication, the obtained results may have important implications for the exploration of the synchronization mechanism underlying several neural system diseases such as Parkinson's Disease.
Ergodic properties of spiking neuronal networks with delayed interactions
NASA Astrophysics Data System (ADS)
Palmigiano, Agostina; Wolf, Fred
The dynamical stability of neuronal networks, and the possibility of chaotic dynamics in the brain, pose profound questions about the mechanisms underlying perception. Here we advance the tractability of large neuronal networks of exactly solvable neuronal models with delayed pulse-coupled interactions. Pulse-coupled delayed systems with an infinite-dimensional phase space can be studied in equivalent systems of fixed and finite degrees of freedom by introducing a delayer variable for each neuron. A Jacobian of the equivalent system can be analytically obtained, and numerically evaluated. We find that depending on the action potential onset rapidness and the level of heterogeneities, the asynchronous irregular regime characteristic of balanced-state networks loses stability with increasing delays to either a slow synchronous irregular or a fast synchronous irregular state. In networks of neurons with slow action potential onset, the transition to collective oscillations leads to an increase of the exponential rate of divergence of nearby trajectories and of the entropy production rate of the chaotic dynamics. The attractor dimension, instead of increasing linearly with increasing delay as reported in many other studies, decreases until eventually the network reaches full synchrony.
An integrate-and-fire model for synchronized bursting in a network of cultured cortical neurons.
French, D A; Gruenstein, E I
2006-12-01
It has been suggested that spontaneous synchronous neuronal activity is an essential step in the formation of functional networks in the central nervous system. The key features of this type of activity consist of bursts of action potentials with associated spikes of elevated cytoplasmic calcium. These features are also observed in networks of rat cortical neurons that have been formed in culture. Experimental studies of these cultured networks have led to several hypotheses for the mechanisms underlying the observed synchronized oscillations. In this paper, bursting integrate-and-fire type mathematical models for regular spiking (RS) and intrinsic bursting (IB) neurons are introduced and incorporated through a small-world connection scheme into a two-dimensional excitatory network similar to those in the cultured network. This computer model exhibits spontaneous synchronous activity through mechanisms similar to those hypothesized for the cultured experimental networks. Traces of the membrane potential and cytoplasmic calcium from the model closely match those obtained from experiments. We also consider the impact on network behavior of the IB neurons, the geometry and the small world connection scheme.
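A stripped-down Python sketch of this kind of model is given below: leaky integrate-and-fire neurons with noise, excitatory small-world connectivity, short-term synaptic depression to terminate bursts, and a decaying calcium proxy incremented at each spike. The network size, weights, and time constants are assumptions for illustration only and do not reproduce the RS/IB cell types or geometry of the published model.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Excitatory network with small-world connectivity (illustrative stand-in for the 2D culture)
N = 200
A = nx.to_numpy_array(nx.watts_strogatz_graph(N, 8, 0.1, seed=1))

dt = 1.0                       # ms
tau_m, v_th, v_reset = 20.0, 1.0, 0.0
I0 = 0.8                       # subthreshold drive the membrane relaxes toward (assumed)
w_syn = 0.12                   # recurrent excitatory weight (assumed)
tau_rec, use = 500.0, 0.3      # synaptic resource recovery time and utilization (assumed)
tau_ca = 200.0                 # ms, decay of the cytoplasmic-calcium proxy
noise = 0.06

v = rng.random(N) * v_th
x = np.ones(N)                 # fraction of available synaptic resources per presynaptic neuron
ca = np.zeros(N)
pop_spikes, mean_ca = [], []

for step in range(20_000):     # 20 s of simulated time
    spiked = v >= v_th
    v[spiked] = v_reset
    syn_in = w_syn * (A @ (spiked * use * x))     # depressing excitatory input
    x += dt * (1 - x) / tau_rec
    x[spiked] *= (1 - use)
    v += dt * (I0 - v) / tau_m + syn_in + noise * np.sqrt(dt) * rng.standard_normal(N)
    ca += dt * (-ca / tau_ca) + 0.1 * spiked
    pop_spikes.append(int(spiked.sum()))
    mean_ca.append(ca.mean())

print("peak population spike count in one ms:", max(pop_spikes))
print("peak of the network-averaged calcium proxy:", round(max(mean_ca), 3))
```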
Single-cell axotomy of cultured hippocampal neurons integrated in neuronal circuits.
Gomis-Rüth, Susana; Stiess, Michael; Wierenga, Corette J; Meyn, Liane; Bradke, Frank
2014-05-01
An understanding of the molecular mechanisms of axon regeneration after injury is key for the development of potential therapies. Single-cell axotomy of dissociated neurons enables the study of the intrinsic regenerative capacities of injured axons. This protocol describes how to perform single-cell axotomy on dissociated hippocampal neurons containing synapses. Furthermore, to axotomize hippocampal neurons integrated in neuronal circuits, we describe how to set up coculture with a few fluorescently labeled neurons. This approach allows axotomy of single cells in a complex neuronal network and the observation of morphological and molecular changes during axon regeneration. Thus, single-cell axotomy of mature neurons is a valuable tool for gaining insights into cell intrinsic axon regeneration and the plasticity of neuronal polarity of mature neurons. Dissociation of the hippocampus and plating of hippocampal neurons takes ∼2 h. Neurons are then left to grow for 2 weeks, during which time they integrate into neuronal circuits. Subsequent axotomy takes 10 min per neuron and further imaging takes 10 min per neuron.
NASA Astrophysics Data System (ADS)
Wang, Qingyun; Zhang, Honghui; Chen, Guanrong
2012-12-01
We study the effect of a heterogeneous neuron and information transmission delay on stochastic resonance in scale-free neuronal networks. For this purpose, we introduce the heterogeneity at the neuron with the highest degree. It is shown that in the absence of delay, an intermediate noise level can optimally assist spike firing of the collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate values of the heterogeneity parameter αh. Maxima of the stochastic resonance measure are enhanced as αh increases, which implies that the heterogeneity can improve stochastic resonance. However, once αh exceeds a certain large value, no obvious stochastic resonance can be observed. If information transmission delay is introduced into the neuronal networks, stochastic resonance is dramatically affected. In particular, a suitably tuned information transmission delay can induce multiple stochastic resonance, manifested as well-expressed maxima in the stochastic resonance measure appearing at every multiple of half the subthreshold stimulus period. Furthermore, we observe that stochastic resonance at odd multiples of half the subthreshold stimulus period is subharmonic, as opposed to the case of even multiples. More interestingly, multiple stochastic resonance can also be improved by a suitable heterogeneous neuron. The presented results provide insight into the effects of heterogeneous neurons and information transmission delay on realistic neuronal networks.
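The stochastic resonance measure referred to above is commonly computed as the Fourier coefficient of the system's output at the subthreshold driving frequency, evaluated across noise intensities. The Python sketch below applies that measure to a generic noisy double-well element rather than the delayed scale-free network of the paper, so the constants and the single resonance peak it produces are purely illustrative.

```python
import numpy as np

def resonance_measure(D, A=0.1, period=50.0, dt=0.01, T=2000.0, seed=0):
    """Fourier coefficient Q of x(t) at the driving frequency for a noisy double-well unit.

    A stand-in for the network-averaged firing activity used in such studies;
    the element and all constants are illustrative."""
    rng = np.random.default_rng(seed)
    omega = 2 * np.pi / period
    x, qs, qc = -1.0, 0.0, 0.0
    for i in range(int(T / dt)):
        t = i * dt
        x += dt * (x - x**3 + A * np.sin(omega * t)) + np.sqrt(2 * D * dt) * rng.standard_normal()
        qs += x * np.sin(omega * t) * dt
        qc += x * np.cos(omega * t) * dt
    return 2.0 / T * np.hypot(qs, qc)

# Q typically rises to a maximum at an intermediate noise intensity and falls again
for D in (0.05, 0.1, 0.2, 0.4, 0.8):
    print(f"D = {D:>4}: Q = {resonance_measure(D):.3f}")
```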
Boulanger-Weill, Jonathan; Candat, Virginie; Jouary, Adrien; Romano, Sebastián A; Pérez-Schuster, Verónica; Sumbre, Germán
2017-06-19
From development up to adulthood, the vertebrate brain is continuously supplied with newborn neurons that integrate into established mature circuits. However, how this process is coordinated during development remains unclear. Using two-photon imaging, GCaMP5 transgenic zebrafish larvae, and sparse electroporation in the larva's optic tectum, we monitored spontaneous and induced activity of large neuronal populations containing newborn and functionally mature neurons. We observed that the maturation of newborn neurons is a 4-day process. Initially, newborn neurons showed undeveloped dendritic arbors, no neurotransmitter identity, and were unresponsive to visual stimulation, although they displayed spontaneous calcium transients. Later on, newborn-labeled neurons began to respond to visual stimuli but in a very variable manner. At the end of the maturation period, newborn-labeled neurons exhibited visual tuning curves (spatial receptive fields and direction selectivity) and spontaneous correlated activity with neighboring functionally mature neurons. At this developmental stage, newborn-labeled neurons presented complex dendritic arbors and neurotransmitter identity (excitatory or inhibitory). Removal of retinal inputs significantly perturbed the integration of newborn neurons into the functionally mature tectal network. Our results provide a comprehensive description of the maturation of newborn neurons during development and shed light on potential mechanisms underlying their integration into a functionally mature neuronal circuit.
Ebner, Marc; Hameroff, Stuart
2011-01-01
Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of synchrony, within which the contents of perception are represented and contained. This mobile zone can be viewed as a model of the neural correlate of consciousness in the brain. PMID:22046178
McDonnell, Mark D.; Ward, Lawrence M.
2014-01-01
Directed random graph models are frequently used successfully in modeling the population dynamics of networks of cortical neurons connected by chemical synapses. Experimental results consistently reveal that neuronal network topology is complex, however, in the sense that it differs statistically from a random network, and differs for classes of neurons that are physiologically different. This suggests that complex network models whose subnetworks have distinct topological structure may be a useful, and more biologically realistic, alternative to random networks. Here we demonstrate that the balanced excitation and inhibition frequently observed in small cortical regions can transiently disappear in otherwise standard neuronal-scale models of fluctuation-driven dynamics, solely because the random network topology was replaced by a complex clustered one, whilst not changing the in-degree of any neurons. In this network, a small subset of cells whose inhibition comes only from outside their local cluster are the cause of bistable population dynamics, where different clusters of these cells irregularly switch back and forth from a sparsely firing state to a highly active state. Transitions to the highly active state occur when a cluster of these cells spikes sufficiently often to cause strong unbalanced positive feedback to each other. Transitions back to the sparsely firing state rely on occasional large fluctuations in the amount of non-local inhibition received. Neurons in the model are homogeneous in their intrinsic dynamics and in-degrees, but differ in the abundance of various directed feedback motifs in which they participate. Our findings suggest that (i) models and simulations should take into account complex structure that varies for neuron and synapse classes; (ii) differences in the dynamics of neurons with similar intrinsic properties may be caused by their membership in distinctive local networks; (iii) it is important to identify neurons that share physiological properties and location, but differ in their connectivity. PMID:24743633
Effect of dilution in asymmetric recurrent neural networks.
Folli, Viola; Gosti, Giorgio; Leonetti, Marco; Ruocco, Giancarlo
2018-04-16
We study with numerical simulation the possible limit behaviors of synchronous discrete-time deterministic recurrent neural networks composed of N binary neurons as a function of a network's level of dilution and asymmetry. The network dilution measures the fraction of neuron couples that are connected, and the network asymmetry measures to what extent the underlying connectivity matrix is asymmetric. For each given neural network, we study the dynamical evolution of all the different initial conditions, thus characterizing the full dynamical landscape without imposing any learning rule. Because of the deterministic dynamics, each trajectory converges to an attractor, which can be either a fixed point or a limit cycle. These attractors form the set of all the possible limit behaviors of the neural network. For each network we then determine the convergence times, the limit cycles' lengths, the number of attractors, and the sizes of the attractors' basins. We show that there are two network structures that maximize the number of possible limit behaviors. The first optimal network structure is fully connected and symmetric. In contrast, the second optimal network structure is highly sparse and asymmetric. The latter optimum is similar to what is observed in various biological neuronal circuits. These observations lead us to hypothesize that, independently of any given learning model, an efficient and effective biological network that stores a number of limit behaviors close to its maximum capacity tends to develop a connectivity structure similar to one of the optimal networks we found.
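The exhaustive procedure described above can be reproduced at toy scale: build a random coupling matrix with a chosen connection density and degree of symmetry, iterate the synchronous deterministic dynamics from every initial state of a small binary network, and collect the distinct fixed points and limit cycles. The construction of the coupling matrix in the Python sketch below is a simplified assumption, not the exact dilution/asymmetry parameterization of the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def random_coupling(N, connect_prob, symmetry):
    """Coupling matrix with a given connection density and degree of symmetry.
    symmetry=1 gives J = J.T, symmetry=0 gives an unconstrained random J (illustrative)."""
    G = rng.standard_normal((N, N))
    J = symmetry * (G + G.T) / np.sqrt(2) + (1 - symmetry) * G
    mask = rng.random((N, N)) < connect_prob
    mask = np.triu(mask, 1)
    mask = mask | mask.T                  # dilute neuron couples, not single directed links
    np.fill_diagonal(J, 0.0)
    return J * mask

def count_attractors(J):
    """Run synchronous deterministic dynamics from every initial state of a small network
    and collect the distinct limit cycles / fixed points reached."""
    N = J.shape[0]
    attractors = set()
    for bits in product((-1, 1), repeat=N):
        s = np.array(bits)
        seen = {}
        while tuple(s) not in seen:
            seen[tuple(s)] = len(seen)
            s = np.where(J @ s >= 0, 1, -1)
        start = seen[tuple(s)]
        cycle = tuple(sorted(k for k, v in seen.items() if v >= start))
        attractors.add(cycle)
    return attractors

N = 8
for symmetry in (1.0, 0.0):
    for connect_prob in (1.0, 0.3):
        att = count_attractors(random_coupling(N, connect_prob, symmetry))
        print(f"symmetry={symmetry}, density={connect_prob}: {len(att)} attractors")
```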
A synaptic organizing principle for cortical neuronal groups
Perin, Rodrigo; Berger, Thomas K.; Markram, Henry
2011-01-01
Neuronal circuitry is often considered a clean slate that can be dynamically and arbitrarily molded by experience. However, when we investigated synaptic connectivity in groups of pyramidal neurons in the neocortex, we found that both connectivity and synaptic weights were surprisingly predictable. Synaptic weights follow very closely the number of connections in a group of neurons, saturating after only 20% of possible connections are formed between neurons in a group. When we examined the network topology of connectivity between neurons, we found that the neurons cluster into small world networks that are not scale-free, with less than 2 degrees of separation. We found a simple clustering rule where connectivity is directly proportional to the number of common neighbors, which accounts for these small world networks and accurately predicts the connection probability between any two neurons. This pyramidal neuron network clusters into multiple groups of a few dozen neurons each. The neurons composing each group are surprisingly distributed, typically more than 100 μm apart, allowing for multiple groups to be interlaced in the same space. In summary, we discovered a synaptic organizing principle that groups neurons in a manner that is common across animals and hence, independent of individual experiences. We speculate that these elementary neuronal groups are prescribed Lego-like building blocks of perception and that acquired memory relies more on combining these elementary assemblies into higher-order constructs. PMID:21383177
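The common-neighbour statistic at the heart of this clustering rule is easy to compute from an adjacency matrix, as in the Python sketch below: count, for every pair of neurons, the cells connected to both, and tabulate the empirical connection probability against that count. The toy random matrix used here will show a roughly flat relationship; it only illustrates the analysis one would apply to recorded connectivity data, in which the paper reports probability rising with the number of common neighbours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy directed connectivity among N pyramidal neurons (placeholder for recorded data)
N, p = 60, 0.1
A = (rng.random((N, N)) < p).astype(int)
np.fill_diagonal(A, 0)

# A "common neighbour" here is any cell connected to both members of a pair, in either direction.
und = ((A + A.T) > 0).astype(int)            # undirected "is connected" relation
common = und @ und                            # common[i, j] = number of shared neighbours

for c in range(common.max() + 1):
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N) if common[i, j] == c]
    if not pairs:
        continue
    p_conn = np.mean([und[i, j] for (i, j) in pairs])
    print(f"{c} common neighbours: P(connected) = {p_conn:.2f} over {len(pairs)} pairs")
```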
Synchronization in a non-uniform network of excitatory spiking neurons
NASA Astrophysics Data System (ADS)
Echeveste, Rodrigo; Gros, Claudius
Spontaneous synchronization of pulse-coupled elements is ubiquitous in nature and seems to be of vital importance for life. Networks of pacemaker cells in the heart, extended populations of Southeast Asian fireflies, and neuronal oscillations in cortical networks are examples of this. In the present work, a rich repertoire of dynamical states with different degrees of synchronization is found in a network of excitatory-only spiking neurons connected in a non-uniform fashion. In particular, uncorrelated and partially correlated states are found without the need for inhibitory neurons or external currents. The phase transitions between these states, as well as the robustness, stability, and response of the network to external stimulus, are studied.
Sensitivity of feedforward neural networks to weight errors
NASA Technical Reports Server (NTRS)
Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney
1990-01-01
An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
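A quick empirical check of this kind of sensitivity result can be run in a few lines of Python: build random multilayer networks of threshold-logic (Adaline-like) units, perturb every weight by a given percentage, and estimate how often a chosen output neuron changes its decision. This Monte Carlo sketch is an assumption-laden stand-in for the paper's analytical derivation; layer sizes, trial counts, and the perturbation model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaline_layer(x, W):
    """Threshold-logic layer: sign of the weighted sums."""
    return np.sign(W @ x)

def error_probability(n_layers=3, n_per_layer=100, pct_change=0.05, trials=500):
    """Monte Carlo estimate of the probability that an output neuron of a random
    multilayer Adaline network flips its decision when every weight is perturbed
    by up to the given percentage (illustrative experiment, not the paper's derivation)."""
    flips = 0
    for _ in range(trials):
        Ws = [rng.standard_normal((n_per_layer, n_per_layer)) for _ in range(n_layers)]
        x = np.sign(rng.standard_normal(n_per_layer))
        y = x
        for W in Ws:
            y = adaline_layer(y, W)
        # perturb every weight by a random relative amount of the given magnitude
        y_pert = x
        for W in Ws:
            W_p = W * (1 + pct_change * rng.uniform(-1, 1, W.shape))
            y_pert = adaline_layer(y_pert, W_p)
        flips += y[0] != y_pert[0]          # watch a single output neuron
    return flips / trials

for pct in (0.01, 0.05, 0.1, 0.2):
    print(f"weight error {pct:.0%}: P(output flip) ~ {error_probability(pct_change=pct):.3f}")
```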
Gritsun, Taras A; le Feber, Joost; Rutten, Wim L C
2012-01-01
A typical property of isolated cultured neuronal networks of dissociated rat cortical cells is synchronized spiking, called bursting, starting about one week after plating, when the dissociated cells have sufficiently sent out their neurites and formed enough synaptic connections. This paper is the third in a series of three on simulation models of cultured networks. Our two previous studies [26], [27] have shown that random recurrent network activity models generate intra- and inter-bursting patterns similar to experimental data. The networks were noise or pacemaker-driven and had Izhikevich-neuronal elements with only short-term plastic (STP) synapses (so, no long-term potentiation, LTP, or depression, LTD, was included). However, elevated pre-phases (burst leaders) and after-phases of burst main shapes, which usually arise during the development of the network, were not yet simulated in sufficient detail. This lack of detail may be due to the fact that the random models completely missed network topology and a growth model. Therefore, the present paper adds, for the first time, a growth model to the activity model, to give the network a time-dependent topology and to explain burst shapes in more detail. Again, without LTP or LTD mechanisms. The integrated growth-activity model yielded realistic bursting patterns. The automatic adjustment of various mutually interdependent network parameters is one of the major advantages of our current approach. Spatio-temporal bursting activity was validated against experiment. Depending on network size, wave reverberation mechanisms were seen along the network boundaries, which may explain the generation of phases of elevated firing before and after the main phase of the burst shape. In summary, the results show that adding topology and growth explains burst shapes in great detail and suggests that young networks still lack/do not need LTP or LTD mechanisms.
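For orientation, the activity-model backbone of such simulations is often an Izhikevich-type network like the Python sketch below: regular-spiking units with noisy drive and random excitatory coupling, whose population spike count exhibits synchronized network events. It omits the short-term plasticity, pacemakers, growth model, and topology that the paper shows are needed to reproduce detailed burst shapes; the connectivity and weights here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random excitatory network of Izhikevich regular-spiking neurons with noisy drive
# (a stripped-down stand-in for the paper's culture model: no growth, no short-term plasticity).
N = 200
W = (rng.random((N, N)) < 0.1) * 2.0       # W[i, j] = weight from neuron j to neuron i (assumed)
np.fill_diagonal(W, 0.0)

a, b, c, d = 0.02, 0.2, -65.0, 8.0         # regular-spiking parameters (Izhikevich 2003)
v = -65.0 * np.ones(N)
u = b * v
dt = 1.0                                   # ms

spikes_per_ms = []
for step in range(5000):                   # 5 s of simulated time
    I = 5.0 * rng.standard_normal(N)       # noisy background drive
    fired = v >= 30.0
    I += W @ fired                         # synaptic input from neurons that just fired
    v[fired] = c
    u[fired] += d
    for _ in range(2):                     # two half-steps, as in Izhikevich's reference code
        v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    spikes_per_ms.append(int(fired.sum()))

rate_hz = 1000.0 * np.mean(spikes_per_ms) / N
print(f"mean firing rate: {rate_hz:.1f} Hz, peak synchrony: {max(spikes_per_ms)} spikes/ms")
```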
Kumar, Gautam; Kothare, Mayuresh V
2013-12-01
We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.
Identification of a neuronal transcription factor network involved in medulloblastoma development.
Lastowska, Maria; Al-Afghani, Hani; Al-Balool, Haya H; Sheth, Harsh; Mercer, Emma; Coxhead, Jonathan M; Redfern, Chris P F; Peters, Heiko; Burt, Alastair D; Santibanez-Koref, Mauro; Bacon, Chris M; Chesler, Louis; Rust, Alistair G; Adams, David J; Williamson, Daniel; Clifford, Steven C; Jackson, Michael S
2013-07-11
Medulloblastomas, the most frequent malignant brain tumours affecting children, comprise at least 4 distinct clinicogenetic subgroups. Aberrant sonic hedgehog (SHH) signalling is observed in approximately 25% of tumours and defines one subgroup. Although alterations in SHH pathway genes (e.g. PTCH1, SUFU) are observed in many of these tumours, high throughput genomic analyses have identified few other recurring mutations. Here, we have mutagenised the Ptch+/- murine tumour model using the Sleeping Beauty transposon system to identify additional genes and pathways involved in SHH subgroup medulloblastoma development. Mutagenesis significantly increased medulloblastoma frequency and identified 17 candidate cancer genes, including orthologs of genes somatically mutated (PTEN, CREBBP) or associated with poor outcome (PTEN, MYT1L) in the human disease. Strikingly, these candidate genes were enriched for transcription factors (p = 2 × 10^-5), the majority of which (6/7; Crebbp, Myt1L, Nfia, Nfib, Tead1 and Tgif2) were linked within a single regulatory network enriched for genes associated with a differentiated neuronal phenotype. Furthermore, activity of this network varied significantly between the human subgroups, was associated with metastatic disease, and predicted poor survival specifically within the SHH subgroup of tumours. Igf2, previously implicated in medulloblastoma, was the most differentially expressed gene in murine tumours with network perturbation, and network activity in both mouse and human tumours was characterised by enrichment for multiple gene-sets indicating increased cell proliferation, IGF signalling, MYC target upregulation, and decreased neuronal differentiation. Collectively, our data support a model of medulloblastoma development in SB-mutagenised Ptch+/- mice which involves disruption of a novel transcription factor network leading to Igf2 upregulation, proliferation of GNPs, and tumour formation. Moreover, our results identify rational therapeutic targets for SHH subgroup tumours, alongside prognostic biomarkers for the identification of poor-risk SHH patients.
Multichannel activity propagation across an engineered axon network
NASA Astrophysics Data System (ADS)
Chen, H. Isaac; Wolf, John A.; Smith, Douglas H.
2017-04-01
Objective. Although substantial progress has been made in mapping the connections of the brain, less is known about how this organization translates into brain function. In particular, the massive interconnectivity of the brain has made it difficult to specifically examine data transmission between two nodes of the connectome, a central component of the ‘neural code.’ Here, we investigated the propagation of multiple streams of asynchronous neuronal activity across an isolated in vitro ‘connectome unit.’ Approach. We used the novel technique of axon stretch growth to create a model of a long-range cortico-cortical network, a modular system consisting of paired nodes of cortical neurons connected by axon tracts. Using optical stimulation and multi-electrode array recording techniques, we explored how input patterns are represented by cortical networks, how these representations shift as they are transmitted between cortical nodes and perturbed by external conditions, and how well the downstream node distinguishes different patterns. Main results. Stimulus representations included direct, synaptic, and multiplexed responses that grew in complexity as the distance between the stimulation source and recorded neuron increased. These representations collapsed into patterns with lower information content at higher stimulation frequencies. With internodal activity propagation, a hierarchy of network pathways, including latent circuits, was revealed using glutamatergic blockade. As stimulus channels were added, divergent, non-linear effects were observed in local versus distant network layers. Pairwise difference analysis of neuronal responses suggested that neuronal ensembles generally outperformed individual cells in discriminating input patterns. Significance. Our data illuminate the complexity of spiking activity propagation in cortical networks in vitro, which is characterized by the transformation of an input into myriad outputs over several network layers. These results provide insight into how the brain potentially processes information and generates the neural code and could guide the development of clinical therapies based on multichannel brain stimulation.
D'Antò, Vincenzo; Cantile, Monica; D'Armiento, Maria; Schiavo, Giulia; Spagnuolo, Gianrico; Terracciano, Luigi; Vecchione, Raffaela; Cillo, Clemente
2006-03-01
Homeobox-containing genes play a crucial role in odontogenesis. After the detection of Dlx and Msx genes in overlapping domains along maxillary and mandibular processes, a homeobox odontogenic code has been proposed to explain the interaction between different homeobox genes during dental lamina patterning. No role has so far been assigned to the Hox gene network in the homeobox odontogenic code due to studies on specific Hox genes and evolutionary considerations. Despite its involvement in early patterning during embryonal development, the HOX gene network, located in the most repeat-poor regions of the human genome, controls the phenotype identity of adult eukaryotic cells. Here, according to our results, the HOX gene network appears to be active in human tooth germs between 18 and 24 weeks of development. The immunohistochemical localization of specific HOX proteins mostly concerns the epithelial tooth germ compartment. Furthermore, only a few genes of the network are active in embryonal retromolar tissues, as well as in ectomesenchymal dental pulp cells (DPC) grown in vitro from adult human molar. Exposure of DPCs to cAMP induces the expression of three to nine HOX genes of the network in parallel with phenotype modifications with traits of neuronal differentiation. Our observations suggest that: (i) by combining its component genes, the HOX gene network determines the phenotype identity of epithelial and ectomesenchymal cells interacting in the generation of human tooth germ; (ii) cAMP treatment activates the HOX network and induces, in parallel, a neuronal-like phenotype in human primary ectomesenchymal dental pulp cells.
NASA Astrophysics Data System (ADS)
Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan
2018-02-01
Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, modeling self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to better-reflected competition among different neurons in the developed SNN model, as well as more effectively encoded and processed dynamic information through its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
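The two plasticity rules combined in this approach can be sketched jointly as below: pair-based STDP (potentiation by the presynaptic trace on a postsynaptic spike, depression by the postsynaptic trace on a presynaptic spike) shapes the recurrent weights, while an intrinsic-plasticity rule nudges each neuron's threshold toward a target rate. The binary-unit network and all constants in this Python sketch are illustrative assumptions; the paper's reservoir uses spiking neurons inside a Liquid State Machine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pair-based STDP plus a simple intrinsic-plasticity (IP) rule that adapts each neuron's
# threshold toward a target firing rate. All constants are illustrative, not the paper's.
N = 50
W = rng.random((N, N)) * 0.2            # W[i, j] = weight from neuron j to neuron i
np.fill_diagonal(W, 0.0)
theta = np.ones(N)                      # per-neuron excitability threshold (adapted by IP)

A_plus, A_minus, tau_stdp = 0.005, 0.006, 20.0   # STDP amplitudes and trace time constant (ms)
eta_ip, target = 1e-3, 0.05                      # IP learning rate, target spike probability per ms
w_max, dt = 0.5, 1.0

trace = np.zeros(N)                     # exponentially decaying spike traces
spiked = np.zeros(N, dtype=bool)

for step in range(20_000):
    drive = W @ spiked + 0.4 * rng.random(N)
    spiked = drive > theta
    # STDP: a postsynaptic spike potentiates by the presynaptic trace,
    #       a presynaptic spike depresses by the postsynaptic trace.
    W += A_plus * np.outer(spiked, trace)
    W -= A_minus * np.outer(trace, spiked)
    np.clip(W, 0.0, w_max, out=W)
    np.fill_diagonal(W, 0.0)
    # IP: shift the threshold so the long-run firing probability approaches the target.
    theta += eta_ip * (spiked - target)
    trace = trace * np.exp(-dt / tau_stdp) + spiked

print("fraction of neurons spiking on the final step:", spiked.mean())
print("recurrent weights: mean", round(float(W.mean()), 3), "max", round(float(W.max()), 3))
```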
From network structure to network reorganization: implications for adult neurogenesis
NASA Astrophysics Data System (ADS)
Schneider-Mizell, Casey M.; Parent, Jack M.; Ben-Jacob, Eshel; Zochowski, Michal R.; Sander, Leonard M.
2010-12-01
Networks can be dynamical systems that undergo functional and structural reorganization. One example of such a process is adult hippocampal neurogenesis, in which new cells are continuously born and incorporate into the existing network of the dentate gyrus region of the hippocampus. Many of these introduced cells mature and become indistinguishable from established neurons, joining the existing network. Activity in the network environment is known to promote birth, survival and incorporation of new cells. However, after epileptogenic injury, changes to the connectivity structure around the neurogenic niche are known to correlate with aberrant neurogenesis. The possible role of network-level changes in the development of epilepsy is not well understood. In this paper, we use a computational model to investigate how the structural and functional outcomes of network reorganization, driven by addition of new cells during neurogenesis, depend on the original network structure. We find that there is a stable network topology that allows the network to incorporate new neurons in a manner that enhances activity of the persistently active region, but maintains global network properties. In networks having other connectivity structures, new cells can greatly alter the distribution of firing activity and destroy the initial activity patterns. We thus find that new cells are able to provide focused enhancement of network activity only for small-world networks with sufficient inhibition. Network-level deviations from this topology, such as those caused by epileptogenic injury, can set the network down a path that develops toward pathological dynamics and aberrant structural integration of new cells.
Network activity of mirror neurons depends on experience.
Ushakov, Vadim L; Kartashov, Sergey I; Zavyalova, Victoria V; Bezverhiy, Denis D; Posichanyuk, Vladimir I; Terentev, Vasliliy N; Anokhin, Konstantin V
2013-03-01
In this work, we investigated how the network activity of mirror neuron systems in the animal brain depends on experience (whether or not the animal has previously performed the observed actions). Mirror neuron networks were studied in C57/BL6 mice observing demonstrator mice swimming in a Morris water maze. Mirror neuron systems were detected in motor cortices M1 and M2, the cingulate cortex, and the hippocampus, both in mice with prior swimming experience and in mice without it. We conclude that mirror neuron systems may support the formation of new functional network systems and the acquisition of new knowledge through observation in non-specific tasks.
Optimization Methods for Spiking Neurons and Networks
Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph
2011-01-01
Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265
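Since the paper's GA pipeline and hardware are not reproduced here, the following toy sketch only illustrates the general idea of evolving neuron parameters against a firing-rate target for a simple adaptive leaky integrate-and-fire model; the parameter bounds, fitness function and GA settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy genetic algorithm tuning three parameters of an adaptive leaky
# integrate-and-fire neuron so that it fires near a target rate. A generic
# illustration of the GA idea for neuron configuration, not the authors'
# Mihalas-Niebur or hardware setup; all constants are assumptions.
def firing_rate(params, i_ext=1.5e-9, t_sim=0.5, dt=1e-4):
    tau_m, tau_w, b = params                      # membrane tau (s), adaptation tau (s), jump (A)
    c_m, v_rest, v_thr = 200e-12, -70e-3, -50e-3
    v, w, n_spikes = v_rest, 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (-(v - v_rest) / tau_m + (i_ext - w) / c_m)
        w += dt * (-w / tau_w)
        if v >= v_thr:                            # spike: reset and add adaptation
            v, w, n_spikes = v_rest, w + b, n_spikes + 1
    return n_spikes / t_sim

def fitness(params, target=20.0):
    return -abs(firing_rate(params) - target)     # best possible fitness is 0

def tournament(pop, fit):
    idx = rng.choice(len(pop), size=3, replace=False)
    return pop[idx[np.argmax(fit[idx])]]

low = np.array([5e-3, 50e-3, 1e-12])              # lower parameter bounds
high = np.array([50e-3, 500e-3, 200e-12])         # upper parameter bounds
pop = rng.uniform(low, high, size=(24, 3))

for _ in range(15):
    fit = np.array([fitness(p) for p in pop])
    children = [pop[np.argmax(fit)].copy()]                        # elitism
    while len(children) < len(pop):
        alpha = rng.random()
        child = alpha * tournament(pop, fit) + (1 - alpha) * tournament(pop, fit)
        child += rng.normal(0.0, 0.05, size=3) * (high - low)      # Gaussian mutation
        children.append(np.clip(child, low, high))
    pop = np.array(children)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best parameters:", best, "rate (Hz):", firing_rate(best))
```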
Hox Genes: Choreographers in Neural Development, Architects of Circuit Organization
Philippidou, Polyxeni; Dasen, Jeremy S.
2013-01-01
Summary The neural circuits governing vital behaviors, such as respiration and locomotion, are comprised of discrete neuronal populations residing within the brainstem and spinal cord. Work over the past decade has provided a fairly comprehensive understanding of the developmental pathways that determine the identity of major neuronal classes within the neural tube. However, the steps through which neurons acquire the subtype diversities necessary for their incorporation into a particular circuit are still poorly defined. Studies on the specification of motor neurons indicate that the large family of Hox transcription factors has a key role in generating the subtypes required for selective muscle innervation. There is also emerging evidence that Hox genes function in multiple neuronal classes to shape synaptic specificity during development, suggesting a broader role in circuit assembly. This review highlights the functions and mechanisms of Hox gene networks, and their multifaceted roles during neuronal specification and connectivity. PMID:24094100
Making sense out of spinal cord somatosensory development
Seal, Rebecca P.
2016-01-01
The spinal cord integrates and relays somatosensory input, leading to complex motor responses. Research over the past couple of decades has identified transcription factor networks that function during development to define and instruct the generation of diverse neuronal populations within the spinal cord. A number of studies have now started to connect these developmentally defined populations with their roles in somatosensory circuits. Here, we review our current understanding of how neuronal diversity in the dorsal spinal cord is generated and we discuss the logic underlying how these neurons form the basis of somatosensory circuits. PMID:27702783
Nonlinear Maps for Design of Discrete-Time Models of Neuronal Network Dynamics
2016-03-31
2016 performance/technical report, 03-01-2016 to 03-31-2016. ... simulations is to design a neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this ... responsive firing patterns. We propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for ...
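The report's own map model is not given in this fragment; as a generic example of the difference-equation approach it describes, here is the widely used Rulkov map with standard illustrative parameters.

```python
import numpy as np

# A generic discrete-time map neuron (the Rulkov map), shown only as an
# illustration of the difference-equation approach; it is not the specific
# model developed in this report. Parameter values are standard illustrative
# choices (alpha > 4 gives chaotic spiking-bursting behaviour).
def rulkov(n_steps=20000, alpha=4.1, mu=0.001, sigma=0.1):
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = -1.0, -3.5
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]        # fast (membrane-like) variable
        y[n + 1] = y[n] - mu * (x[n] + 1.0) + mu * sigma   # slow (adaptation-like) variable
    return x, y

x, y = rulkov()
print("range of fast variable:", x.min(), x.max())
```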
Multi-channels coupling-induced pattern transition in a tri-layer neuronal network
NASA Astrophysics Data System (ADS)
Wu, Fuqiang; Wang, Ya; Ma, Jun; Jin, Wuyin; Hobiny, Aatef
2018-03-01
Neurons in the nervous system show complex electrical behaviors due to complex connection types and diversity in excitability. A tri-layer network is constructed to investigate signal propagation and pattern formation by selecting different coupling channels between layers. Each layer is set to a different state, and the local kinetics is described by the Hindmarsh-Rose neuron model. By changing the number of coupling channels between layers and the state of the first layer, the collective behavior of each layer and the synchronization pattern of the network are investigated. A statistical factor of synchronization is calculated for each layer. It is found that the quiescent state in the second layer can be excited and the disordered state in the third layer is suppressed when the first layer is controlled by a pacemaker, and that the developed state depends on the number of coupling channels. Furthermore, a collapse of the first layer can cause the breakdown of the other layers in the network; the mechanism is that the disordered state in the third layer is enhanced when sampled signals from the collapsed layer impose a continuous disturbance on the next layer.
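For readers unfamiliar with the local node model, a single Hindmarsh-Rose neuron can be integrated in a few lines; the sketch below uses the common textbook parameters and forward Euler, and does not include the authors' inter-layer coupling.

```python
import numpy as np

# Single Hindmarsh-Rose neuron integrated with forward Euler; a minimal
# sketch of the local node dynamics used in the tri-layer network, not the
# authors' full coupled model. Parameters are the standard textbook values.
def hindmarsh_rose(I=3.0, dt=0.01, t_max=2000.0,
                   a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x_rest=-1.6):
    n = int(t_max / dt)
    x, y, z = -1.6, -10.0, 2.0
    trace = np.empty(n)
    for i in range(n):
        dx = y + b * x**2 - a * x**3 - z + I   # membrane potential
        dy = c - d * x**2 - y                  # fast recovery current
        dz = r * (s * (x - x_rest) - z)        # slow adaptation current
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace[i] = x
    return trace

v = hindmarsh_rose()
print("range of membrane variable:", v.min(), v.max())
```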
Pharmacological Tools to Study the Role of Astrocytes in Neural Network Functions.
Peña-Ortega, Fernando; Rivera-Angulo, Ana Julia; Lorea-Hernández, Jonathan Julio
2016-01-01
Despite that astrocytes and microglia do not communicate by electrical impulses, they can efficiently communicate among them, with each other and with neurons, to participate in complex neural functions requiring broad cell-communication and long-lasting regulation of brain function. Glial cells express many receptors in common with neurons; secrete gliotransmitters as well as neurotrophic and neuroinflammatory factors, which allow them to modulate synaptic transmission and neural excitability. All these properties allow glial cells to influence the activity of neuronal networks. Thus, the incorporation of glial cell function into the understanding of nervous system dynamics will provide a more accurate view of brain function. Our current knowledge of glial cell biology is providing us with experimental tools to explore their participation in neural network modulation. In this chapter, we review some of the classical, as well as some recent, pharmacological tools developed for the study of astrocyte's influence in neural function. We also provide some examples of the use of these pharmacological agents to understand the role of astrocytes in neural network function and dysfunction.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946
Linking structure and activity in nonlinear spiking networks
Josić, Krešimir; Shea-Brown, Eric
2017-01-01
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks’ spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities—including those of different cell types—combine with connectivity to shape population activity and function. PMID:28644840
Demertzi, Athena; Gómez, Francisco; Crone, Julia Sophia; Vanhaudenhuyse, Audrey; Tshibanda, Luaba; Noirhomme, Quentin; Thonnard, Marie; Charland-Verville, Vanessa; Kirsch, Murielle; Laureys, Steven; Soddu, Andrea
2014-03-01
In healthy conditions, group-level fMRI resting state analyses identify ten resting state networks (RSNs) of cognitive relevance. Here, we aim to assess the ten-network model in severely brain-injured patients suffering from disorders of consciousness and to identify those networks which will be most relevant to discriminate between patients and healthy subjects. 300 fMRI volumes were obtained in 27 healthy controls and 53 patients in minimally conscious state (MCS), vegetative state/unresponsive wakefulness syndrome (VS/UWS) and coma. Independent component analysis (ICA) reduced data dimensionality. The ten networks were identified by means of a multiple template-matching procedure and were tested on neuronality properties (neuronal vs non-neuronal) in a data-driven way. Univariate analyses detected between-group differences in networks' neuronal properties and estimated voxel-wise functional connectivity in the networks, which were significantly less identifiable in patients. A nearest-neighbor "clinical" classifier was used to determine the networks with high between-group discriminative accuracy. Healthy controls were characterized by more neuronal components compared to patients in VS/UWS and in coma. Compared to healthy controls, fewer patients in MCS and VS/UWS showed components of neuronal origin for the left executive control network, default mode network (DMN), auditory, and right executive control network. The "clinical" classifier indicated the DMN and auditory network with the highest accuracy (85.3%) in discriminating patients from healthy subjects. FMRI multiple-network resting state connectivity is disrupted in severely brain-injured patients suffering from disorders of consciousness. When performing ICA, multiple-network testing and control for neuronal properties of the identified RSNs can advance fMRI system-level characterization. Automatic data-driven patient classification is the first step towards future single-subject objective diagnostics based on fMRI resting state acquisitions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Synchronous behaviour in network model based on human cortico-cortical connections.
Protachevicz, Paulo Ricardo; Borges, Rafael Ribaski; Reis, Adriane da Silva; Borges, Fernando da Silva; Iarosz, Kelly Cristina; Caldas, Ibere Luiz; Lameu, Ewandson Luiz; Macau, Elbert Einstein Nehrer; Viana, Ricardo Luiz; Sokolov, Igor M; Ferrari, Fabiano A S; Kurths, Jürgen; Batista, Antonio Marcos
2018-06-22
We consider a network topology according to the cortico-cortical connection network of the human brain, where each cortical area is composed of a random network of adaptive exponential integrate-and-fire neurons. Depending on the parameters, this neuron model can exhibit spike or burst patterns. As a diagnostic tool to identify spike and burst patterns we utilise the coefficient of variation of the neuronal inter-spike interval. In our neuronal network, we verify the existence of spike and burst synchronisation in different cortical areas. Our simulations show that the network arrangement, i.e., its rich-club organisation, plays an important role in the transition of the areas from desynchronous to synchronous behaviours. © 2018 Institute of Physics and Engineering in Medicine.
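The spike/burst diagnostic used here is straightforward to compute from spike times; the generic sketch below (not the authors' code) returns the ISI coefficient of variation, which is low for regular tonic spiking and high for irregular or bursting activity.

```python
import numpy as np

# Generic computation of the inter-spike-interval (ISI) coefficient of
# variation used as a spike/burst diagnostic; not the authors' code.
def isi_cv(spike_times):
    """CV = std(ISI) / mean(ISI); low values indicate regular (tonic) spiking,
    high values indicate irregular or bursting activity."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()

regular = np.arange(0.0, 1.0, 0.02)                      # 50 Hz tonic spiking
bursty = np.concatenate([np.arange(0, 0.01, 0.002) + k * 0.2 for k in range(5)])
print(isi_cv(regular), isi_cv(bursty))
```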
Srinivasa, Narayan; Cho, Youngkwan
2014-01-01
A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be readout by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns—both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity. PMID:25566045
Xiao, Min; Zheng, Wei Xing; Cao, Jinde
2013-01-01
Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of the delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and is also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.
Atypical cross talk between mentalizing and mirror neuron networks in autism spectrum disorder.
Fishman, Inna; Keown, Christopher L; Lincoln, Alan J; Pineda, Jaime A; Müller, Ralph-Axel
2014-07-01
Converging evidence indicates that brain abnormalities in autism spectrum disorder (ASD) involve atypical network connectivity, but it is unclear whether altered connectivity is especially prominent in brain networks that participate in social cognition. To investigate whether adolescents with ASD show altered functional connectivity in 2 brain networks putatively impaired in ASD and involved in social processing, theory of mind (ToM) and mirror neuron system (MNS). Cross-sectional study using resting-state functional magnetic resonance imaging involving 25 adolescents with ASD between the ages of 11 and 18 years and 25 typically developing adolescents matched for age, handedness, and nonverbal IQ. Statistical parametric maps testing the degree of whole-brain functional connectivity and social functioning measures. Relative to typically developing controls, participants with ASD showed a mixed pattern of both over- and underconnectivity in the ToM network, which was associated with greater social impairment. Increased connectivity in the ASD group was detected primarily between the regions of the MNS and ToM, and was correlated with sociocommunicative measures, suggesting that excessive ToM-MNS cross talk might be associated with social impairment. In a secondary analysis comparing a subset of the 15 participants with ASD with the most severe symptomology and a tightly matched subset of 15 typically developing controls, participants with ASD showed exclusive overconnectivity effects in both ToM and MNS networks, which were also associated with greater social dysfunction. Adolescents with ASD showed atypically increased functional connectivity involving the mentalizing and mirror neuron systems, largely reflecting greater cross talk between the 2. This finding is consistent with emerging evidence of reduced network segregation in ASD and challenges the prevailing theory of general long-distance underconnectivity in ASD. This excess ToM-MNS connectivity may reflect immature or aberrant developmental processes in 2 brain networks involved in understanding of others, a domain of impairment in ASD. Further, robust links with sociocommunicative symptoms of ASD implicate atypically increased ToM-MNS connectivity in social deficits observed in ASD.
Girotto, Fernando; Scott, Lucas; Avchalumov, Yosef; Harris, Jacqueline; Iannattone, Stephanie; Drummond-Main, Chris; Tobias, Rose; Bello-Espinosa, Luis; Rho, Jong M.; Davidsen, Jörn; Teskey, G. Campbell; Colicos, Michael A.
2013-01-01
Maternal folic acid supplementation is essential to reduce the risk of neural tube defects. We hypothesize that high levels of folic acid throughout gestation may produce neural networks more susceptible to seizure in offspring. We hence administered large doses of folic acid to rats before and during gestation and found their offspring had a 42% decrease in their seizure threshold. In vitro, acute application of folic acid or its metabolite 4Hfolate to neurons induced hyper-excitability and bursting. Cultured neuronal networks which develop in the presence of a low concentration (50 nM) of 4Hfolate had reduced capacity to stabilize their network dynamics after a burst of high-frequency activity, and an increase in the frequency of mEPSCs. Networks reared in the presence of the folic acid metabolite 5M4Hfolate developed a spontaneous, distinctive bursting pattern, and both metabolites produced an increase in synaptic density. PMID:23492951
A solution to neural field equations by a recurrent neural network method
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2012-09-01
Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from the single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. The technique is developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers and give it a speed advantage over traditional methods.
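As a point of reference for the discretization step the article starts from, the sketch below turns a 1D Amari-type neural field with a Gaussian kernel and sigmoidal firing rate into a system of ODEs and integrates it with forward Euler; the kernel, nonlinearity and parameters are illustrative assumptions, and the Hopfield solver itself is not shown.

```python
import numpy as np

# Spatial discretization of a 1D Amari-type neural field,
#   du/dt = -u + (w * f(u)) + I,
# into a system of ODEs (one per grid point), integrated with forward Euler.
# Only the discretization step is illustrated; kernel, firing-rate function
# and parameters are assumptions, and the article's Hopfield solver is omitted.
L, N = 20.0, 200                      # domain length and number of grid points
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
kernel = np.exp(-x**2 / 2.0)          # Gaussian connectivity kernel w(x)

def f(u):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))

def rhs(u, I):
    conv = dx * np.convolve(f(u), kernel, mode="same")  # w * f(u)
    return -u + conv + I

u = np.zeros(N)
I = 0.8 * np.exp(-x**2)               # localized external input
dt = 0.01
for _ in range(2000):
    u = u + dt * rhs(u, I)
print("peak activity:", u.max())
```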
Interplay between population firing stability and single neuron dynamics in hippocampal networks
Slomowitz, Edden; Styr, Boaz; Vertkin, Irena; Milshtein-Parush, Hila; Nelken, Israel; Slutsky, Michael; Slutsky, Inna
2015-01-01
Neuronal circuits' ability to maintain the delicate balance between stability and flexibility in changing environments is critical for normal neuronal functioning. However, to what extent individual neurons and neuronal populations maintain internal firing properties remains largely unknown. In this study, we show that distributions of spontaneous population firing rates and synchrony are subject to accurate homeostatic control following increase of synaptic inhibition in cultured hippocampal networks. Reduction in firing rate triggered synaptic and intrinsic adaptive responses operating as global homeostatic mechanisms to maintain firing macro-stability, without achieving local homeostasis at the single-neuron level. Adaptive mechanisms, while stabilizing population firing properties, reduced short-term facilitation essential for synaptic discrimination of input patterns. Thus, invariant ongoing population dynamics emerge from intrinsically unstable activity patterns of individual neurons and synapses. The observed differences in the precision of homeostatic control at different spatial scales challenge cell-autonomous theory of network homeostasis and suggest the existence of network-wide regulation rules. DOI: http://dx.doi.org/10.7554/eLife.04378.001 PMID:25556699
Software for Brain Network Simulations: A Comparative Study
Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.
2017-01-01
Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if the model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in the computational performance toward specific types of brain network models. PMID:28775687
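To make the "concise language" observation concrete, a minimal Brian 2 script for a small leaky integrate-and-fire network is shown below; it is a generic example, not one of the article's benchmark models, and the network sizes and weights are arbitrary.

```python
from brian2 import (NeuronGroup, Synapses, PoissonInput, SpikeMonitor,
                    run, ms, mV, Hz)

# Minimal Brian 2 script for a small LIF network -- a generic illustration of
# the "concise language" observation, not one of the article's benchmark cases.
tau = 10 * ms
G = NeuronGroup(100, "dv/dt = -v/tau : volt (unless refractory)",
                threshold="v > 15*mV", reset="v = 0*mV",
                refractory=2 * ms, method="exact")
inp = PoissonInput(G, "v", N=200, rate=15 * Hz, weight=0.5 * mV)  # external drive
S = Synapses(G, G, on_pre="v += 0.1*mV")                          # weak recurrent excitation
S.connect(p=0.1)
M = SpikeMonitor(G)
run(500 * ms)
print("spikes recorded:", M.num_spikes)
```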
Ostojic, Srdjan; Brunel, Nicolas; Hakim, Vincent
2009-06-01
We investigate how synchrony can be generated or induced in networks of electrically coupled integrate-and-fire neurons subject to noisy and heterogeneous inputs. Using analytical tools, we find that in a network under constant external inputs, synchrony can appear via a Hopf bifurcation from the asynchronous state to an oscillatory state. In a homogeneous network, in the oscillatory state all neurons fire in synchrony, while in a heterogeneous network synchrony is looser, many neurons skipping cycles of the oscillation. If the transmission of action potentials via the electrical synapses is effectively excitatory, the Hopf bifurcation is supercritical, while effectively inhibitory transmission due to pronounced hyperpolarization leads to a subcritical bifurcation. In the latter case, the network exhibits bistability between an asynchronous state and an oscillatory state where all the neurons fire in synchrony. Finally we show that for time-varying external inputs, electrical coupling enhances the synchronization in an asynchronous network via a resonance at the firing-rate frequency.
Heuckeroth, R O; Lampe, P A; Johnson, E M; Milbrandt, J
1998-08-01
Signaling through the c-Ret tyrosine kinase and the endothelin B receptor pathways is known to be critical for development of the enteric nervous system. To clarify the role of these receptors in enteric nervous system development, the effect of ligands for these receptors was examined on rat enteric neuron precursors in fully defined medium in primary culture. In this culture system, dividing Ret-positive cells differentiate, cluster into ganglia containing neurons and enteric glia, and create extensive networks reminiscent of the enteric plexus established in vivo. Glial cell-line-derived neurotrophic factor (GDNF) and neurturin both potently support survival and proliferation of enteric neuron precursors in this system. Addition of either neurturin or GDNF to these cultures increased the number of both neurons and enteric glia. Persephin, a third GDNF family member, shares many properties with neurturin and GDNF in the central nervous system and in kidney development. By contrast, persephin does not promote enteric neuron precursor proliferation or survival in these cultures. Endothelin-3 also does not increase the number of enteric neurons or glia in these cultures. Copyright 1998 Academic Press.
Neuronal networks: flip-flops in the brain.
McCormick, David A
2005-04-26
Neuronal activity can rapidly flip-flop between stable states. Although these semi-stable states can be generated through interactions of neuronal networks, it is now known that they can also occur in vivo through intrinsic ionic currents.
Six Networks on a Universal Neuromorphic Computing Substrate
Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A.; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz
2013-01-01
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality. PMID:23423583
Bifurcations of large networks of two-dimensional integrate and fire neurons.
Nicola, Wilten; Campbell, Sue Ann
2013-08-01
Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
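For reference, one member of this model class, the Izhikevich neuron in its common regular-spiking parameterization, can be simulated as follows; this is a generic single-neuron sketch, not the paper's coupled network or its mean-field reduction.

```python
import numpy as np

# One member of the two-dimensional integrate-and-fire class discussed here:
# the Izhikevich model in its common regular-spiking parameterization,
# integrated with forward Euler. A generic single-neuron sketch, not the
# paper's network or mean-field model.
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.1, t_max=1000.0):
    n = int(t_max / dt)
    v, u = -65.0, b * -65.0
    spikes = []
    for i in range(n):
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)  # fast voltage variable
        u += dt * a * (b * v - u)                          # slow recovery variable
        if v >= 30.0:                                      # spike: reset with adaptation jump
            spikes.append(i * dt)
            v, u = c, u + d
    return spikes

print("spike count in 1 s:", len(izhikevich()))
```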
Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity
Abbott, L. F.; Sompolinsky, Haim
2017-01-01
Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly. PMID:29042519
Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun
2014-01-01
Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Numerical methods for solving moment equations in kinetic theory of neuronal network dynamics
NASA Astrophysics Data System (ADS)
Rangan, Aaditya V.; Cai, David; Tao, Louis
2007-02-01
Recently developed kinetic theory and related closures for neuronal network dynamics have been demonstrated to be a powerful theoretical framework for investigating coarse-grained dynamical properties of neuronal networks. The moment equations arising from the kinetic theory are a system of (1 + 1)-dimensional nonlinear partial differential equations (PDE) on a bounded domain with nonlinear boundary conditions. The PDEs themselves are self-consistently specified by parameters which are functions of the boundary values of the solution. The moment equations can be stiff in space and time. Numerical methods are presented here for efficiently and accurately solving these moment equations. The essential ingredients in our numerical methods include: (i) the system is discretized in time with an implicit Euler method within a spectral deferred correction framework; therefore, the PDEs of the kinetic theory are reduced to a sequence, in time, of boundary value problems (BVPs) with nonlinear boundary conditions; (ii) a set of auxiliary parameters is introduced to recast the original BVP with nonlinear boundary conditions as BVPs with linear boundary conditions - with additional algebraic constraints on the auxiliary parameters; (iii) a careful combination of two Newton's iterates for the nonlinear BVP with linear boundary condition, interlaced with a Newton's iterate for solving the associated algebraic constraints, is constructed to achieve quadratic convergence for obtaining the solutions with self-consistent parameters. It is shown that a simple fixed-point iteration can only achieve a linear convergence for the self-consistent parameters. The practicability and efficiency of our numerical methods for solving the moment equations of the kinetic theory are illustrated with numerical examples. It is further demonstrated that the moment equations derived from the kinetic theory of neuronal network dynamics can very well capture the coarse-grained dynamical properties of integrate-and-fire neuronal networks.
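The full scheme (spectral deferred correction, nonlinear BVPs, interlaced Newton iterations) is beyond a short fragment, so the sketch below only illustrates the innermost ingredient, a backward Euler step solved with Newton's method, on a generic stiff scalar ODE chosen purely for illustration.

```python
import numpy as np

# Innermost ingredient only: an implicit (backward) Euler step solved with
# Newton's method, applied to a generic stiff test ODE u' = -50*(u - cos(t)).
# This illustrates the implicit time discretization idea, not the paper's
# spectral-deferred-correction scheme or its moment equations.
def f(t, u):
    return -50.0 * (u - np.cos(t))

def dfdu(t, u):
    return -50.0

def backward_euler_step(t, u, dt, tol=1e-12, max_iter=20):
    """Solve g(v) = v - u - dt*f(t+dt, v) = 0 for the new value v."""
    v = u                                    # initial Newton guess
    for _ in range(max_iter):
        g = v - u - dt * f(t + dt, v)
        dg = 1.0 - dt * dfdu(t + dt, v)
        step = g / dg
        v -= step
        if abs(step) < tol:
            break
    return v

t, u, dt = 0.0, 1.0, 0.05                    # dt far larger than the 1/50 stiffness scale
for _ in range(100):
    u = backward_euler_step(t, u, dt)
    t += dt
print("u(5) ~", u, "reference cos(5) ~", np.cos(5.0))
```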
Emergent coordination underlying learning to reach to grasp with a brain-machine interface.
Vaidya, Mukta; Balasubramanian, Karthikeyan; Southerland, Joshua; Badreldin, Islam; Eleryan, Ahmed; Shattuck, Kelsey; Gururangan, Suchin; Slutzky, Marc; Osborne, Leslie; Fagg, Andrew; Oweiss, Karim; Hatsopoulos, Nicholas G
2018-04-01
The development of coordinated reach-to-grasp movement has been well studied in infants and children. However, the role of motor cortex during this development is unclear because it is difficult to study in humans. We took the approach of using a brain-machine interface (BMI) paradigm in rhesus macaques with prior therapeutic amputations to examine the emergence of novel, coordinated reach to grasp. Previous research has shown that after amputation, the cortical area previously involved in the control of the lost limb undergoes reorganization, but prior BMI work has largely relied on finding neurons that already encode specific movement-related information. In this study, we taught macaques to cortically control a robotic arm and hand through operant conditioning, using neurons that were not explicitly reach or grasp related. Over the course of training, stereotypical patterns emerged and stabilized in the cross-covariance between the reaching and grasping velocity profiles, between pairs of neurons involved in controlling reach and grasp, and to a comparable, but lesser, extent between other stable neurons in the network. In fact, we found evidence of this structured coordination between pairs composed of all combinations of neurons decoding reach or grasp and other stable neurons in the network. The degree of and participation in coordination was highly correlated across all pair types. Our approach provides a unique model for studying the development of novel, coordinated reach-to-grasp movement at the behavioral and cortical levels. NEW & NOTEWORTHY Given that motor cortex undergoes reorganization after amputation, our work focuses on training nonhuman primates with chronic amputations to use neurons that are not reach or grasp related to control a robotic arm to reach to grasp through the use of operant conditioning, mimicking early development. We studied the development of a novel, coordinated behavior at the behavioral and cortical level, and the neural plasticity in M1 associated with learning to use a brain-machine interface.
The Complexity of Dynamics in Small Neural Circuits
Panzeri, Stefano
2016-01-01
Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network that reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737
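A minimal example of the kind of homogeneous excitatory-inhibitory firing-rate circuit analyzed here is sketched below in the Wilson-Cowan style; the rate function, weights and time constants are illustrative assumptions rather than the authors' equations.

```python
import numpy as np

# Generic two-population excitatory-inhibitory firing-rate model
# (Wilson-Cowan style), integrated with forward Euler. Parameters are
# illustrative; this is not the authors' specific small-circuit model.
def phi(x):
    return 1.0 / (1.0 + np.exp(-x))          # sigmoidal rate function

w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0
theta_e, theta_i = 3.0, 4.0                  # activation thresholds
tau_e, tau_i = 10.0, 20.0                    # time constants (ms)

r_e, r_i = 0.1, 0.1
dt, trace = 0.1, []
for _ in range(20000):                        # 2 s of simulated time
    dre = (-r_e + phi(w_ee * r_e - w_ei * r_i - theta_e)) / tau_e
    dri = (-r_i + phi(w_ie * r_e - w_ii * r_i - theta_i)) / tau_i
    r_e, r_i = r_e + dt * dre, r_i + dt * dri
    trace.append(r_e)
print("E-rate range:", min(trace), max(trace))
```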
Patel, Tapan P.; Ventre, Scott C.; Geddes-Klein, Donna; Singh, Pallab K.
2014-01-01
Alterations in the activity of neural circuits are a common consequence of traumatic brain injury (TBI), but the relationship between single-neuron properties and the aggregate network behavior is not well understood. We recently reported that the GluN2B-containing NMDA receptors (NMDARs) are key in mediating mechanical forces during TBI, and that TBI produces a complex change in the functional connectivity of neuronal networks. Here, we evaluated whether cell-to-cell heterogeneity in the connectivity and aggregate contribution of GluN2B receptors to [Ca2+]i before injury influenced the functional rewiring, spontaneous activity, and network plasticity following injury using primary rat cortical dissociated neurons. We found that the functional connectivity of a neuron to its neighbors, combined with the relative influx of calcium through distinct NMDAR subtypes, together contributed to the individual neuronal response to trauma. Specifically, individual neurons whose [Ca2+]i oscillations were largely due to GluN2B NMDAR activation lost many of their functional targets 1 h following injury. In comparison, neurons with large GluN2A contribution or neurons with high functional connectivity both independently protected against injury-induced loss in connectivity. Mechanistically, we found that traumatic injury resulted in increased uncorrelated network activity, an effect linked to reduction of the voltage-sensitive Mg2+ block of GluN2B-containing NMDARs. This uncorrelated activation of GluN2B subtypes after injury significantly limited the potential for network remodeling in response to a plasticity stimulus. Together, our data suggest that two single-cell characteristics, the aggregate contribution of NMDAR subtypes and the number of functional connections, influence network structure following traumatic injury. PMID:24647941
Excitatory signal flow and connectivity in a cortical column: focus on barrel cortex.
Lübke, Joachim; Feldmeyer, Dirk
2007-07-01
A basic feature of the neocortex is its organization in functional, vertically oriented columns, recurring modules of signal processing and a system of transcolumnar long-range horizontal connections. These columns, together with their network of neurons, present in all sensory cortices, are the cellular substrate for sensory perception in the brain. Cortical columns contain thousands of neurons and span all cortical layers. They receive input from other cortical areas and subcortical brain regions and in turn their neurons provide output to various areas of the brain. The modular concept presumes that the neuronal network in a cortical column performs basic signal transformations, which are then integrated with the activity in other networks and more extended brain areas. To understand how sensory signals from the periphery are transformed into electrical activity in the neocortex it is essential to elucidate the spatial-temporal dynamics of cortical signal processing and the underlying neuronal 'microcircuits'. In the last decade the 'barrel' field in the rodent somatosensory cortex, which processes sensory information arriving from the mystacial vibrissae, has become quite an attractive model system because here the columnar structure is clearly visible. In the neocortex and in particular the barrel cortex, numerous neuronal connections within or between cortical layers have been studied both at the functional and structural level. Besides similarities, clear differences with respect to both physiology and morphology of synaptic transmission and connectivity were found. It is therefore necessary to investigate each neuronal connection individually, in order to develop a realistic model of neuronal connectivity and organization of a cortical column. This review attempts to summarize recent advances in the study of individual microcircuits and their functional relevance within the framework of a cortical column, with emphasis on excitatory signal flow.
Hull, Michael J.; Soffe, Stephen R.; Willshaw, David J.; Roberts, Alan
2016-01-01
What cellular and network properties allow reliable neuronal rhythm generation or firing that can be started and stopped by brief synaptic inputs? We investigate rhythmic activity in an electrically-coupled population of brainstem neurons driving swimming locomotion in young frog tadpoles, and how activity is switched on and off by brief sensory stimulation. We build a computational model of 30 electrically-coupled conditional pacemaker neurons on one side of the tadpole hindbrain and spinal cord. Based on experimental estimates for neuron properties, population sizes, synapse strengths and connections, we show that: long-lasting, mutual, glutamatergic excitation between the neurons allows the network to sustain rhythmic pacemaker firing at swimming frequencies following brief synaptic excitation; activity persists but rhythm breaks down without electrical coupling; NMDA voltage-dependency doubles the range of synaptic feedback strengths generating sustained rhythm. The network can be switched on and off at short latency by brief synaptic excitation and inhibition. We demonstrate that a population of generic Hodgkin-Huxley type neurons coupled by glutamatergic excitatory feedback can generate sustained asynchronous firing switched on and off synaptically. We conclude that networks of neurons with NMDAR mediated feedback excitation can generate self-sustained activity following brief synaptic excitation. The frequency of activity is limited by the kinetics of the neuron membrane channels and can be stopped by brief inhibitory input. Network activity can be rhythmic at lower frequencies if the neurons are electrically coupled. Our key finding is that excitatory synaptic feedback within a population of neurons can produce switchable, stable, sustained firing without synaptic inhibition. PMID:26824331
Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido
2016-11-08
The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in-vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present an alternative, flexible approach to the technique based on a massive tiling of networks by large-scale arrays of electrodes to monitor their activity.
NASA Astrophysics Data System (ADS)
Xie, Huijuan; Gong, Yubing; Wang, Baoying
In this paper, we numerically study the effect of channel noise on synchronization transitions induced by time delay in adaptive scale-free Hodgkin-Huxley neuronal networks with spike-timing-dependent plasticity (STDP). It is found that synchronization transitions by time delay vary as channel noise intensity is changed and become most pronounced when channel noise intensity is optimal. This phenomenon depends on STDP and network average degree, and it can be either enhanced or suppressed as network average degree increases, depending on channel noise intensity. These results show that there are optimal values of channel noise intensity and network average degree that can enhance the synchronization transitions by time delay in the adaptive neuronal networks. These findings could be helpful for a better understanding of the regulatory effect of channel noise on the synchronization of neuronal networks, and they may have implications for information transmission in neural systems.
Cooperation-Controlled Learning for Explicit Class Structure in Self-Organizing Maps
Kamimura, Ryotaro
2014-01-01
We attempt to demonstrate the effectiveness of multiple points of view toward neural networks. By restricting ourselves to two points of view of a neuron, we propose a new type of information-theoretic method called “cooperation-controlled learning.” In this method, individual and collective neurons are distinguished from one another, and we suppose that the characteristics of individual and collective neurons are different. To implement individual and collective neurons, we prepare two networks, namely, cooperative and uncooperative networks. The roles of these networks and the roles of individual and collective neurons are controlled by the cooperation parameter. As the parameter is increased, the role of cooperative networks becomes more important in learning, and the characteristics of collective neurons become more dominant. On the other hand, when the parameter is small, individual neurons play a more important role. We applied the method to the automobile and housing data from the machine learning database and examined whether explicit class boundaries could be obtained. Experimental results showed that cooperation-controlled learning, in particular taking into account information on input units, could be used to produce clearer class structure than conventional self-organizing maps. PMID:25309950
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
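The network itself is not reproduced here, but the spike-as-sample idea has a conventional analogue in sampling-based filtering; the sketch below runs a bootstrap particle filter on a toy two-state hidden Markov model with Gaussian observations (all numbers are assumptions), so that the fraction of particles in each state approximates the posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-state hidden Markov model: the hidden state switches with
# probability 0.05 per step; observations are the state mean plus Gaussian noise.
T = 200
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])                 # transition matrix
means, sigma = np.array([0.0, 1.0]), 0.7

states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=A[states[t - 1]])
obs = means[states] + sigma * rng.normal(size=T)

# Bootstrap particle filter: each "particle" is a sampled hidden state, and
# the fraction of particles in a state approximates the posterior -- the
# conventional analogue of the spike-as-sample idea, not the paper's network.
n_particles = 500
particles = rng.choice(2, size=n_particles)
posterior = np.zeros((T, 2))
for t in range(T):
    particles = np.array([rng.choice(2, p=A[p]) for p in particles])  # propagate
    w = np.exp(-0.5 * ((obs[t] - means[particles]) / sigma) ** 2)     # likelihood weights
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, p=w)          # resample
    posterior[t] = np.bincount(particles, minlength=2) / n_particles
print("state decoding accuracy:", np.mean(posterior.argmax(axis=1) == states))
```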
Collective Dynamics for Heterogeneous Networks of Theta Neurons
NASA Astrophysics Data System (ADS)
Luke, Tanushree
Collective behavior in neural networks has often been used as an indicator of communication between different brain areas. These collective synchronization and desynchronization patterns are also considered an important feature in understanding normal and abnormal brain function. To understand the emergence of these collective patterns, I create an analytic model that identifies all such macroscopic steady-states attainable by a network of Type-I neurons. This network, whose basic unit is the model "theta" neuron, contains a mixture of excitable and spiking neurons coupled via a smooth pulse-like synapse. Applying the Ott-Antonsen reduction method in the thermodynamic limit, I obtain a low-dimensional evolution equation that describes the asymptotic dynamics of the macroscopic mean field of the network. This model can be used as the basis for understanding more complicated neuronal networks when additional dynamical features are included. From this reduced dynamical equation for the mean field, I show that the network exhibits three collective attracting steady-states. The first two are equilibrium states that both reflect partial synchronization in the network, whereas the third is a limit cycle in which the degree of network synchronization oscillates in time. In addition to a comprehensive identification of all possible attracting macro-states, this analytic model permits a complete bifurcation analysis of the collective behavior of the network with respect to three key network features: the degree of excitability of the neurons, the heterogeneity of the population, and the overall coupling strength. The network typically tends towards the two macroscopic equilibrium states when the neurons' intrinsic dynamics and the network interactions reinforce each other. In contrast, the limit cycle state, bifurcations, and multistability tend to occur when there is competition between these network features. I also outline here an extension of the above model where the neurons' excitability now varies in time sinusoidally, thus simulating a parabolic bursting network. This time-varying excitability can lead to the emergence of macroscopic chaos and multistability in the collective behavior of the network. Finally, I expand the single population model described above to examine a two-population neuronal network where each population has its own unique mixture of excitable and spiking neurons, as well as its own coupling strength (either excitatory or inhibitory in nature). Specifically, I consider the situation where the first population is only allowed to influence the second population without any feedback, thus effectively creating a feed-forward "driver-response" system. In this special arrangement, the driver's asymptotic macroscopic dynamics are fully explored in the comprehensive analysis of the single population. Then, in the presence of an influence from the driver, the modified dynamics of the second population, which now acts as a response population, can also be fully analyzed. As in the time-varying model, these modifications give rise to richer dynamics in the response population than those found from the single population formalism, including multi-periodicity and chaos.
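For concreteness, the sketch below integrates a small network of theta neurons with a smooth, all-to-all pulse-like synapse and tracks the Kuramoto order parameter, the kind of macroscopic mean field whose asymptotic dynamics the Ott-Antonsen reduction describes. The excitability distribution, coupling strength, and pulse sharpness are illustrative assumptions rather than the values used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Network of theta neurons with a smooth, all-to-all pulse-like synapse.
# All parameter values below are illustrative assumptions.
N = 500
eta = 0.2 + 0.05 * np.clip(rng.standard_cauchy(N), -50, 50)  # heterogeneous excitabilities (tails clipped)
k = 1.5                                                      # overall coupling strength
n = 2                                                        # pulse sharpness
dt, steps = 0.01, 20000

theta = rng.uniform(-np.pi, np.pi, N)
order = np.zeros(steps)
for t in range(steps):
    pulse = (1.0 - np.cos(theta)) ** n                       # pulse emitted near theta = pi (spike)
    drive = k * pulse.mean()                                 # mean-field synaptic input
    dtheta = (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * (eta + drive)
    theta = theta + dt * dtheta
    order[t] = np.abs(np.exp(1j * theta).mean())             # Kuramoto order parameter

print(f"mean order parameter over the second half of the run: {order[steps // 2:].mean():.3f}")
```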
Jagasia, Ravi; Steib, Kathrin; Englberger, Elisabeth; Herold, Sabine; Faus-Kessler, Theresa; Saxe, Michael; Gage, Fred H.; Song, Hongjun; Lie, D. Chichung
2009-01-01
Survival and integration of new neurons in the hippocampal circuit are rate-limiting steps in adult hippocampal neurogenesis. Neuronal network activity is a major regulator of these processes, yet little is known about the respective downstream signalling pathways. Here, we investigate the role of CREB signalling in adult hippocampal neurogenesis. CREB is activated in new granule neurons during a distinct developmental period. Loss of CREB function in a cell-autonomous fashion impairs dendritic development, decreases the expression of the neurogenic transcription factor NeuroD and of the neuronal microtubule associated protein, DCX, and compromises the survival of newborn neurons. In addition, GABA-mediated excitation regulates CREB activation at early developmental stages. Importantly, developmental defects following loss of GABA-mediated excitation can be compensated by enhanced CREB signalling. These results indicate that CREB signalling is a central pathway in adult hippocampal neurogenesis, regulating the development and survival of new hippocampal neurons downstream of GABA-mediated excitation. PMID:19553437
Zhang, Xiaoyu; Ju, Han; Penney, Trevor B; VanDongen, Antonius M J
2017-01-01
Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher's discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.
Inference of neuronal network spike dynamics and topology from calcium imaging data
Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof
2013-01-01
Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence (“spike trains”) from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties. PMID:24399936
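As a concrete illustration of the forward model such a simulation framework rests on, the sketch below generates a Poisson spike train, convolves it with a stereotypical calcium-transient kernel, adds noise to reach a target SNR, and resamples at an imaging acquisition rate. The kernel amplitude, time constants, and SNR definition are generic assumptions, not the parameters of the published framework.

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward model: spikes -> calcium transients -> noisy fluorescence frames.
duration, dt = 60.0, 0.001            # seconds, simulation step
rate_hz, snr, acq_rate = 0.5, 5.0, 30.0

t = np.arange(0.0, duration, dt)
spikes = rng.random(t.size) < rate_hz * dt

tau_rise, tau_decay, amp = 0.01, 1.0, 0.1        # s, s, dF/F per action potential (assumed)
kernel_t = np.arange(0.0, 5.0, dt)
kernel = amp * (1 - np.exp(-kernel_t / tau_rise)) * np.exp(-kernel_t / tau_decay)
dff = np.convolve(spikes.astype(float), kernel)[: t.size]

noise_sd = amp / snr                              # SNR defined per transient amplitude
noisy = dff + noise_sd * rng.standard_normal(t.size)

# Downsample to the imaging acquisition rate by block averaging
block = int(round(1.0 / (acq_rate * dt)))
n_frames = t.size // block
frames = noisy[: n_frames * block].reshape(n_frames, block).mean(axis=1)
print(f"{spikes.sum()} spikes -> {n_frames} imaging frames at {acq_rate} Hz")
```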
Autaptic effects on synchrony of neurons coupled by electrical synapses
NASA Astrophysics Data System (ADS)
Kim, Youngtae
2017-07-01
In this paper, we numerically study the effects of a special synapse known as the autapse on the synchronization of populations of Morris-Lecar (ML) neurons coupled by electrical synapses. Several configurations of the ML neuronal populations, such as a pair, a ring, or a globally coupled network, with and without autapses, are examined. While most papers on autaptic effects on synchronization have used networks of neurons with the same spiking rate, we use networks of neurons with different spiking rates. We find that optimal autaptic coupling strength and autaptic time delay enhance synchronization in our neural networks. We use phase response curve analysis to explain the synchronization enhancement by autapses. Our findings reveal the important relationship between the intraneuronal feedback loop and the interneuronal coupling.
Phase-locking and bistability in neuronal networks with synaptic depression
NASA Astrophysics Data System (ADS)
Akcay, Zeynep; Huang, Xinxian; Nadim, Farzan; Bose, Amitabha
2018-02-01
We consider a recurrent network of two oscillatory neurons that are coupled with inhibitory synapses. We use the phase response curves of the neurons and the properties of short-term synaptic depression to define Poincaré maps for the activity of the network. The fixed points of these maps correspond to phase-locked modes of the network. Using these maps, we analyze the conditions that allow short-term synaptic depression to lead to the existence of bistable phase-locked, periodic solutions. We show that bistability arises when either the phase response curve of the neuron or the short-term depression profile changes steeply enough. The results apply to any Type I oscillator and we illustrate our findings using the Quadratic Integrate-and-Fire and Morris-Lecar neuron models.
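The Poincaré-map idea can be illustrated with a one-dimensional firing-time (phase) map built from a phase response curve, iterated to locate phase-locked fixed points. The PRC shape and intrinsic phase offset below are generic assumptions chosen for illustration; the paper's maps additionally incorporate short-term synaptic depression, which is omitted here.

```python
import numpy as np

# Iterate a one-dimensional phase map built from a phase response curve (PRC).
# Fixed points of the map correspond to phase-locked modes of a two-cell network.

def prc(phi, a=0.2):
    """Generic Type I-like PRC: purely positive, vanishing at phases 0 and 1."""
    return a * np.sin(np.pi * phi) ** 2

delta = 0.1   # assumed intrinsic phase advance per cycle (illustrative)

def phase_map(phi):
    """Phase of the follower cell at the next firing of the leader cell."""
    return (phi + delta - prc(phi)) % 1.0

phi = 0.8
for _ in range(200):                       # iterate the map towards a fixed point
    phi = phase_map(phi)
print(f"phase-locked solution near phi* = {phi:.3f}")

# A fixed point phi* satisfies phase_map(phi*) = phi*; its stability is given by
# |d(phase_map)/dphi| at phi*, which can be checked numerically or analytically.
```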
An NV-Diamond Magnetic Imager for Neuroscience
NASA Astrophysics Data System (ADS)
Turner, Matthew; Schloss, Jennifer; Bauch, Erik; Hart, Connor; Walsworth, Ronald
2017-04-01
We present recent progress towards imaging time-varying magnetic fields from neurons using nitrogen-vacancy centers in diamond. The diamond neuron imager is noninvasive, label-free, and achieves single-cell resolution and state-of-the-art broadband sensitivity. By imaging magnetic fields from injected currents in mammalian neurons, we will map functional neuronal network connections and illuminate biophysical properties of neurons invisible to traditional electrophysiology. Furthermore, through enhancing magnetometer sensitivity, we aim to demonstrate real-time imaging of action potentials from networks of mammalian neurons.
NASA Astrophysics Data System (ADS)
Zhu, Xiaoliang; Du, Li; Liu, Bendong; Zhe, Jiang
2016-06-01
We present a method based on an electrochemical sensor array and a back-propagation artificial neural network for detection and quantification of four properties of lubrication oil, namely water (0, 500 ppm, 1000 ppm), total acid number (TAN) (13.1, 13.7, 14.4, 15.6 mg KOH g-1), soot (0, 1%, 2%, 3%) and sulfur content (1.3%, 1.37%, 1.44%, 1.51%). The sensor array, consisting of four micromachined electrochemical sensors, detects the four properties with overlapping sensitivities. A total of 36 oil samples containing mixtures of water, soot, and sulfuric acid with different concentrations were prepared for testing. The sensor array's responses were then divided into three sets: a training set (80% of the data), a validation set (10%) and a testing set (10%). Several back-propagation artificial neural network architectures were trained with the training and validation sets; one architecture with four input neurons, 50 and 5 neurons in the first and second hidden layer, and four neurons in the output layer was selected. The selected neural network was then tested using the testing data (10%). Test results demonstrated that the developed artificial neural network is able to quantitatively determine the four lubrication properties (water, TAN, soot, and sulfur content) with a maximum prediction error of 18.8%, 6.0%, 6.7%, and 5.4%, respectively, indicating a good match between the target and predicted values. With the developed network, the sensor array could potentially be used for online lubricant oil condition monitoring.
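To make the reported architecture concrete, here is a minimal sketch of a back-propagation regressor with the same layout (4 inputs, hidden layers of 50 and 5 neurons, 4 outputs) trained on synthetic data that stands in for the sensor-array responses. The synthetic mixing model, sample count, noise level, and the use of scikit-learn are assumptions for illustration; they are not the authors' data or code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic stand-in: 4 sensor responses with overlapping sensitivities to
# 4 normalized oil properties (water, TAN, soot, sulfur). All values assumed.
n_samples = 360
properties = rng.uniform(0.0, 1.0, size=(n_samples, 4))        # normalized targets
mixing = rng.uniform(0.2, 1.0, size=(4, 4))                    # overlapping sensitivities
sensors = properties @ mixing + 0.02 * rng.standard_normal((n_samples, 4))

X_train, X_test, y_train, y_test = train_test_split(
    sensors, properties, test_size=0.2, random_state=0)

# Architecture mirroring the paper: 4 inputs, hidden layers of 50 and 5 units, 4 outputs
net = MLPRegressor(hidden_layer_sizes=(50, 5), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X_train, y_train)

pred = net.predict(X_test)
mae = np.mean(np.abs(pred - y_test))
print(f"mean absolute error on held-out samples: {mae:.3f} (normalized units)")
```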
Microglia in CNS development: Shaping the brain for the future.
Mosser, Coralie-Anne; Baptista, Sofia; Arnoux, Isabelle; Audinat, Etienne
Microglial cells are the resident macrophages of the central nervous system (CNS) and are mainly known for their roles in neuropathologies. However, major recent developments have revealed that these immune cells actively interact with neurons in physiological conditions and can modulate the fate and functions of synapses. Originating from myeloid precursors born in the yolk sac, microglial cells invade the CNS during early embryonic development. As a consequence, they can potentially influence neuronal proliferation, migration and differentiation as well as the formation and maturation of neuronal networks, thereby contributing to the overall shaping of the CNS. We review here recent evidence indicating that microglial cells are indeed involved in crucial steps of CNS development, including neuronal survival and apoptosis, axonal growth, migration of neurons, pruning of supernumerary synapses and functional maturation of developing synapses. We also discuss current hypotheses proposing that diverting microglial cells from their physiological functions, by promoting the expression of an immune phenotype during development, may be central to neurodevelopmental disorders such as autism, schizophrenia and epilepsy.
Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J
2016-11-01
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
Excitement and synchronization of small-world neuronal networks with short-term synaptic plasticity.
Han, Fang; Wiercigroch, Marian; Fang, Jian-An; Wang, Zhijie
2011-10-01
Excitement and synchronization of electrically and chemically coupled Newman-Watts (NW) small-world neuronal networks with a short-term synaptic plasticity described by a modified Oja learning rule are investigated. For each type of neuronal network, the variation properties of synaptic weights are examined first. Then the effects of the learning rate, the coupling strength and the shortcut-adding probability on excitement and synchronization of the neuronal network are studied. It is shown that the synaptic learning suppresses the over-excitement, helps synchronization for the electrically coupled network but impairs synchronization for the chemically coupled one. Both the introduction of shortcuts and the increase of the coupling strength improve synchronization and they are helpful in increasing the excitement for the chemically coupled network, but have little effect on the excitement of the electrically coupled one.
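The synaptic learning in this study is based on a modified Oja rule; for reference, the sketch below implements the classical Oja rule, which performs Hebbian learning with an implicit weight normalization. The input statistics and learning rate are illustrative assumptions, and the paper's modification for short-term plasticity is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical Oja rule: dw = eta * y * (x - y * w). It extracts the principal
# component of the inputs while keeping the weight norm bounded.
def oja_update(w, x, eta=0.01):
    y = np.dot(w, x)              # postsynaptic activity (linear neuron)
    return w + eta * y * (x - y * w)

# Correlated presynaptic inputs with a dominant direction (assumed statistics)
C = np.array([[3.0, 1.5],
              [1.5, 1.0]])
L = np.linalg.cholesky(C)
w = rng.standard_normal(2)
for _ in range(20000):
    x = L @ rng.standard_normal(2)
    w = oja_update(w, x)

print("learned weights:", np.round(w, 3), "| norm ->", round(float(np.linalg.norm(w)), 3))
# The norm converges near 1 and w aligns with the leading eigenvector of C.
```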
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
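The quantity that the population density approach evolves can be visualized with the brute-force Monte Carlo baseline it is meant to replace: simulate many uncoupled leaky integrate-and-fire neurons receiving excitatory Poisson input and histogram their membrane potentials. The sketch below does exactly that; all parameter values are illustrative assumptions, and the paper's contribution, the dimension reduction of the corresponding density equations, is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo picture of the population density: many identical, uncoupled LIF
# neurons driven by excitatory Poisson input; the membrane-potential histogram
# approximates the density that population-density methods evolve directly.
N, dt, T = 20000, 0.1e-3, 0.5           # neurons, step (s), duration (s)
tau_m, v_rest, v_thresh, v_reset = 20e-3, 0.0, 1.0, 0.0
rate_in, jump = 800.0, 0.05             # input rate (Hz), EPSP jump per input spike (assumed)

v = np.full(N, v_rest)
spike_count = 0
for _ in range(int(T / dt)):
    kicks = rng.poisson(rate_in * dt, size=N) * jump      # excitatory synaptic input
    v += dt / tau_m * (v_rest - v) + kicks
    fired = v >= v_thresh
    spike_count += fired.sum()
    v[fired] = v_reset

hist, edges = np.histogram(v, bins=50, range=(v_reset, v_thresh), density=True)
print(f"population firing rate ~ {spike_count / (N * T):.1f} Hz")
print("density near threshold:", np.round(hist[-3:], 2))
```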
Bondarenko, Vladimir E; Cymbalyuk, Gennady S; Patel, Girish; Deweerth, Stephen P; Calabrese, Ronald L
2004-12-01
Oscillatory activity in the central nervous system is associated with various functions, like motor control, memory formation, binding, and attention. Quasiperiodic oscillations are rarely discussed in the neurophysiological literature yet they may play a role in the nervous system both during normal function and disease. Here we use a physical system and a model to explore scenarios for how quasiperiodic oscillations might arise in neuronal networks. An oscillatory system of two mutually inhibitory neuronal units is a ubiquitous network module found in nervous systems and is called a half-center oscillator. Previously we created a half-center oscillator of two identical oscillatory silicon (analog Very Large Scale Integration) neurons and developed a mathematical model describing its dynamics. In the mathematical model, we have shown that an in-phase limit cycle becomes unstable through a subcritical torus bifurcation. However, the existence of this torus bifurcation in experimental silicon two-neuron system was not rigorously demonstrated or investigated. Here we demonstrate the torus predicted by the model for the silicon implementation of a half-center oscillator using complex time series analysis, including bifurcation diagrams, mapping techniques, correlation functions, amplitude spectra, and correlation dimensions, and we investigate how the properties of the quasiperiodic oscillations depend on the strengths of coupling between the silicon neurons. The potential advantages and disadvantages of quasiperiodic oscillations (torus) for biological neural systems and artificial neural networks are discussed.
Convergent neuromodulation onto a network neuron can have divergent effects at the network level
Kintos, Nickolas; Nusbaum, Michael P.; Nadim, Farzan
2016-01-01
Different neuromodulators often target the same ion channel. When such modulators act on different neuron types, this convergent action can enable a rhythmic network to produce distinct outputs. Less clear are the functional consequences when two neuromodulators influence the same ion channel in the same neuron. We examine the consequences of this seeming redundancy using a mathematical model of the crab gastric mill (chewing) network. This network is activated in vitro by the projection neuron MCN1, which elicits a half-center bursting oscillation between the reciprocally-inhibitory neurons LG and Int1. We focus on two neuropeptides which modulate this network, including a MCN1 neurotransmitter and the hormone crustacean cardioactive peptide (CCAP). Both activate the same voltage-gated current (IMI) in the LG neuron. However, IMI-MCN1, resulting from MCN1 released neuropeptide, has phasic dynamics in its maximal conductance due to LG presynaptic inhibition of MCN1, while IMI-CCAP retains the same maximal conductance in both phases of the gastric mill rhythm. Separation of time scales allows us to produce a 2D model from which phase plane analysis shows that, as in the biological system, IMI-MCN1 and IMI-CCAP primarily influence the durations of opposing phases of this rhythm. Furthermore, IMI-MCN1 influences the rhythmic output in a manner similar to the Int1-to-LG synapse, whereas IMI-CCAP has an influence similar to the LG-to-Int1 synapse. These results show that distinct neuromodulators which target the same voltage-gated ion channel in the same network neuron can nevertheless produce distinct effects at the network level, providing divergent neuromodulator actions on network activity. PMID:26798029
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
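For context, the sketch below computes standard NEF-style decoders for a population of rate neurons by regularized least squares, the "optimization problem requiring large matrix inversion" that the paper's analytic scale-invariant decoders avoid. The rectified-linear tuning curves, gain and bias ranges, and regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Standard NEF decoders via regularized least squares over a sampled variable x.
N = 200                                     # neurons
x = np.linspace(-1.0, 1.0, 400)             # represented scalar variable
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
encoders = rng.choice([-1.0, 1.0], N)

# Rectified-linear tuning curves, shape (len(x), N)
A = np.maximum(0.0, gains * (x[:, None] * encoders) + biases)

target = x ** 2                              # function to decode from the population
reg = 0.1 * A.max()                          # regularization scale (assumed)
G = A.T @ A + reg ** 2 * len(x) * np.eye(N)  # regularized Gram matrix
d = np.linalg.solve(G, A.T @ target)         # decoders ("large matrix inversion")

rmse = np.sqrt(np.mean((A @ d - target) ** 2))
print(f"decoded x^2 with RMSE {rmse:.4f} using {N} neurons")
```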
The transfer and transformation of collective network information in gene-matched networks.
Kitsukawa, Takashi; Yagi, Takeshi
2015-10-09
Networks, such as the human society network, social and professional networks, and biological system networks, contain vast amounts of information. Information signals in networks are distributed over nodes and transmitted through intricately wired links, making the transfer and transformation of such information difficult to follow. Here we introduce a novel method for describing network information and its transfer using a model network, the Gene-matched network (GMN), in which nodes (neurons) possess attributes (genes). In the GMN, nodes are connected according to their expression of common genes. Because neurons have multiple genes, the GMN is cluster-rich. We show that, in the GMN, information transfer and transformation were controlled systematically, according to the activity level of the network. Furthermore, information transfer and transformation could be traced numerically with a vector using genes expressed in the activated neurons, the active-gene array, which was used to assess the relative activity among overlapping neuronal groups. Interestingly, this coding style closely resembles the cell-assembly neural coding theory. The method introduced here could be applied to many real-world networks, since many systems, including human society and various biological systems, can be represented as a network of this type.
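A minimal construction of such a gene-matched network is sketched below: each node is assigned a small random set of genes and two nodes are linked whenever they share at least one gene, which naturally yields overlapping, cluster-rich groups; an "active-gene array" is then simply the summed expression vector of the currently active nodes. The gene-pool size, genes per node, and random assignment are illustrative assumptions rather than the paper's construction details.

```python
import numpy as np

rng = np.random.default_rng(8)

# Gene-matched network (GMN) sketch: connect nodes that share at least one gene.
n_nodes, n_genes, genes_per_node = 300, 60, 4    # assumed sizes

# Binary expression matrix: row i lists which genes node i expresses
expr = np.zeros((n_nodes, n_genes), dtype=bool)
for i in range(n_nodes):
    expr[i, rng.choice(n_genes, size=genes_per_node, replace=False)] = True

shared = expr.astype(int) @ expr.astype(int).T           # number of shared genes per pair
adjacency = (shared > 0) & ~np.eye(n_nodes, dtype=bool)  # link if any gene is shared

print(f"mean degree: {adjacency.sum(axis=1).mean():.1f}")

# "Active-gene array": summed expression of the currently active nodes, which can
# be used to trace which overlapping gene-defined groups carry the activity.
active = rng.choice(n_nodes, size=30, replace=False)
active_gene_array = expr[active].sum(axis=0)
print("most active genes:", np.argsort(active_gene_array)[-5:][::-1])
```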
Artificial synapse network on inorganic proton conductor for neuromorphic systems.
Zhu, Li Qiang; Wan, Chang Jin; Guo, Li Qiang; Shi, Yi; Wan, Qing
2014-01-01
The basic units of our brain are neurons, and each neuron has more than 1,000 synaptic connections. The synapse is the basic structure for information transfer in an ever-changing manner, and short-term plasticity allows synapses to perform critical computational functions in neural circuits. Therefore, a major challenge for the hardware implementation of neuromorphic computation is to develop artificial synapse networks. Here, in-plane, laterally coupled, oxide-based artificial synapse networks coupled by proton neurotransmitters are self-assembled on glass substrates at room temperature. A strong lateral modulation is observed due to the proton-related electrical-double-layer effect. Short-term plasticity behaviours, including paired-pulse facilitation, dynamic filtering and spatiotemporally correlated signal processing, are mimicked. The laterally coupled oxide-based protonic/electronic hybrid artificial synapse network proposed here is of interest for building future neuromorphic systems.
Neonatal ghrelin programs development of hypothalamic feeding circuits
Steculorum, Sophie M.; Collden, Gustav; Coupe, Berengere; Croizier, Sophie; Lockie, Sarah; Andrews, Zane B.; Jarosch, Florian; Klussmann, Sven; Bouret, Sebastien G.
2015-01-01
A complex neural network regulates body weight and energy balance, and dysfunction in the communication between the gut and this neural network is associated with metabolic diseases, such as obesity. The stomach-derived hormone ghrelin stimulates appetite through interactions with neurons in the arcuate nucleus of the hypothalamus (ARH). Here, we evaluated the physiological and neurobiological contribution of ghrelin during development by specifically blocking ghrelin action during early postnatal development in mice. Ghrelin blockade in neonatal mice resulted in enhanced ARH neural projections and long-term metabolic effects, including increased body weight, visceral fat, and blood glucose levels and decreased leptin sensitivity. In addition, chronic administration of ghrelin during postnatal life impaired the normal development of ARH projections and caused metabolic dysfunction. Consistent with these observations, direct exposure of postnatal ARH neuronal explants to ghrelin blunted axonal growth and blocked the neurotrophic effect of the adipocyte-derived hormone leptin. Moreover, chronic ghrelin exposure in neonatal mice also attenuated leptin-induced STAT3 signaling in ARH neurons. Collectively, these data reveal that ghrelin plays an inhibitory role in the development of hypothalamic neural circuits and suggest that proper expression of ghrelin during neonatal life is pivotal for lifelong metabolic regulation. PMID:25607843
Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon
2018-07-01
We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle."
Genetically encoded proton sensors reveal activity-dependent pH changes in neurons.
Raimondo, Joseph V; Irkle, Agnese; Wefelmeyer, Winnie; Newey, Sarah E; Akerman, Colin J
2012-01-01
The regulation of hydrogen ion concentration (pH) is fundamental to cell viability, metabolism, and enzymatic function. Within the nervous system, the control of pH is also involved in diverse and dynamic processes including development, synaptic transmission, and the control of network excitability. As pH affects neuronal activity, and can also itself be altered by neuronal activity, the existence of tools to accurately measure hydrogen ion fluctuations is important for understanding the role pH plays under physiological and pathological conditions. Outside of their use as a marker of synaptic release, genetically encoded pH sensors have not been utilized to study hydrogen ion fluxes associated with network activity. By combining whole-cell patch clamp with simultaneous two-photon or confocal imaging, we quantified the amplitude and time course of neuronal, intracellular, acidic transients evoked by epileptiform activity in two separate in vitro models of temporal lobe epilepsy. In doing so, we demonstrate the suitability of three genetically encoded pH sensors: deGFP4, E(2)GFP, and Cl-sensor for investigating activity-dependent pH changes at the level of single neurons.
Kuhn, Peer-Hendrik; Colombo, Alessio Vittorio; Schusser, Benjamin; Dreymueller, Daniela; Wetzel, Sebastian; Schepers, Ute; Herber, Julia; Ludwig, Andreas; Kremmer, Elisabeth; Montag, Dirk; Müller, Ulrike; Schweizer, Michaela; Saftig, Paul; Bräse, Stefan; Lichtenthaler, Stefan F
2016-01-01
Metzincin metalloproteases have major roles in intercellular communication by modulating the function of membrane proteins. One of these proteases is a-disintegrin-and-metalloprotease 10 (ADAM10), which acts as the alpha-secretase of the Alzheimer's disease amyloid precursor protein. ADAM10 is also required for neuronal network functions in the murine brain, but neuronal ADAM10 substrates are only partly known. With a proteomic analysis of Adam10-deficient neurons we identified 91 ADAM10 substrate candidates, most of them novel, making ADAM10 a major protease for membrane proteins in the nervous system. Several novel substrates, including the neuronal cell adhesion protein NrCAM, are involved in brain development. Indeed, we detected mistargeted axons in the olfactory bulb of conditional ADAM10-/- mice, which correlate with reduced cleavage of NrCAM, NCAM and other ADAM10 substrates. In summary, the novel ADAM10 substrates provide a molecular basis for neuronal network dysfunctions in conditional ADAM10-/- mice and demonstrate a fundamental function of ADAM10 in the brain. DOI: http://dx.doi.org/10.7554/eLife.12748.001 PMID:26802628
2013-01-01
The capability of the brain to change functionally in response to sensory experience is most active during early stages of development but it decreases later in life when major alterations of neuronal network structures no longer take place in response to experience. This view has been recently challenged by experimental strategies based on the enhancement of environmental stimulation levels, genetic manipulations, and pharmacological treatments, which all have demonstrated that the adult brain retains a degree of plasticity that allows for a rewiring of neuronal circuitries over the entire life course. A hot spot in the field of neuronal plasticity centres on gene programs that underlie plastic phenomena in adulthood. Here, I discuss the role of the recently discovered neuronal-specific and activity-dependent transcription factor NPAS4 as a critical mediator of plasticity in the nervous system. A better understanding of how modifications in the connectivity of neuronal networks occur may shed light on the treatment of pathological conditions such as brain damage or disease in adult life, some of which were once considered untreatable. PMID:24024041
NEURONAL ACTION ON THE DEVELOPING BLOOD VESSEL PATTERN
James, Jennifer M.; Mukouyama, Yoh-suke
2011-01-01
The nervous system relies on a highly specialized network of blood vessels for development and neuronal survival. Recent evidence suggests that both the central and peripheral nervous systems (CNS and PNS) employ multiple mechanisms to shape the vascular tree to meet their specific metabolic demands, such as promoting nerve-artery alignment in the PNS or the development of the blood-brain barrier in the CNS. In this article we discuss how the nervous system directly influences blood vessel patterning, resulting in neuro-vascular congruence that is maintained throughout development and in the adult. PMID:21978864
Rothkegel, Alexander; Lehnertz, Klaus
2009-03-01
We investigate numerically the collective dynamical behavior of pulse-coupled nonleaky integrate-and-fire neurons that are arranged on a two-dimensional small-world network. To ensure ongoing activity, we impose a probability for spontaneous firing for each neuron. We study network dynamics evolving from different sets of initial conditions as a function of coupling strength and rewiring probability. Besides a homogeneous equilibrium state for low coupling strength, we observe different local patterns, including cyclic waves, spiral waves, and turbulent-like patterns, which, depending on network parameters, interfere with the global collective firing of the neurons. We attribute the various network dynamics to distinct regimes in the parameter space. For the same network parameters, different network dynamics can be observed depending on the set of initial conditions only. Such multistable behavior and the interplay between local pattern formation and global collective firing may be attributable to the spatiotemporal dynamics of biological networks.
Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.
Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T
2015-12-01
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
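The sketch below shows one plausible form of such a proxy: a fixed weighted sum of the absolute population AMPA and GABA currents, with the inhibitory contribution delayed relative to the excitatory one. The surrogate currents, the weight, and the delay below are illustrative assumptions and not the fitted coefficients reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# One plausible "LFP proxy" from point-neuron LIF output: a fixed weighted sum
# of the population AMPA and GABA currents, with the GABA term delayed.
dt = 1.0                                    # ms
t = np.arange(0, 1000, dt)

# Surrogate population synaptic currents (a network oscillation plus noise)
i_ampa = 0.5 * np.sin(2 * np.pi * t / 100.0) + 0.1 * rng.standard_normal(t.size)
i_gaba = 0.8 * np.roll(i_ampa, 3) + 0.1 * rng.standard_normal(t.size)

def lfp_proxy(i_ampa, i_gaba, weight=1.65, delay_ms=6.0, dt=1.0):
    """Weighted sum of absolute synaptic currents with the GABA current delayed.

    weight and delay_ms are assumed values, not the paper's fitted parameters.
    """
    shift = int(round(delay_ms / dt))
    gaba_delayed = np.concatenate([np.full(shift, i_gaba[0]), i_gaba[:-shift]])
    return np.abs(i_ampa) + weight * np.abs(gaba_delayed)

lfp = lfp_proxy(i_ampa, i_gaba)
print(f"proxy LFP computed for {lfp.size} samples; sd = {lfp.std():.3f}")
```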
Galeazzi, Juan M; Navajas, Joaquín; Mender, Bedeho M W; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M
2016-01-01
Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant's gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.
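The binding mechanism invoked here, trace learning, can be written compactly: the weight update is gated by a temporally low-pass-filtered trace of the postsynaptic activity, so inputs that occur close together in time (such as successive retinal views within one fixation sequence) are drawn onto the same output neurons. The sketch below implements one common form of the rule on toy "views" of a few objects; the rule variant, competition scheme, and all parameters are illustrative assumptions rather than the VisNet configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

# Trace learning: w += alpha * ybar_t * x_t, with the trace
# ybar_t = (1 - eta) * y_t + eta * ybar_{t-1}. All parameters assumed.
n_in, n_out = 100, 10
alpha, eta = 0.01, 0.8

W = rng.uniform(0.0, 0.1, size=(n_out, n_in))
trace = np.zeros(n_out)

def normalize_rows(W):
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# A "sequence" of inputs: consecutive inputs are noisy views of the same object
base = rng.random((5, n_in))                    # 5 objects
for epoch in range(50):
    for obj in range(5):
        trace[:] = 0.0                          # reset the trace between objects
        for view in range(8):
            x = base[obj] + 0.05 * rng.standard_normal(n_in)
            y = W @ x
            y = np.where(y >= np.sort(y)[-2], y, 0.0)   # soft winner-take-all (top 2)
            trace = (1 - eta) * y + eta * trace
            W += alpha * np.outer(trace, x)
            W = normalize_rows(W)

# After learning, different views of one object should activate the same outputs
resp = normalize_rows(W) @ base.T
print("preferred output neuron per object:", resp.argmax(axis=0))
```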
Chimera states in two-dimensional networks of locally coupled oscillators
NASA Astrophysics Data System (ADS)
Kundu, Srilena; Majhi, Soumen; Bera, Bidesh K.; Ghosh, Dibakar; Lakshmanan, M.
2018-02-01
A chimera state is a mixed type of collective state in which synchronized and desynchronized subpopulations of a network of coupled oscillators coexist, and the appearance of such anomalous behavior has strong connections to diverse aspects of neuronal development. Previous studies on chimera states have rarely considered two-dimensional ensembles of coupled oscillators with the nonlinear coupling functions found in neuronal systems, although such ensembles are more realistic from a neurobiological point of view. In this paper, we report the emergence and existence of chimera states in locally coupled two-dimensional networks of identical oscillators in which each node interacts through a nonlinear coupling function. This is in contrast with the existence of chimera states in two-dimensional nonlocally coupled oscillators with a rectangular kernel in the coupling function. We find that the presence of nonlinearity in the coupling function plays a key role in producing chimera states in two-dimensional locally coupled oscillators. For a two-dimensional network of coupled Stuart-Landau oscillators, we verify analytically that the results obtained using the Ott-Antonsen approach and our analytical findings match the numerical results very well. Next, we consider another important type of nonlinear coupling function present in neuronal systems, namely the chemical synaptic function, through which nearest-neighbor (locally coupled) neurons interact with each other. We show that such a synaptic interaction function promotes the emergence of chimera states in two-dimensional lattices of locally coupled neuronal oscillators. In the numerical simulations, we consider two paradigmatic neuronal oscillators that exhibit bursting dynamics, namely the Hindmarsh-Rose neuron model and the Rulkov map, at each node. By examining various spatiotemporal behaviors and snapshots at particular times, we study the chimera states in detail over a large range of the coupling parameter. The existence of chimera states is confirmed by the instantaneous angular frequency, the order parameter and the strength of incoherence.
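Two of the diagnostics mentioned at the end of the abstract can be illustrated compactly: the global Kuramoto order parameter and a local, windowed order parameter that stays close to 1 over coherent regions and is small over incoherent ones. The sketch below evaluates both on a surrogate one-dimensional phase pattern with one coherent and one incoherent half; the paper itself analyzes two-dimensional lattices and additionally uses the strength of incoherence and instantaneous angular frequencies.

```python
import numpy as np

rng = np.random.default_rng(11)

# Surrogate chimera-like phase pattern: coherent half + incoherent half (assumed)
N = 256
phases = np.empty(N)
phases[: N // 2] = 0.2 * rng.standard_normal(N // 2)      # coherent region
phases[N // 2 :] = rng.uniform(-np.pi, np.pi, N // 2)     # incoherent region

def order_parameter(ph):
    """Global Kuramoto order parameter |<exp(i*theta)>|."""
    return np.abs(np.exp(1j * ph).mean())

def local_order(ph, window=16):
    """Windowed order parameter around each site."""
    out = np.empty(ph.size)
    for i in range(ph.size):
        lo, hi = max(0, i - window // 2), min(ph.size, i + window // 2)
        out[i] = order_parameter(ph[lo:hi])
    return out

R = order_parameter(phases)
R_local = local_order(phases)
print(f"global order parameter: {R:.2f}")
print(f"mean local order, coherent half: {R_local[:N//2].mean():.2f}, "
      f"incoherent half: {R_local[N//2:].mean():.2f}")
```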
NASA Astrophysics Data System (ADS)
Kamal, Hassan; Kanhirodan, Rajan; Srinivas, Kalyan V.; Sikdar, Sujit K.
2010-04-01
We study the responses of a cultured neural network exposed to an epileptogenic glutamate injury, which induces epilepsy-like activity, and to subsequent treatment with phenobarbital, by constructing a connectivity map of the neurons from the correlation matrix of their activity. This study is particularly useful for understanding drug-induced changes in neuronal network properties, offering insights into changes at the systems-biology level.
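The correlation-matrix connectivity map described here can be sketched in a few lines: correlate pairs of fluorescence traces and threshold the result to obtain a functional adjacency matrix. The surrogate traces and the threshold value below are illustrative assumptions, not the study's data or chosen parameters.

```python
import numpy as np

rng = np.random.default_rng(12)

# Functional connectivity from a correlation matrix of calcium traces.
n_neurons, n_frames = 50, 2000
shared = rng.standard_normal(n_frames)                      # common network drive
traces = 0.8 * shared + rng.standard_normal((n_neurons, n_frames))
traces[25:] += 0.8 * rng.standard_normal(n_frames)          # a second, partly distinct group

corr = np.corrcoef(traces)                                   # (n_neurons, n_neurons)
np.fill_diagonal(corr, 0.0)

threshold = 0.3                                              # assumed threshold
connectivity = corr > threshold
degree = connectivity.sum(axis=1)
print(f"mean functional degree at threshold {threshold}: {degree.mean():.1f}")
# Comparing such maps before and after a pharmacological manipulation, as in the
# study, amounts to comparing these thresholded correlation matrices.
```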
Arnold, Fiona JL; Hofmann, Frank; Bengtson, C. Peter; Wittmann, Malte; Vanhoutte, Peter; Bading, Hilmar
2005-01-01
A simplified cell culture system was developed to study neuronal plasticity. As changes in synaptic strength may alter network activity patterns, we grew hippocampal neurones on a microelectrode array (MEA) and monitored their collective behaviour with 60 electrodes simultaneously. We found that exposure of the network for 15 min to the GABAA receptor antagonist bicuculline induced an increase in synaptic efficacy at excitatory synapses that was associated with an increase in the frequency of miniature AMPA receptor-mediated EPSCs and a change in network activity from uncoordinated firing of neurones (lacking any recognizable pattern) to a highly organized, periodic and synchronous burst pattern. Induction of recurrent synchronous bursting was dependent on NMDA receptor activation and required extracellular signal-regulated kinase (ERK)1/2 signalling and translation of pre-existing mRNAs. Once induced, the burst pattern persisted for several days; its maintenance phase (> 4 h) was dependent on gene transcription taking place in a critical period of 120 min following induction. Thus, cultured hippocampal neurones display a simple, transcription and protein synthesis-dependent form of plasticity. The non-invasive nature of MEA recordings provides a significant advantage over traditional assays for synaptic connectivity (i.e. long-term potentiation in brain slices) and facilitates the search for activity-regulated genes critical for late-phase plasticity. PMID:15618268
Transcriptional Networks Controlled by NKX2-1 in the Development of Forebrain GABAergic Neurons
Sandberg, Magnus; Flandin, Pierre; Silberberg, Shanni; ...
2016-09-21
The embryonic basal ganglia generates multiple projection neurons and interneuron subtypes from distinct progenitor domains. Combinatorial interactions of transcription factors and chromatin are thought to regulate gene expression. In the medial ganglionic eminence, the NKX2-1 transcription factor controls regional identity and, with LHX6, is necessary to specify pallidal projection neurons and forebrain interneurons. Here, we dissected the molecular functions of NKX2-1 by defining its chromosomal binding, regulation of gene expression, and epigenetic state. NKX2-1 binding at distal regulatory elements led to a repressed epigenetic state and transcriptional repression in the ventricular zone. Conversely, NKX2-1 is required to establish a permissive chromatin state and transcriptional activation in the sub-ventricular and mantle zones. Moreover, combinatorial binding of NKX2-1 and LHX6 promotes transcriptionally permissive chromatin and activates genes expressed in cortical migrating interneurons. Our integrated approach gives a foundation for elucidating transcriptional networks guiding the development of the MGE and its descendants.
Ambra1 Shapes Hippocampal Inhibition/Excitation Balance: Role in Neurodevelopmental Disorders.
Nobili, Annalisa; Krashia, Paraskevi; Cordella, Alberto; La Barbera, Livia; Dell'Acqua, Maria Concetta; Caruso, Angela; Pignataro, Annabella; Marino, Ramona; Sciarra, Francesca; Biamonte, Filippo; Scattoni, Maria Luisa; Ammassari-Teule, Martine; Cecconi, Francesco; Berretta, Nicola; Keller, Flavio; Mercuri, Nicola Biagio; D'Amelio, Marcello
2018-02-27
Imbalances between excitatory and inhibitory synaptic transmission cause brain network dysfunction and are central to the pathogenesis of neurodevelopmental disorders. Parvalbumin interneurons are highly implicated in this imbalance. Here, we probed the social behavior and hippocampal function of mice carrying a haploinsufficiency for Ambra1, a pro-autophagic gene crucial for brain development. We show that heterozygous Ambra1 mice (Ambra1+/-) are characterized by loss of hippocampal parvalbumin interneurons, decreases in the inhibition/excitation ratio, and altered social behaviors that are restricted to females. Loss of parvalbumin interneurons in Ambra1+/- females is further linked to reductions of the inhibitory drive onto principal neurons and alterations in network oscillatory activity, CA1 synaptic plasticity, and pyramidal neuron spine density. Parvalbumin interneuron loss is underpinned by increased apoptosis during the embryonic development of progenitor neurons in the medial ganglionic eminence. Together, these findings identify an Ambra1-dependent mechanism that drives inhibition/excitation imbalance in the hippocampus, contributing to abnormal brain activity reminiscent of neurodevelopmental disorders.
Evolution of Osteocrin as an activity-regulated factor in the primate brain
Ataman, Bulent; Boulting, Gabriella L.; Harmin, David A.; Yang, Marty G.; Baker-Salisbury, Mollie; Yap, Ee-Lynn; Malik, Athar N.; Mei, Kevin; Rubin, Alex A.; Spiegel, Ivo; Durresi, Ershela; Sharma, Nikhil; Hu, Linda S.; Pletikos, Mihovil; Griffith, Eric C.; Partlow, Jennifer N.; Stevens, Christine R.; Adli, Mazhar; Chahrour, Maria; Sestan, Nenad; Walsh, Christopher A.; Berezovskii, Vladimir K.; Livingstone, Margaret S.; Greenberg, Michael E.
2017-01-01
Sensory stimuli drive the maturation and function of the mammalian nervous system in part through the activation of gene expression networks that regulate synapse development and plasticity. These networks have primarily been studied in mice, and it is not known whether there are species- or clade-specific activity-regulated genes that control features of brain development and function. Here we use transcriptional profiling of human fetal brain cultures to identify an activity-dependent secreted factor, Osteocrin (OSTN), that is induced by membrane depolarization of human but not mouse neurons. We find that OSTN has been repurposed in primates through the evolutionary acquisition of DNA regulatory elements that bind the activity-regulated transcription factor MEF2. In addition, we demonstrate that OSTN is expressed in primate neocortex and restricts activity-dependent dendritic growth in human neurons. These findings suggest that, in response to sensory input, OSTN regulates features of neuronal structure and function that are unique to primates. PMID:27830782
Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.
Rangan, Aaditya V; Cai, David
2007-02-01
We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
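A minimal Python sketch of the integrating-factor idea for a single conductance-based I&F neuron; parameter values are illustrative assumptions, and the spike-spike corrections and event clustering of the full method are omitted.

    import numpy as np

    def exp_euler_step(V, gE, gI, dt, gL=0.05, EL=-70.0, EE=0.0, EI=-80.0, C=1.0):
        # One exponential-Euler (integrating-factor) step for
        # C dV/dt = -gL(V-EL) - gE(V-EE) - gI(V-EI); stable even in stiff, high-conductance states.
        g_tot = gL + gE + gI
        V_inf = (gL * EL + gE * EE + gI * EI) / g_tot   # steady state for frozen conductances
        return V_inf + (V - V_inf) * np.exp(-g_tot * dt / C)

    # Simple driver with threshold/reset (no spike-spike correction)
    V, Vth, Vreset = -70.0, -50.0, -65.0
    for step in range(2000):
        gE = 0.5 * np.random.rand()   # stand-in for fluctuating excitatory conductance
        gI = 0.2 * np.random.rand()   # stand-in for fluctuating inhibitory conductance
        V = exp_euler_step(V, gE, gI, dt=0.1)
        if V >= Vth:
            V = Vreset                # fire and reset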
Chua, Yansong; Morrison, Abigail
2016-01-01
The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal to noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics on the single neuron and network level. PMID:27499740
Toma, Francesca Maria; Calura, Enrica; Rizzetto, Lisa; Carrieri, Claudia; Roncaglia, Paola; Martinelli, Valentina; Scaini, Denis; Masten, Lara; Turco, Antonio; Gustincich, Stefano; Prato, Maurizio; Ballerini, Laura
2013-01-01
In the last decade, carbon nanotube growth substrates have been used to investigate neuron and neuronal network formation in vitro when guided by artificial nano-scaled cues. In addition, nanotube-based interfaces are being developed, such as prostheses for monitoring brain activity. We recently described how carbon nanotube substrates alter the electrophysiological and synaptic responses of hippocampal neurons in culture. This observation highlighted the exceptional ability of this material to interfere with nerve tissue growth. Here we test the hypothesis that carbon nanotube scaffolds promote the development of immature neurons isolated from the neonatal rat spinal cord and maintained in vitro. To address this issue we performed electrophysiological studies associated with gene expression analysis. Our results indicate that spinal neurons plated on electro-conductive carbon nanotubes show a facilitated development. Spinal neurons anticipate the expression of functional markers of maturation, such as the generation of voltage-dependent currents or action potentials. These changes are accompanied by a selective modulation of gene expression, involving neuronal and non-neuronal components. Our microarray experiments suggest that carbon nanotube platforms trigger reparative activities involving microglia, in the absence of reactive gliosis. Hence, future tissue scaffolds blended with conductive nanotubes may be exploited to promote cell differentiation and reparative pathways in neural regeneration strategies. PMID:23951361
Gutierrez, Gabrielle J; O'Leary, Timothy; Marder, Eve
2013-03-06
Rhythmic oscillations are common features of nervous systems. One of the fundamental questions posed by these rhythms is how individual neurons or groups of neurons are recruited into different network oscillations. We modeled competing fast and slow oscillators connected to a hub neuron with electrical and inhibitory synapses. We explore the patterns of coordination shown in the network as a function of the electrical coupling and inhibitory synapse strengths with the help of a novel visualization method that we call the "parameterscape." The hub neuron can be switched between the fast and slow oscillators by multiple network mechanisms, indicating that a given change in network state can be achieved by degenerate cellular mechanisms. These results have importance for interpreting experiments employing optogenetic, genetic, and pharmacological manipulations to understand circuit dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dang, Nguyen Tuan; Akai-Kasada, Megumi; Asai, Tetsuya; Saito, Akira; Kuwahara, Yuji; Hokkaido University Collaboration
2015-03-01
Machine learning with artificial neural networks is considered one of the best ways to understand how the human brain trains itself to process information. In this study, we successfully developed programs using a supervised machine-learning algorithm. However, these supervised learning processes for the neural network required a very powerful computing configuration. Driven by the need for greater computing capability and lower power consumption, accelerator circuits become critical. To develop such accelerator circuits for supervised machine learning, a conducting-polymer micro/nanowire growth process was realized and applied as a synaptic weight controller. In this work, highly conductive polypyrrole (PPy) and poly(3,4-ethylenedioxythiophene) (PEDOT) wires were grown potentiostatically, bridging designated electrodes prefabricated by lithography, when a square-wave AC voltage of appropriate amplitude and frequency was applied. The micro/nanowire growth process emulated the neurotransmitter release of synapses in a biological neuron, and the variation of a wire's resistance during growth was taken as the variation of the synaptic weight in the machine-learning algorithm. In cooperation with the Graduate School of Information Science and Technology, Hokkaido University.
The Dynamics of Networks of Identical Theta Neurons.
Laing, Carlo R
2018-02-05
We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe/Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally-coupled oscillators. As well as the degeneracy associated with the constants of motion of the WS ansatz, we also find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of similar related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.
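As an illustrative sketch only (not the Watanabe/Strogatz reduction used in the paper), a finite all-to-all network of identical theta neurons with a smooth instantaneous coupling can be simulated directly; the pulsatile coupling form and the parameters eta and k below are assumptions.

    import numpy as np

    def simulate_theta_network(N=100, T=200.0, dt=0.01, eta=0.5, k=1.0, seed=0):
        rng = np.random.default_rng(seed)
        theta = rng.uniform(-np.pi, np.pi, N)           # phases of the theta neurons
        order = []
        for _ in range(int(T / dt)):
            pulse = (1.0 - np.cos(theta)) ** 2          # smooth pulse emitted near theta = pi (a spike)
            I_syn = k * pulse.mean()                    # instantaneous all-to-all synaptic drive
            dtheta = (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * (eta + I_syn)
            theta = np.mod(theta + dt * dtheta + np.pi, 2.0 * np.pi) - np.pi
            order.append(np.abs(np.mean(np.exp(1j * theta))))   # Kuramoto order parameter
        return np.array(order)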
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn
2014-09-01
The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for a fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of weak external signals in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of a weak periodic signal peaks.
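A hedged sketch of the two ingredients named above: constructing a Newman-Watts small-world adjacency matrix and applying an additive STDP rule with dominant depression; the parameter values are illustrative assumptions rather than those used in the study.

    import numpy as np

    def newman_watts(N, k, p, rng):
        # Ring lattice with k nearest neighbours on each side, plus shortcuts added with probability p
        A = np.zeros((N, N), dtype=bool)
        for i in range(N):
            for j in range(1, k + 1):
                A[i, (i + j) % N] = A[(i + j) % N, i] = True
        shortcuts = np.triu(rng.random((N, N)) < p, 1)
        A |= shortcuts | shortcuts.T
        np.fill_diagonal(A, False)
        return A

    def stdp_update(w, dt_spike, A_plus=0.05, A_minus=0.06, tau=20.0, w_max=1.0):
        # Additive STDP with dominant depression (A_minus > A_plus); dt_spike = t_post - t_pre in ms
        if dt_spike > 0:
            w = w + A_plus * np.exp(-dt_spike / tau)     # pre before post: potentiate
        else:
            w = w - A_minus * np.exp(dt_spike / tau)     # post before pre: depress
        return min(max(w, 0.0), w_max)

    A = newman_watts(N=200, k=2, p=0.05, rng=np.random.default_rng(1))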
Yu, Haitao; Wang, Jiang; Du, Jiwei; Deng, Bin; Wei, Xile
2015-02-01
Effects of time delay on the local and global synchronization in small-world neuronal networks with chemical synapses are investigated in this paper. Numerical results show that, for both excitatory and inhibitory coupling types, the information transmission delay can always induce synchronization transitions of spiking neurons in small-world networks. In particular, regions of in-phase and out-of-phase synchronization of connected neurons emerge intermittently as the synaptic delay increases. For excitatory coupling, all transitions to spiking synchronization occur approximately at integer multiples of the firing period of individual neurons, while for inhibitory coupling these transitions appear at odd multiples of half the firing period. More importantly, the local synchronization transition is more profound than the global synchronization transition, depending on the type of coupling synapse. For excitatory synapses, the local in-phase synchronization observed for some values of the delay also occurs at a global scale; for inhibitory ones, this synchronization, observed at the local scale, disappears at a global scale. Furthermore, the small-world structure can also affect the phase synchronization of neuronal networks. It is demonstrated that increasing the rewiring probability can always improve the global synchronization of neuronal activity, but has little effect on the local synchronization of neighboring neurons.
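The distinction between global and local synchronization discussed above can be quantified, for example, with Kuramoto-style measures computed from the neurons' instantaneous phases; this is an assumed, illustrative measure, not necessarily the one used in the paper.

    import numpy as np

    def sync_measures(phases, A):
        # phases: array (n_neurons, n_timepoints) of instantaneous spike phases
        # A: adjacency matrix of the small-world network
        R_global = np.abs(np.mean(np.exp(1j * phases), axis=0)).mean()     # whole-network order parameter
        i, j = np.nonzero(np.triu(A, 1))                                   # connected neighbour pairs
        plv = np.abs(np.mean(np.exp(1j * (phases[i] - phases[j])), axis=1))
        R_local = plv.mean()                                               # mean pairwise phase locking
        return R_global, R_local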
Rich-Club Organization in Effective Connectivity among Cortical Neurons.
Nigam, Sunny; Shimono, Masanori; Ito, Shinya; Yeh, Fang-Chin; Timme, Nicholas; Myroshnychenko, Maxym; Lapish, Christopher C; Tosi, Zachary; Hottowy, Pawel; Smith, Wesley C; Masmanidis, Sotiris C; Litke, Alan M; Sporns, Olaf; Beggs, John M
2016-01-20
The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a "rich club." We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. Significance statement: Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases. Copyright © 2016 Nigam et al.
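For orientation, a stripped-down, delay-1 transfer entropy between two binary spike trains can be computed as below; the study used a previously validated multi-delay variant, so this Python sketch is only a simplified stand-in.

    import numpy as np

    def transfer_entropy(x, y):
        # TE(X -> Y) in bits for binary spike trains x, y with a single one-bin delay
        x, y = np.asarray(x, int), np.asarray(y, int)
        y_next, y_now, x_now = y[1:], y[:-1], x[:-1]
        te = 0.0
        for yn in (0, 1):
            for yc in (0, 1):
                for xc in (0, 1):
                    p_joint = np.mean((y_next == yn) & (y_now == yc) & (x_now == xc))
                    if p_joint == 0.0:
                        continue
                    p_full = p_joint / np.mean((y_now == yc) & (x_now == xc))                 # p(y_next | y_now, x_now)
                    p_part = np.mean((y_next == yn) & (y_now == yc)) / np.mean(y_now == yc)   # p(y_next | y_now)
                    te += p_joint * np.log2(p_full / p_part)
        return te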
NASA Astrophysics Data System (ADS)
Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido
2016-11-01
The increasing number of recording electrodes enhances the capability of capturing a network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present an alternative, flexible approach to the technique based on a massive tiling of networks by large-scale arrays of electrodes to monitor their activity.
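A minimal sketch of the comparison described above: correlate one neuron's intracellular membrane-potential trace with the macroscopic network activity obtained from the multi-electrode array (here simply the summed spike counts per time bin); the binning choice is an assumption.

    import numpy as np

    def single_neuron_vs_network(vm, spike_counts):
        # vm: membrane potential of one patched neuron, shape (n_timepoints,)
        # spike_counts: MEA spike counts, shape (n_electrodes, n_timepoints), same binning as vm
        population_rate = spike_counts.sum(axis=0)
        return np.corrcoef(vm, population_rate)[0, 1]   # near +1 or -1 for the correlated / anti-correlated cases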
Lasarge, Candi L; Danzer, Steve C
2014-01-01
The phosphatidylinositol-3-kinase/phosphatase and tensin homolog (PTEN)-mammalian target of rapamycin (mTOR) pathway regulates a variety of neuronal functions, including cell proliferation, survival, growth, and plasticity. Dysregulation of the pathway is implicated in the development of both genetic and acquired epilepsies. Indeed, several causal mutations have been identified in patients with epilepsy, the most prominent of these being mutations in PTEN and tuberous sclerosis complexes 1 and 2 (TSC1, TSC2). These genes act as negative regulators of mTOR signaling, and mutations lead to hyperactivation of the pathway. Animal models deleting PTEN, TSC1, and TSC2 consistently produce epilepsy phenotypes, demonstrating that increased mTOR signaling can provoke neuronal hyperexcitability. Given the broad range of changes induced by altered mTOR signaling, however, the mechanisms underlying seizure development in these animals remain uncertain. In transgenic mice, cell populations with hyperactive mTOR have many structural abnormalities that support recurrent circuit formation, including somatic and dendritic hypertrophy, aberrant basal dendrites, and enlargement of axon tracts. At the functional level, mTOR hyperactivation is commonly, but not always, associated with enhanced synaptic transmission and plasticity. Moreover, these populations of abnormal neurons can affect the larger network, inducing secondary changes that may explain paradoxical findings reported between cell and network functioning in different models or at different developmental time points. Here, we review the animal literature examining the link between mTOR hyperactivation and epileptogenesis, emphasizing the impact of enhanced mTOR signaling on neuronal form and function.
Where the thoughts dwell: the physiology of neuronal-glial "diffuse neural net".
Verkhratsky, Alexei; Parpura, Vladimir; Rodríguez, José J
2011-01-07
The mechanisms underlying the production of thoughts by the exceedingly complex cellular networks that construct the human brain constitute one of the most challenging problems of the natural sciences. Our understanding of brain function is very much shaped by the neuronal doctrine, which assumes that neuronal networks represent the only substrate for cognition. These neuronal networks, however, are embedded in a much larger and probably more complex network formed by neuroglia. The latter, although electrically silent, employ many different mechanisms for intercellular signalling. It appears that astrocytes can control synaptic networks, and in such a capacity they may represent an integral component of the computational power of the brain rather than being just brain "connective tissue". The fundamental question of whether neuroglia are involved in cognition and information processing remains, however, open. Indeed, the remarkable increase in the number of glial cells that distinguishes the human brain may simply be a result of the exceedingly high specialisation of the neuronal networks, which delegated all matters of survival and maintenance to the neuroglia. At the same time, the potential power of analogue processing offered by internally connected glial networks may represent an alternative mechanism involved in cognition. Copyright © 2010 Elsevier B.V. All rights reserved.
Applying gene regulatory network logic to the evolution of social behavior.
Baran, Nicole M; McGrath, Patrick T; Streelman, J Todd
2017-06-06
Animal behavior is ultimately the product of gene regulatory networks (GRNs) for brain development and neural networks for brain function. The GRN approach has advanced the fields of genomics and development, and we identify organizational similarities between networks of genes that build the brain and networks of neurons that encode brain function. In this perspective, we engage the analogy between developmental networks and neural networks, exploring the advantages of using GRN logic to study behavior. Applying the GRN approach to the brain and behavior provides a quantitative and manipulative framework for discovery. We illustrate features of this framework using the example of social behavior and the neural circuitry of aggression.
Kono, Sho; Kushida, Takatoshi; Hirano-Iwata, Ayumi; Niwano, Michio; Tanii, Takashi
2016-01-01
Excitatory and inhibitory neurons have distinct roles in cortical dynamics. Here we present a novel method for identifying inhibitory GABAergic neurons from non-GABAergic neurons, which are mostly excitatory glutamatergic neurons, in primary cortical cultures. This was achieved using an asymmetrically designed micropattern that directs an axonal process to the longest pathway. In the current work, we first modified the micropattern geometry to improve cell viability and then studied the axon length from 2 to 7 days in vitro (DIV). The cell types of neurons were evaluated retrospectively based on immunoreactivity against GAD67, a marker for inhibitory GABAergic neurons. We found that axons of non-GABAergic neurons grow significantly longer than those of GABAergic neurons in the early stages of development. The optimal threshold for identifying GABAergic and non-GABAergic neurons was evaluated to be 110 μm at 6 DIV. The method does not require any fluorescence labelling and can be carried out on live cells. The accuracy of identification was 98.2%. We confirmed that the high accuracy was due to the use of a micropattern, which standardized the development of cultured neurons. The method promises to be beneficial both for engineering neuronal networks in vitro and for basic cellular neuroscience research. PMID:27513933
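The threshold rule reported above (110 μm at 6 DIV) lends itself to a very small classifier; the sketch below simply applies that cutoff to measured axon lengths and scores it against retrospective GAD67 immunostaining.

    import numpy as np

    def classify_by_axon_length(axon_lengths_um, threshold_um=110.0):
        # Short axon -> putative GABAergic; long axon -> putative non-GABAergic (at 6 DIV)
        lengths = np.asarray(axon_lengths_um, float)
        return np.where(lengths < threshold_um, "GABAergic", "non-GABAergic")

    def accuracy(predicted, gad67_positive):
        # gad67_positive: boolean array from retrospective immunostaining (True = GABAergic)
        truth = np.where(np.asarray(gad67_positive), "GABAergic", "non-GABAergic")
        return float(np.mean(predicted == truth))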
Collective behavior of large-scale neural networks with GPU acceleration.
Qu, Jingyi; Wang, Rubin
2017-12-01
In this paper, the collective behaviors of a small-world neuronal network motivated by the anatomy of a mammalian cortex, based on both the Izhikevich model and the Rulkov model, are studied. The Izhikevich model not only reproduces the rich behaviors of biological neurons but also has only two equations and one nonlinear term. The Rulkov model is in the form of difference equations that generate a sequence of membrane potential samples at discrete moments of time to improve computational efficiency. These two models are suitable for the construction of large-scale neural networks. By varying some key parameters, such as the connection probability and the number of nearest neighbors of each node, the coupled neurons exhibit a variety of temporal and spatial characteristics. It is demonstrated that the GPU implementation achieves increasingly greater acceleration over the CPU as the number of neurons and iterations grows. These two small-world network models and GPU acceleration give us a new opportunity to reproduce a real biological network containing a large number of neurons.
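For reference, single-step updates of the two neuron models named above are sketched below; the Izhikevich parameters are the standard regular-spiking values, the Rulkov parameters are illustrative assumptions, and the GPU batching itself is not shown.

    def izhikevich_step(v, u, I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
        # Two-variable Izhikevich model: quadratic voltage equation plus a recovery variable
        v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u = u + dt * a * (b * v - u)
        if v >= 30.0:          # spike: reset v and bump the recovery variable
            v, u = c, u + d
        return v, u

    def rulkov_step(x, y, I, alpha=4.1, mu=0.001, sigma=0.1):
        # Map-based (chaotic) Rulkov neuron: a difference equation, so no ODE solver is needed
        if x <= 0.0:
            x_new = alpha / (1.0 - x) + y
        elif x < alpha + y:
            x_new = alpha + y
        else:
            x_new = -1.0
        y_new = y - mu * (x + 1.0) + mu * (sigma + I)
        return x_new, y_new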
A mixed-signal implementation of a polychronous spiking neural network with delay adaptation
Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André
2014-01-01
We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits. PMID:24672422
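As a hedged sketch of the delay-adaptation idea (not the mixed-signal circuit itself), Spike Timing Dependent Delay Plasticity can be caricatured as nudging an axonal delay so that the presynaptic spike arrives just before the postsynaptic one; the learning rate and delay bounds below are assumptions.

    def stdd_update(delay_ms, arrival_minus_post_ms, rate=0.1, delay_min=1.0, delay_max=20.0):
        # arrival_minus_post_ms = (t_pre + delay) - t_post
        # Late arrival (positive) -> shorten the delay; early arrival (negative) -> lengthen it
        new_delay = delay_ms - rate * arrival_minus_post_ms
        return min(max(new_delay, delay_min), delay_max)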