Sample records for complex neuronal network

  1. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters, such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse, influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and discusses some of the advantages and downsides of such exploration methods.
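
    The brute-force approach described above amounts to simulating every point on a grid of parameter values and storing a behavioral measure for each combination. Below is a minimal sketch of that idea, using a plain leaky integrate-and-fire neuron as a stand-in for the conductance-based models the chapter concerns; the parameter names, ranges and the 5-20 Hz "functional" criterion are illustrative assumptions, not taken from the source.

```python
import itertools
import numpy as np

def simulate_lif(g_leak, i_drive, dt=0.1, t_max=1000.0):
    """Leaky integrate-and-fire stand-in for a conductance-based model.
    Returns the firing rate (spikes/s) for one parameter combination."""
    v, v_thresh, v_reset, c_m = -65.0, -50.0, -65.0, 1.0
    spikes = 0
    for _ in range(int(t_max / dt)):
        v += dt * (-g_leak * (v + 65.0) + i_drive) / c_m
        if v >= v_thresh:
            v = v_reset
            spikes += 1
    return spikes / (t_max / 1000.0)

# Brute-force "model database": every combination of two illustrative parameters.
g_values = np.linspace(0.05, 0.5, 10)
i_values = np.linspace(0.0, 5.0, 10)
database = {(g, i): simulate_lif(g, i) for g, i in itertools.product(g_values, i_values)}

# Query the database: which parameter combinations produce a "functional" 5-20 Hz output?
functional = [p for p, rate in database.items() if 5.0 <= rate <= 20.0]
print(f"{len(functional)} of {len(database)} parameter sets are functional")
```

    Querying the resulting dictionary for functional parameter sets is the database-search step the abstract refers to; real studies store many more measures per simulation and sweep far higher-dimensional grids.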

  2. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    A neuronal avalanche is a form of spontaneous neuronal activity that obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
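
    A compact way to make the -3/2 claim concrete is to bin population spiking, define an avalanche as a run of consecutive non-empty bins, and fit the size distribution on log-log axes. The sketch below does exactly that on Poisson surrogate data, so the fitted exponent will not be -3/2; it only illustrates the detection and estimation pipeline (and uses a least-squares log-log fit, which is a crude estimator compared with maximum likelihood).

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate population activity: number of spikes per time bin (stand-in for real data).
activity = rng.poisson(lam=0.9, size=200_000)

# An avalanche is a run of consecutive non-empty bins bounded by empty bins.
sizes, current = [], 0
for n in activity:
    if n > 0:
        current += n
    elif current > 0:
        sizes.append(current)
        current = 0

# Crude estimate of the power-law exponent from a log-log fit of the size histogram.
sizes = np.asarray(sizes)
counts = np.bincount(sizes)[1:]          # counts[k-1] = number of avalanches of size k
k = np.arange(1, len(counts) + 1)
mask = counts > 0
slope, _ = np.polyfit(np.log(k[mask]), np.log(counts[mask]), 1)
print(f"estimated exponent: {slope:.2f} (criticality predicts about -1.5)")
```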

  3. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Network (QNN) models have attracted great attention because they introduce a new neural computing paradigm based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model, the Complex Quantum Neuron (CQN), that realizes deep quantum entanglement. A novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), is also proposed based on the CQN. CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series prediction. Two application studies are presented, covering chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Uncovering Neuronal Networks Defined by Consistent Between-Neuron Spike Timing from Neuronal Spike Recordings

    PubMed Central

    2018-01-01

    Abstract It is widely assumed that distributed neuronal networks are fundamental to the functioning of the brain. Consistent spike timing between neurons is thought to be one of the key principles for the formation of these networks. This can involve synchronous spiking or spiking with time delays, forming spike sequences when the order of spiking is consistent. Finding networks defined by their sequence of time-shifted spikes, denoted here as spike timing networks, is a tremendous challenge. As neurons can participate in multiple spike sequences at multiple between-spike time delays, the possible complexity of networks is prohibitively large. We present a novel approach that is capable of (1) extracting spike timing networks regardless of their sequence complexity, and (2) describing their spiking sequences with high temporal precision. We achieve this by decomposing frequency-transformed neuronal spiking into separate networks, characterizing each network’s spike sequence by a time delay per neuron, forming a spike sequence timeline. These networks provide a detailed template for an investigation of the experimental relevance of their spike sequences. Using simulated spike timing networks, we show network extraction is robust to spiking noise, spike timing jitter, and partial occurrences of the involved spike sequences. Using rat multineuron recordings, we demonstrate the approach is capable of revealing real spike timing networks with sub-millisecond temporal precision. By uncovering spike timing networks, the prevalence, structure, and function of complex spike sequences can be investigated in greater detail, allowing us to gain a better understanding of their role in neuronal functioning. PMID:29789811

  5. Cultured Neuronal Networks Express Complex Patterns of Activity and Morphological Memory

    NASA Astrophysics Data System (ADS)

    Raichman, Nadav; Rubinsky, Liel; Shein, Mark; Baruchi, Itay; Volman, Vladislav; Ben-Jacob, Eshel

    The following sections are included:
    * Cultured Neuronal Networks
    * Recording the Network Activity
    * Network Engineering
    * The Formation of Synchronized Bursting Events
    * The Characterization of the SBEs
    * Highly-Active Neurons
    * Function-Form Relations in Cultured Networks
    * Analyzing the SBEs Motifs
    * Network Repertoire
    * Network under Hypothermia
    * Summary
    * Acknowledgments
    * References

  6. Intrinsic protective mechanisms of the neuron-glia network against glioma invasion.

    PubMed

    Iwadate, Yasuo; Fukuda, Kazumasa; Matsutani, Tomoo; Saeki, Naokatsu

    2016-04-01

    Gliomas arising in the brain parenchyma infiltrate into the surrounding brain and break down established complex neuron-glia networks. However, mounting evidence suggests that initially the network microenvironment of the adult central nervous system (CNS) is innately non-permissive to glioma cell invasion. The main players are inhibitory molecules in CNS myelin, as well as proteoglycans associated with astrocytes. Neural stem cells, and neurons themselves, possess inhibitory functions against neighboring tumor cells. These mechanisms have evolved to protect the established neuron-glia network, which is necessary for brain function. Greater insight into the interaction between glioma cells and the surrounding neuron-glia network is crucial for developing new therapies for treating these devastating tumors while preserving the important and complex neural functions of patients. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Numerical simulation of coherent resonance in a model network of Rulkov neurons

    NASA Astrophysics Data System (ADS)

    Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, and then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
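
    For reference, the (chaotic) Rulkov map iterates a fast variable x and a slow variable y as x_{n+1} = alpha/(1 + x_n^2) + y_n and y_{n+1} = y_n - mu*(x_n - sigma). The sketch below iterates a single noisy Rulkov neuron and scores spiking regularity by the coefficient of variation (CV) of interspike intervals, the usual coherence-resonance readout; the parameter values are illustrative, not taken from the paper, and reproducing the resonance itself requires tuning the map into its excitable regime and coupling many such neurons as the authors do.

```python
import numpy as np

def rulkov_isi_cv(noise_std, alpha=4.1, mu=0.001, sigma=-1.2, n_steps=200_000, seed=1):
    """Iterate one noisy Rulkov map neuron and return the coefficient of variation
    of its interspike intervals; lower CV means more regular spiking."""
    rng = np.random.default_rng(seed)
    x, y = -1.0, -3.5
    spike_times = []
    for n in range(n_steps):
        x_new = alpha / (1.0 + x * x) + y + noise_std * rng.standard_normal()
        y = y - mu * (x - sigma)
        if x_new > 0.0 and x <= 0.0:          # upward threshold crossing counts as a spike
            spike_times.append(n)
        x = x_new
    isi = np.diff(spike_times)
    return isi.std() / isi.mean() if len(isi) > 1 else np.nan

# Sweep the noise level; coherence resonance would show up as a CV minimum at an
# intermediate noise intensity (here for a single neuron, not the network of the paper).
for noise in (0.005, 0.02, 0.05, 0.1, 0.2):
    print(f"noise={noise:<6} CV={rulkov_isi_cv(noise):.3f}")
```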

  8. Small Modifications to Network Topology Can Induce Stochastic Bistable Spiking Dynamics in a Balanced Cortical Model

    PubMed Central

    McDonnell, Mark D.; Ward, Lawrence M.

    2014-01-01

    Abstract Directed random graph models frequently are used successfully in modeling the population dynamics of networks of cortical neurons connected by chemical synapses. Experimental results consistently reveal that neuronal network topology is complex, however, in the sense that it differs statistically from a random network, and differs for classes of neurons that are physiologically different. This suggests that complex network models whose subnetworks have distinct topological structure may be a useful, and more biologically realistic, alternative to random networks. Here we demonstrate that the balanced excitation and inhibition frequently observed in small cortical regions can transiently disappear in otherwise standard neuronal-scale models of fluctuation-driven dynamics, solely because the random network topology was replaced by a complex clustered one, whilst not changing the in-degree of any neurons. In this network, a small subset of cells whose inhibition comes only from outside their local cluster are the cause of bistable population dynamics, where different clusters of these cells irregularly switch back and forth from a sparsely firing state to a highly active state. Transitions to the highly active state occur when a cluster of these cells spikes sufficiently often to cause strong unbalanced positive feedback to each other. Transitions back to the sparsely firing state rely on occasional large fluctuations in the amount of non-local inhibition received. Neurons in the model are homogeneous in their intrinsic dynamics and in-degrees, but differ in the abundance of various directed feedback motifs in which they participate. Our findings suggest that (i) models and simulations should take into account complex structure that varies for neuron and synapse classes; (ii) differences in the dynamics of neurons with similar intrinsic properties may be caused by their membership in distinctive local networks; (iii) it is important to identify neurons that share physiological properties and location, but differ in their connectivity. PMID:24743633

  9. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks.

    PubMed

    Honegger, Thibault; Thielen, Moritz I; Feizi, Soheil; Sanjana, Neville E; Voldman, Joel

    2016-06-22

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined, topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed, but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in unmodified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report, for the first time in vitro, basic brain motifs that have previously been observed in vivo, and show that their functional network is highly decorrelated from their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  10. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks

    NASA Astrophysics Data System (ADS)

    Honegger, Thibault; Thielen, Moritz I.; Feizi, Soheil; Sanjana, Neville E.; Voldman, Joel

    2016-06-01

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined, topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed, but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in unmodified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report, for the first time in vitro, basic brain motifs that have previously been observed in vivo, and show that their functional network is highly decorrelated from their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  11. Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?

    PubMed

    Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R

    2014-01-08

    How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.

  12. Complexity in neuronal noise depends on network interconnectivity.

    PubMed

    Serletis, Demitre; Zalay, Osbert C; Valiante, Taufik A; Bardakjian, Berj L; Carlen, Peter L

    2011-06-01

    "Noise," or noise-like activity (NLA), defines background electrical membrane potential fluctuations at the cellular level of the nervous system, comprising an important aspect of brain dynamics. Using whole-cell voltage recordings from fast-spiking stratum oriens interneurons and stratum pyramidale neurons located in the CA3 region of the intact mouse hippocampus, we applied complexity measures from dynamical systems theory (i.e., 1/f(γ) noise and correlation dimension) and found evidence for complexity in neuronal NLA, ranging from high- to low-complexity dynamics. Importantly, these high- and low-complexity signal features were largely dependent on gap junction and chemical synaptic transmission. Progressive neuronal isolation from the surrounding local network via gap junction blockade (abolishing gap junction-dependent spikelets) and then chemical synaptic blockade (abolishing excitatory and inhibitory post-synaptic potentials), or the reverse order of these treatments, resulted in emergence of high-complexity NLA dynamics. Restoring local network interconnectivity via blockade washout resulted in resolution to low-complexity behavior. These results suggest that the observed increase in background NLA complexity is the result of reduced network interconnectivity, thereby highlighting the potential importance of the NLA signal to the study of network state transitions arising in normal and abnormal brain dynamics (such as in epilepsy, for example).

  13. Rhythmogenic neuronal networks, emergent leaders, and k-cores.

    PubMed

    Schwab, David J; Bruinsma, Robijn F; Feldman, Jack L; Levine, Alex J

    2010-11-01

    Neuronal network behavior results from a combination of the dynamics of individual neurons and the connectivity of the network that links them together. We study a simplified model, based on the proposal of Feldman and Del Negro (FDN) [Nat. Rev. Neurosci. 7, 232 (2006)], of the preBötzinger Complex, a small neuronal network that participates in the control of the mammalian breathing rhythm through periodic firing bursts. The dynamics of this randomly connected network of identical excitatory neurons differ from those of a uniformly connected one. Specifically, network connectivity determines the identity of emergent leader neurons that trigger the firing bursts. When neuronal desensitization is controlled by the number of input signals to the neurons (as proposed by FDN), the network's collective desensitization, required for successful burst termination, is mediated by k-core clusters of neurons.
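
    The k-core referred to above is the maximal subgraph in which every node retains at least k neighbours. Here is a minimal sketch using networkx (an assumed dependency) on a random surrogate graph, not on preBötzinger Complex connectivity itself.

```python
import networkx as nx

# Random network stand-in (undirected here for simplicity; the model networks in the
# paper are directed, but the k-core idea is the same).
g = nx.erdos_renyi_graph(n=300, p=0.03, seed=42)

# The k-core is the maximal subgraph in which every neuron keeps at least k neighbours;
# in the FDN-style model it is this cluster whose collective desensitization ends a burst.
core_numbers = nx.core_number(g)
k_max = max(core_numbers.values())
top_core = nx.k_core(g, k=k_max)

print(f"highest core order: {k_max}")
print(f"neurons in the innermost k-core: {top_core.number_of_nodes()}")
```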

  14. The Topographical Mapping in Drosophila Central Complex Network and Its Signal Routing

    PubMed Central

    Chang, Po-Yen; Su, Ta-Shun; Shih, Chi-Tin; Lo, Chung-Chuan

    2017-01-01

    Neural networks regulate brain functions by routing signals. Therefore, investigating the detailed organization of a neural circuit at the cellular level is a crucial step toward understanding the neural mechanisms of brain functions. To study how a complicated neural circuit is organized, we analyzed recently published data on the neural circuit of the Drosophila central complex, a brain structure associated with a variety of functions including sensory integration and coordination of locomotion. We discovered that, except for a small number of “atypical” neuron types, the network structure formed by the identified 194 neuron types can be described by only a few simple mathematical rules. Specifically, the topological mapping formed by these neurons can be reconstructed by applying a generation matrix to a small set of initial neurons. By analyzing how information flows propagate with or without the atypical neurons, we found that while the general pattern of signal propagation in the central complex follows the simple topological mapping formed by the “typical” neurons, some atypical neurons can substantially re-route the signal pathways, implying specific roles of these neurons in sensory signal integration. The present study provides insights into the organization principle and signal integration in the central complex. PMID:28443014

  15. Observing complex action sequences: The role of the fronto-parietal mirror neuron system.

    PubMed

    Molnar-Szakacs, Istvan; Kaplan, Jonas; Greenfield, Patricia M; Iacoboni, Marco

    2006-11-15

    A fronto-parietal mirror neuron network in the human brain supports the ability to represent and understand observed actions allowing us to successfully interact with others and our environment. Using functional magnetic resonance imaging (fMRI), we wanted to investigate the response of this network in adults during observation of hierarchically organized action sequences of varying complexity that emerge at different developmental stages. We hypothesized that fronto-parietal systems may play a role in coding the hierarchical structure of object-directed actions. The observation of all action sequences recruited a common bilateral network including the fronto-parietal mirror neuron system and occipito-temporal visual motion areas. Activity in mirror neuron areas varied according to the motoric complexity of the observed actions, but not according to the developmental sequence of action structures, possibly due to the fact that our subjects were all adults. These results suggest that the mirror neuron system provides a fairly accurate simulation process of observed actions, mimicking internally the level of motoric complexity. We also discuss the results in terms of the links between mirror neurons, language development and evolution.

  16. Biological conservation law as an emerging functionality in dynamical neuronal networks.

    PubMed

    Podobnik, Boris; Jusup, Marko; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M; Stanley, H Eugene

    2017-11-07

    Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law: the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective.
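
    The local rule in the abstract strengthens a synapse when both of its neurons are excited and weakens it otherwise, and the claim is that the total synaptic strength becomes conserved when the network sits near its critical point. The toy sketch below implements only the local rule and tracks the network-wide sum; all constants are illustrative assumptions, nothing here tunes the system to criticality, and the sum is therefore not expected to be conserved in this reduced form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
w = rng.uniform(0.0, 1.0, size=(n, n))        # synaptic strengths (illustrative initial values)
np.fill_diagonal(w, 0.0)
p_spont, gain, dw = 0.02, 0.1, 0.002          # illustrative constants

fired = rng.random(n) < p_spont
for step in range(2000):
    # A neuron fires spontaneously or in response to summed input from firing neighbours.
    drive = w[:, fired].sum(axis=1)
    fired = (rng.random(n) < p_spont) | (rng.random(n) < gain * drive)
    # Local plasticity: strengthen synapses between co-active neurons, weaken the rest.
    co_active = np.outer(fired, fired)
    w += np.where(co_active, dw, -dw * 0.01)
    np.clip(w, 0.0, 1.0, out=w)
    np.fill_diagonal(w, 0.0)                  # no self-synapses
    if step % 500 == 0:
        print(f"step {step:4d}  total synaptic strength = {w.sum():.1f}")
```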

  17. Biological conservation law as an emerging functionality in dynamical neuronal networks

    PubMed Central

    Podobnik, Boris; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M.

    2017-01-01

    Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law—the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective. PMID:29078286

  18. The Drosophila Clock Neuron Network Features Diverse Coupling Modes and Requires Network-wide Coherence for Robust Circadian Rhythms.

    PubMed

    Yao, Zepeng; Bennett, Amelia J; Clem, Jenna L; Shafer, Orie T

    2016-12-13

    In animals, networks of clock neurons containing molecular clocks orchestrate daily rhythms in physiology and behavior. However, how various types of clock neurons communicate and coordinate with one another to produce coherent circadian rhythms is not well understood. Here, we investigate clock neuron coupling in the brain of Drosophila and demonstrate that the fly's various groups of clock neurons display unique and complex coupling relationships to core pacemaker neurons. Furthermore, we find that coordinated free-running rhythms require molecular clock synchrony not only within the well-characterized lateral clock neuron classes but also between lateral clock neurons and dorsal clock neurons. These results uncover unexpected patterns of coupling in the clock neuron network and reveal that robust free-running behavioral rhythms require a coherence of molecular oscillations across most of the fly's clock neuron network. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  19. Uniting functional network topology and oscillations in the fronto-parietal single unit network of behaving primates.

    PubMed

    Dann, Benjamin; Michaels, Jonathan A; Schaffelhofer, Stefan; Scherberger, Hansjörg

    2016-08-15

    The functional communication of neurons in cortical networks underlies higher cognitive processes. Yet, little is known about the organization of the single neuron network or its relationship to the synchronization processes that are essential for its formation. Here, we show that the functional single neuron network of three fronto-parietal areas during active behavior of macaque monkeys is highly complex. The network was closely connected (small-world) and consisted of functional modules spanning these areas. Surprisingly, the importance of different neurons to the network was highly heterogeneous with a small number of neurons contributing strongly to the network function (hubs), which were in turn strongly inter-connected (rich-club). Examination of the network synchronization revealed that the identified rich-club consisted of neurons that were synchronized in the beta or low frequency range, whereas other neurons were mostly non-oscillatory synchronized. Therefore, oscillatory synchrony may be a central communication mechanism for highly organized functional spiking networks.
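
    Small-world and rich-club structure can be quantified with standard graph measures. Here is a minimal sketch using networkx (an assumed dependency) on a scale-free surrogate graph; the paper's network is instead estimated from functional connectivity between recorded single units, and its rich-club analysis additionally accounts for oscillatory synchrony, which this sketch does not.

```python
import networkx as nx

# Surrogate network: a scale-free graph stands in for the functional single-unit network.
g = nx.barabasi_albert_graph(n=200, m=3, seed=7)

# Small-world diagnostics: high clustering together with short path lengths.
print(f"average clustering: {nx.average_clustering(g):.3f}")
print(f"average shortest path length: {nx.average_shortest_path_length(g):.2f}")

# Unnormalized rich-club coefficient phi(k): density of connections among nodes of degree > k.
phi = nx.rich_club_coefficient(g, normalized=False)
for k in (5, 10, 15, 20):
    if k in phi:
        print(f"phi(k>{k}) = {phi[k]:.3f}")
```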

  20. Optimization Methods for Spiking Neurons and Networks

    PubMed Central

    Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph

    2011-01-01

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265
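
    As a flavour of the genetic-algorithm route, the toy sketch below evolves two parameters of a plain leaky integrate-and-fire neuron toward a target firing rate. It is not the authors' Mihalas-Niebur maximum-likelihood fit nor their integrate-and-fire-with-adaptation network; the population size, mutation scales and target rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def firing_rate(tau_m, i_drive, dt=0.1, t_max=500.0, v_th=1.0):
    """Firing rate (Hz) of a normalized leaky integrate-and-fire neuron."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v / tau_m + i_drive)
        if v >= v_th:
            v, spikes = 0.0, spikes + 1
    return 1000.0 * spikes / t_max

target = 40.0                                               # desired output rate in Hz
pop = rng.uniform([1.0, 0.0], [50.0, 1.0], size=(30, 2))    # individuals = (tau_m, i_drive)

for gen in range(20):
    errors = np.array([abs(firing_rate(tau, i) - target) for tau, i in pop])
    parents = pop[np.argsort(errors)[:10]]                  # keep the 10 fittest individuals
    children = parents[rng.integers(0, 10, size=20)] + rng.normal(0, [1.0, 0.02], size=(20, 2))
    pop = np.vstack([parents, np.clip(children, [1.0, 0.0], [50.0, 1.0])])

best = pop[np.argmin([abs(firing_rate(t, i) - target) for t, i in pop])]
print(f"best parameters: tau_m={best[0]:.1f} ms, drive={best[1]:.3f} -> {firing_rate(*best):.1f} Hz")
```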

  1. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
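
    For contrast with the proposed nexi, a conventional autoassociative (Hopfield-style) network can be written in a few lines: Hebbian outer-product weights, a binary state vector, and a synchronous sign-of-weighted-input update. The sketch below is that conventional baseline with illustrative sizes; it is not an implementation of a nexus.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Store a few binary (+1/-1) patterns with a Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(3, n))
w = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(w, 0.0)

# Start from a corrupted copy of pattern 0 and iterate the network.
state = patterns[0].copy()
state[rng.choice(n, size=10, replace=False)] *= -1        # flip 10 bits

for _ in range(5):                                        # a few synchronous time steps
    state = np.where(w @ state >= 0, 1, -1)               # each neuron: sign of its weighted input

overlap = (state == patterns[0]).mean()
print(f"recovered {overlap:.0%} of the stored pattern")
```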

  2. Neuronal avalanches and learning

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems in biology. While the physical and chemical mechanisms underlying the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plastic ability is now believed to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by a power-law distribution of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibit universal features as a function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.

  3. Complex-valued multistate associative memory with nonlinear multilevel functions for gray-level image reconstruction.

    PubMed

    Tanaka, Gouhei; Aihara, Kazuyuki

    2009-09-01

    A widely used complex-valued activation function for complex-valued multistate Hopfield networks is revealed to be essentially based on a multilevel step function. By replacing the multilevel step function with other multilevel characteristics, we present two alternative complex-valued activation functions. One is based on a multilevel sigmoid function, while the other on a characteristic of a multistate bifurcating neuron. Numerical experiments show that both modifications to the complex-valued activation function bring about improvements in network performance for a multistate associative memory. The advantage of the proposed networks over the complex-valued Hopfield networks with the multilevel step function is more outstanding when a complex-valued neuron represents a larger number of multivalued states. Further, the performance of the proposed networks in reconstructing noisy 256 gray-level images is demonstrated in comparison with other recent associative memories to clarify their advantages and disadvantages.
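
    The multilevel step function at issue maps a neuron's complex-valued input onto the nearest of K equally spaced unit phasors, each phase sector standing for one gray level. The sketch below implements that conventional activation together with a smooth multilevel-sigmoid-style variant; the smooth variant is only illustrative of the idea and is not the authors' exact formulation.

```python
import numpy as np

def multilevel_step(z, k_states):
    """Conventional multilevel step activation of a complex-valued multistate neuron:
    map a complex input onto the midpoint phasor of its phase sector (k sectors)."""
    sector = np.floor(k_states * np.angle(z) / (2 * np.pi)) % k_states
    return np.exp(1j * 2 * np.pi * (sector + 0.5) / k_states)

def multilevel_sigmoid(z, k_states, gain=8.0):
    """Illustrative smooth alternative: blend neighbouring phase levels with a sigmoid
    instead of switching abruptly between them (not the paper's exact formula)."""
    phase = np.angle(z) % (2 * np.pi)
    level = k_states * phase / (2 * np.pi)
    frac = level - np.floor(level)
    soft = np.floor(level) + 1.0 / (1.0 + np.exp(-gain * (frac - 0.5)))
    return np.exp(1j * 2 * np.pi * (soft + 0.5) / k_states)

# A neuron representing 16 gray levels: the activation snaps (or smoothly maps) the
# weighted sum of complex inputs onto one of 16 phase states.
z = 0.8 * np.exp(1j * 1.3)
print(multilevel_step(z, 16), multilevel_sigmoid(z, 16))
```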

  4. Where the thoughts dwell: the physiology of neuronal-glial "diffuse neural net".

    PubMed

    Verkhratsky, Alexei; Parpura, Vladimir; Rodríguez, José J

    2011-01-07

    The mechanisms underlying the production of thoughts by the exceedingly complex cellular networks that construct the human brain constitute the most challenging problem of the natural sciences. Our understanding of brain function is very much shaped by the neuronal doctrine, which assumes that neuronal networks represent the only substrate for cognition. These neuronal networks, however, are embedded in a much larger and probably more complex network formed by neuroglia. The latter, although electrically silent, employ many different mechanisms for intercellular signalling. It appears that astrocytes can control synaptic networks, and in such a capacity they may represent an integral component of the computational power of the brain rather than being just brain "connective tissue". The fundamental question of whether neuroglia are involved in cognition and information processing remains, however, open. Indeed, the remarkable increase in the number of glial cells that distinguishes the human brain may simply be a result of the exceedingly high specialisation of the neuronal networks, which delegated all matters of survival and maintenance to the neuroglia. At the same time, the potential power of analogue processing offered by internally connected glial networks may represent an alternative mechanism involved in cognition. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. Reducing Neuronal Networks to Discrete Dynamics

    PubMed Central

    Terman, David; Ahn, Sungwoo; Wang, Xueying; Just, Winfried

    2008-01-01

    We consider a general class of purely inhibitory and excitatory-inhibitory neuronal networks, with a general class of network architectures, and characterize the complex firing patterns that emerge. Our strategy for studying these networks is to first reduce them to a discrete model. In the discrete model, each neuron is represented by a finite number of states and there are rules for how a neuron transitions from one state to another. In this paper, we rigorously demonstrate that the continuous neuronal model can be reduced to the discrete model if the intrinsic and synaptic properties of the cells are chosen appropriately. In a companion paper [1], we analyze the discrete model. PMID:18443649

  6. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    PubMed Central

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  7. Computational models of neuron-astrocyte interactions lead to improved efficacy in the performance of neural networks.

    PubMed

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.

  8. Dynamic range in small-world networks of Hodgkin-Huxley neurons with chemical synapses

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Viana, R. L.; Lopes, S. R.; Batista, A. M.

    2014-09-01

    According to Stevens' law, the relationship between stimulus and response is a power law within an interval called the dynamic range. The dynamic range of sensory organs is found to be larger than that of a single neuron, suggesting that the network structure plays a key role in the behavior of both the scaling exponent and the dynamic range of neuron assemblies. In order to verify computationally the relationships between stimulus and response for spiking neurons, we investigate small-world networks of neurons described by the Hodgkin-Huxley equations connected by chemical synapses. We found that the dynamic range increases with the network size, suggesting that the enhancement of the dynamic range observed in sensory organs, with respect to single neurons, is an emergent property of complex network dynamics.
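
    Dynamic range is conventionally reported in decibels as the width of the stimulus interval over which the response covers the central part (typically 10%-90%) of its full range. Below is a minimal sketch of that calculation on a toy saturating response curve; the curve and thresholds are illustrative and not the Hodgkin-Huxley network of the paper.

```python
import numpy as np

def dynamic_range(stimuli, responses, low=0.1, high=0.9):
    """Dynamic range in dB: the stimulus interval over which the normalized response
    covers the central part (by default 10%-90%) of its full range."""
    r = (responses - responses.min()) / (responses.max() - responses.min())
    s_low = np.interp(low, r, stimuli)
    s_high = np.interp(high, r, stimuli)
    return 10.0 * np.log10(s_high / s_low)

# Toy response curve F(S) = S^m / (S^m + K^m): a Stevens-like power law at weak stimuli
# that saturates at strong stimuli (m and K are illustrative).
stimuli = np.logspace(-4, 1, 200)
responses = stimuli ** 0.5 / (stimuli ** 0.5 + 0.1 ** 0.5)
print(f"dynamic range = {dynamic_range(stimuli, responses):.1f} dB")
```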

  9. Uniting functional network topology and oscillations in the fronto-parietal single unit network of behaving primates

    PubMed Central

    Dann, Benjamin; Michaels, Jonathan A; Schaffelhofer, Stefan; Scherberger, Hansjörg

    2016-01-01

    The functional communication of neurons in cortical networks underlies higher cognitive processes. Yet, little is known about the organization of the single neuron network or its relationship to the synchronization processes that are essential for its formation. Here, we show that the functional single neuron network of three fronto-parietal areas during active behavior of macaque monkeys is highly complex. The network was closely connected (small-world) and consisted of functional modules spanning these areas. Surprisingly, the importance of different neurons to the network was highly heterogeneous with a small number of neurons contributing strongly to the network function (hubs), which were in turn strongly inter-connected (rich-club). Examination of the network synchronization revealed that the identified rich-club consisted of neurons that were synchronized in the beta or low frequency range, whereas other neurons were mostly non-oscillatory synchronized. Therefore, oscillatory synchrony may be a central communication mechanism for highly organized functional spiking networks. DOI: http://dx.doi.org/10.7554/eLife.15719.001 PMID:27525488

  10. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    PubMed

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  11. Synchronization properties of heterogeneous neuronal networks with mixed excitability type

    NASA Astrophysics Data System (ADS)

    Leone, Michael J.; Schurter, Brandon N.; Letson, Benjamin; Booth, Victoria; Zochowski, Michal; Fink, Christian G.

    2015-03-01

    We study the synchronization of neuronal networks with dynamical heterogeneity, showing that network structures with the same propensity for synchronization (as quantified by master stability function analysis) may develop dramatically different synchronization properties when heterogeneity is introduced with respect to neuronal excitability type. Specifically, we investigate networks composed of neurons with different types of phase response curves (PRCs), which characterize how oscillating neurons respond to excitatory perturbations. Neurons exhibiting type 1 PRC respond exclusively with phase advances, while neurons exhibiting type 2 PRC respond with either phase delays or phase advances, depending on when the perturbation occurs. We find that Watts-Strogatz small world networks transition to synchronization gradually as the proportion of type 2 neurons increases, whereas scale-free networks may transition gradually or rapidly, depending upon local correlations between node degree and excitability type. Random placement of type 2 neurons results in gradual transition to synchronization, whereas placement of type 2 neurons as hubs leads to a much more rapid transition, showing that type 2 hub cells easily "hijack" neuronal networks to synchronization. These results underscore the fact that the degree of synchronization observed in neuronal networks is determined by a complex interplay between network structure and the dynamical properties of individual neurons, indicating that efforts to recover structural connectivity from dynamical correlations must in general take both factors into account.
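
    Canonical textbook shapes are often used to stand in for the two PRC classes: a purely non-negative curve for type 1 and a biphasic curve for type 2. The sketch below tabulates those canonical forms; the normalizations are illustrative and not fitted to the neuron models used in the paper.

```python
import numpy as np

phi = np.linspace(0.0, 1.0, 9)          # phase of the perturbation within one firing cycle

# Canonical phase response curve shapes:
prc_type1 = 1.0 - np.cos(2 * np.pi * phi)     # type 1: phase advances only (always >= 0)
prc_type2 = -np.sin(2 * np.pi * phi)          # type 2: delays early in the cycle, advances late

for p, t1, t2 in zip(phi, prc_type1, prc_type2):
    print(f"phase {p:.3f}   type1 {t1:+.2f}   type2 {t2:+.2f}")
```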

  12. Efficient Transmission of Subthreshold Signals in Complex Networks of Spiking Neurons

    PubMed Central

    Torres, Joaquin J.; Elices, Irene; Marro, J.

    2015-01-01

    We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances—which naturally balances the network with excitatory and inhibitory synapses—and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase in which populations of neurons remain synchronized, an oscillatory phase in which transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations, including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psycho-physical experiments suggest. PMID:25799449

  13. Computational Modeling of Single Neuron Extracellular Electric Potentials and Network Local Field Potentials using LFPsim.

    PubMed

    Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam

    2016-01-01

    Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computations of LFPs allow the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point-source, line-source, and RC-based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFPs at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a and N2b waves, and in vivo T-C waves in the cerebellar granular layer. LFPsim also includes a simulation of a multi-electrode array of LFPs in network populations, to aid computational inference between biophysical activity in neural networks and the corresponding multi-unit activity resulting in extracellular and evoked LFP signals.
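
    The point-source approximation mentioned above assigns each compartment's membrane current a contribution phi = I / (4*pi*sigma*r) at an electrode a distance r away in a homogeneous medium of conductivity sigma. Below is a minimal sketch of that single-compartment contribution; LFPsim itself runs inside NEURON and also offers line-source and RC-filter variants, and the current, distance and conductivity values here are illustrative.

```python
import numpy as np

def point_source_potential(i_membrane, distance, sigma=0.3):
    """Extracellular potential (V) of one compartment treated as a point current source
    in an infinite homogeneous medium: phi = I / (4 * pi * sigma * r).
    sigma is the extracellular conductivity in S/m (0.3 S/m is a commonly used value)."""
    return i_membrane / (4.0 * np.pi * sigma * distance)

# A virtual electrode 50 micrometres from a compartment carrying 100 pA of membrane current.
phi = point_source_potential(i_membrane=100e-12, distance=50e-6)
print(f"contribution: {phi * 1e6:.3f} microvolts")

# An LFP-style signal is then the sum of such contributions over all compartments and all
# time steps, which is what tools like LFPsim assemble from NEURON's membrane currents.
```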

  14. [Neuronal and synaptic properties: fundamentals of network plasticity].

    PubMed

    Le Masson, G

    2000-02-01

    Neurons, within the nervous system, are organized into different neural networks through synaptic connections. Two fundamental components interact dynamically in these functional units. The first is the neurons themselves, which, far from being simple action potential generators, are capable of complex electrical integrative properties due to the various types, numbers, distributions and modulation of voltage-gated ionic channels. The second is the synapses, where a similar complexity and plasticity are found. Identifying both cellular and synaptic intrinsic properties is necessary to understand the links between neural network behavior and physiological function, and is a useful step towards better control of neurological diseases.

  15. Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function.

    PubMed

    Reimann, Michael W; Nolte, Max; Scolamiero, Martina; Turner, Katharine; Perin, Rodrigo; Chindemi, Giuseppe; Dłotko, Paweł; Levi, Ran; Hess, Kathryn; Markram, Henry

    2017-01-01

    The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. We have now come closer to describing such a link by taking the direction of synaptic transmission into account, constructing graphs of a network that reflect the direction of information flow, and analyzing these directed graphs using algebraic topology. Applying this approach to a local network of neurons in the neocortex revealed a remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.

  16. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.

  17. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647

  18. Qualitative validation of the reduction from two reciprocally coupled neurons to one self-coupled neuron in a respiratory network model.

    PubMed

    Dunmyre, Justin R

    2011-06-01

    The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.

  19. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  20. Synaptic Plasticity and Spike Synchronisation in Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.

    2017-12-01

    Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or to brain injury. In this review, we present our results on the effects of synaptic plasticity on neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. Excitation of the same order as inhibition yields an evolved network that presents the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology in which neurons are only sparsely connected to other neurons, also a typical topology of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both natures with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.
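
    For readers unfamiliar with the learning rule involved, the sketch below implements a generic pair-based STDP weight update of the kind that drives such topology changes; the learning rates, time constants and spike trains are illustrative assumptions, not parameters of the reviewed models.

```python
# Sketch of pair-based spike-timing-dependent plasticity (STDP):
# pre-before-post spike pairs potentiate a synapse, post-before-pre depress it.
# Parameter values are illustrative, not taken from the reviewed models.
import numpy as np

A_plus, A_minus = 0.01, 0.012      # learning rates for potentiation / depression
tau_plus = tau_minus = 20.0        # STDP time constants (ms)

def stdp_dw(pre_spikes, post_spikes):
    """Total weight change for one pre/post spike-train pair (times in ms)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:                                  # pre leads post -> LTP
                dw += A_plus * np.exp(-dt / tau_plus)
            elif dt < 0:                                # post leads pre -> LTD
                dw -= A_minus * np.exp(dt / tau_minus)
    return dw

pre = np.array([10.0, 50.0, 90.0])
post_causal = pre + 5.0            # post fires 5 ms after each pre spike
post_anticausal = pre - 5.0        # post fires 5 ms before each pre spike
print("causal pairing, dw =", stdp_dw(pre, post_causal))          # positive
print("anti-causal pairing, dw =", stdp_dw(pre, post_anticausal)) # negative
```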

  1. Beyond Critical Exponents in Neuronal Avalanches

    NASA Astrophysics Data System (ADS)

    Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin

    2011-03-01

    Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.
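
    A minimal sketch of the kind of model such studies compare with data is a branching process: each firing neuron triggers a random number of successors with mean equal to the branching ratio, and at the critical ratio of 1 the avalanche-size distribution is approximately a power law with exponent close to -3/2. The code below is a toy version with assumed parameters, not the authors' model.

```python
# Toy branching-process model of neuronal avalanches: each active unit
# activates a Poisson(sigma) number of units in the next step. At the
# critical branching ratio sigma = 1 avalanche sizes scale roughly as s^(-3/2).
import numpy as np

rng = np.random.default_rng(1)

def avalanche_size(sigma, max_size=10_000):
    """Run one avalanche of a branching process with branching ratio sigma."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(sigma, size=active).sum()
        size += active
    return size

sizes = np.array([avalanche_size(sigma=1.0) for _ in range(20_000)])

# Crude estimate of the power-law exponent over an intermediate size range.
counts = np.bincount(sizes)
s = np.arange(counts.size)
mask = (s >= 2) & (s <= 100) & (counts > 0)
slope, _ = np.polyfit(np.log(s[mask]), np.log(counts[mask]), 1)
print("estimated avalanche-size exponent:", round(slope, 2))   # roughly -1.5
```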

  2. Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.

    PubMed

    Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S

    2002-06-01

    Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
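
    A toy version of the proposed mechanism is sketched below: a single higher-order unit receiving tonotopic inputs and trained with a plain Hebbian rule becomes selective for the set of harmonically related channels that are repeatedly co-activated. The channel count, harmonic set, learning rate and normalisation rule are illustrative assumptions rather than details of the published model.

```python
# Toy Hebbian model of combination sensitivity: a higher-order unit driven by
# tonotopic channels strengthens exactly the channels that are repeatedly
# co-activated by the harmonics of a training sound. An Oja-style normalising
# Hebbian rule is used; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_channels = 40                     # tonotopic input channels
harmonics = [4, 8, 12, 16]          # channels excited by f0 and its harmonics
w = np.full(n_channels, 0.1)        # initial synaptic weights
eta = 0.05                          # learning rate

for trial in range(300):
    x = 0.05 * rng.random(n_channels)       # weak background activity
    x[harmonics] += 1.0                     # harmonic complex presented
    y = np.dot(w, x)                        # postsynaptic response
    w += eta * y * (x - y * w)              # Hebbian growth plus normalisation

print("mean weight on harmonic channels:    ", round(w[harmonics].mean(), 3))
print("mean weight on non-harmonic channels:", round(np.delete(w, harmonics).mean(), 3))

# After training the unit responds far more strongly when the harmonics occur
# together than to any single component, a crude form of harmonic grouping.
probe_all = np.zeros(n_channels); probe_all[harmonics] = 1.0
probe_one = np.zeros(n_channels); probe_one[harmonics[0]] = 1.0
print("response to full harmonic complex:", round(np.dot(w, probe_all), 3))
print("response to a single harmonic:    ", round(np.dot(w, probe_one), 3))
```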

  3. Modeling fluctuations in default-mode brain network using a spiking neural network.

    PubMed

    Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko

    2012-08-01

    Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range.

  4. An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons

    PubMed Central

    Li, Jing; Katori, Yuichi; Kohno, Takashi

    2012-01-01

    This paper presents a digital silicon neuronal network which simulates the nervous system of living creatures and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter release based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure, together with shift operations, to design a sufficiently large and complex network without excessive hardware resource cost. The network with 256 fully connected neurons is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish the task of data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases with the similarity between the input and the stored pattern. Synchronization of neurons is observed when a stored pattern is successfully retrieved. PMID:23269911

  5. Intracellular Mannose Binding Lectin Mediates Subcellular Trafficking of HIV-1 gp120 in Neurons

    PubMed Central

    Teodorof, C; Divakar, S; Soontornniyomkij, B; Achim, CL; Kaul, M; Singh, KK

    2014-01-01

    Human immunodeficiency virus-1 (HIV-1) enters the brain early during infection and leads to severe neuronal damage and central nervous system impairment. HIV-1 envelope glycoprotein 120 (gp120), a neurotoxin, undergoes intracellular trafficking and transport across neurons; however, the mechanisms of gp120 trafficking in neurons are unclear. Our results show that mannose binding lectin (MBL), which binds to the N-linked mannose residues on gp120, participates in intravesicular packaging of gp120 in neuronal subcellular organelles and also in subcellular trafficking of these vesicles in neuronal cells. Perinuclear MBL:gp120 vesicular complexes were observed and MBL facilitated the subcellular trafficking of gp120 via the endoplasmic reticulum (ER) and Golgi vesicles. The functional carbohydrate recognition domain of MBL was required for perinuclear organization, distribution and subcellular trafficking of MBL:gp120 vesicular complexes. Nocodazole, an agent that depolymerizes the microtubule network, abolished the trafficking of MBL:gp120 vesicles, suggesting that these vesicular complexes were transported along the microtubule network. Live cell imaging confirmed the association of the MBL:gp120 complexes with dynamic subcellular vesicles that underwent trafficking in neuronal soma and along the neurites. Thus, our findings suggest that intracellular MBL mediates subcellular trafficking and transport of viral glycoproteins in a microtubule-dependent mechanism in the neurons. PMID:24825317

  6. Intracellular mannose binding lectin mediates subcellular trafficking of HIV-1 gp120 in neurons.

    PubMed

    Teodorof, C; Divakar, S; Soontornniyomkij, B; Achim, C L; Kaul, M; Singh, K K

    2014-09-01

    Human immunodeficiency virus-1 (HIV-1) enters the brain early during infection and leads to severe neuronal damage and central nervous system impairment. HIV-1 envelope glycoprotein 120 (gp120), a neurotoxin, undergoes intracellular trafficking and transport across neurons; however, the mechanisms of gp120 trafficking in neurons are unclear. Our results show that mannose binding lectin (MBL), which binds to the N-linked mannose residues on gp120, participates in intravesicular packaging of gp120 in neuronal subcellular organelles and also in subcellular trafficking of these vesicles in neuronal cells. Perinuclear MBL:gp120 vesicular complexes were observed and MBL facilitated the subcellular trafficking of gp120 via the endoplasmic reticulum (ER) and Golgi vesicles. The functional carbohydrate recognition domain of MBL was required for perinuclear organization, distribution and subcellular trafficking of MBL:gp120 vesicular complexes. Nocodazole, an agent that depolymerizes the microtubule network, abolished the trafficking of MBL:gp120 vesicles, suggesting that these vesicular complexes were transported along the microtubule network. Live cell imaging confirmed the association of the MBL:gp120 complexes with dynamic subcellular vesicles that underwent trafficking in neuronal soma and along the neurites. Thus, our findings suggest that intracellular MBL mediates subcellular trafficking and transport of viral glycoproteins in a microtubule-dependent mechanism in the neurons. Published by Elsevier Inc.

  7. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
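
    The basic building block of such models, a conductance-based integrate-and-fire neuron driven by excitatory and inhibitory synaptic conductances, can be sketched in a few lines; the parameter values below are generic textbook choices, not those of the 4,000-neuron model described.

```python
# Generic conductance-based leaky integrate-and-fire neuron (Euler integration).
# Parameter values are standard textbook choices, not those of the V1 model.
import numpy as np

dt, T = 0.1, 500.0                     # time step and duration (ms)
steps = int(T / dt)
C, g_L = 1.0, 0.05                     # capacitance and leak conductance
E_L, E_e, E_i = -70.0, 0.0, -80.0      # leak / excitatory / inhibitory reversals (mV)
V_th, V_reset = -55.0, -70.0           # spike threshold and reset (mV)

rng = np.random.default_rng(3)
# Fluctuating excitatory ("LGN plus cortex") and inhibitory synaptic conductances.
g_e = np.clip(0.035 + 0.02 * rng.normal(size=steps), 0.0, None)
g_i = np.clip(0.040 + 0.02 * rng.normal(size=steps), 0.0, None)

V, spike_times = E_L, []
for k in range(steps):
    dV = (-g_L * (V - E_L) - g_e[k] * (V - E_e) - g_i[k] * (V - E_i)) / C
    V += dt * dV
    if V >= V_th:                      # threshold crossing: emit spike and reset
        spike_times.append(k * dt)
        V = V_reset

rate = 1000.0 * len(spike_times) / T
print(f"{len(spike_times)} spikes in {T:.0f} ms ({rate:.1f} Hz)")
```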

  8. Noise focusing and the emergence of coherent activity in neuronal cultures

    NASA Astrophysics Data System (ADS)

    Orlandi, Javier G.; Soriano, Jordi; Alvarez-Lacalle, Enrique; Teller, Sara; Casademunt, Jaume

    2013-09-01

    At early stages of development, neuronal cultures in vitro spontaneously reach a coherent state of collective firing in a pattern of nearly periodic global bursts. Although understanding the spontaneous activity of neuronal networks is of chief importance in neuroscience, the origin and nature of that pulsation has remained elusive. By combining high-resolution calcium imaging with modelling in silico, we show that this behaviour is controlled by the propagation of waves that nucleate randomly in a set of points that is specific to each culture and is selected by a non-trivial interplay between dynamics and topology. The phenomenon is explained by the noise focusing effect--a strong spatio-temporal localization of the noise dynamics that originates in the complex structure of avalanches of spontaneous activity. Results are relevant to neuronal tissues and to complex networks with integrate-and-fire dynamics and metric correlations, for instance, in rumour spreading on social networks.

  9. Activity of cardiorespiratory networks revealed by transsynaptic virus expressing GFP.

    PubMed

    Irnaten, M; Neff, R A; Wang, J; Loewy, A D; Mettenleiter, T C; Mendelowitz, D

    2001-01-01

    A fluorescent transneuronal marker capable of labeling individual neurons in a central network while maintaining their normal physiology would permit functional studies of neurons within entire networks responsible for complex behaviors such as cardiorespiratory reflexes. The Bartha strain of pseudorabies virus (PRV), an attenuated swine alpha herpesvirus, can be used as a transsynaptic marker of neural circuits. Bartha PRV invades neuronal networks in the CNS through peripherally projecting axons, replicates in these parent neurons, and then travels transsynaptically to continue labeling the second- and higher-order neurons in a time-dependent manner. A Bartha PRV mutant that expresses green fluorescent protein (GFP) was used to visualize and record from neurons that determine the vagal motor outflow to the heart. Here we show that Bartha PRV-GFP-labeled neurons retain their normal electrophysiological properties and that the labeled baroreflex pathways that control heart rate are unaltered by the virus. This novel transynaptic virus permits in vitro studies of identified neurons within functionally defined neuronal systems including networks that mediate cardiovascular and respiratory function and interactions. We also demonstrate superior laryngeal motorneurons fire spontaneously and synapse on cardiac vagal neurons in the nucleus ambiguus. This cardiorespiratory pathway provides a neural basis of respiratory sinus arrhythmias.

  10. Speed and segmentation control mechanisms characterized in rhythmically-active circuits created from spinal neurons produced from genetically-tagged embryonic stem cells

    PubMed Central

    Sternfeld, Matthew J; Hinckley, Christopher A; Moore, Niall J; Pankratz, Matthew T; Hilde, Kathryn L; Driscoll, Shawn P; Hayashi, Marito; Amin, Neal D; Bonanomi, Dario; Gifford, Wesley D; Sharma, Kamal; Goulding, Martyn; Pfaff, Samuel L

    2017-01-01

    Flexible neural networks, such as the interconnected spinal neurons that control distinct motor actions, can switch their activity to produce different behaviors. Both excitatory (E) and inhibitory (I) spinal neurons are necessary for motor behavior, but the influence of recruiting different ratios of E-to-I cells remains unclear. We constructed synthetic microphysical neural networks, called circuitoids, using precise combinations of spinal neuron subtypes derived from mouse stem cells. Circuitoids of purified excitatory interneurons were sufficient to generate oscillatory bursts with properties similar to in vivo central pattern generators. Inhibitory V1 neurons provided dual layers of regulation within excitatory rhythmogenic networks - they increased the rhythmic burst frequency of excitatory V3 neurons, and segmented excitatory motor neuron activity into sub-networks. Accordingly, the speed and pattern of spinal circuits that underlie complex motor behaviors may be regulated by quantitatively gating the intra-network cellular activity ratio of E-to-I neurons. DOI: http://dx.doi.org/10.7554/eLife.21540.001 PMID:28195039

  11. Experiments in clustered neuronal networks: A paradigm for complex modular dynamics

    NASA Astrophysics Data System (ADS)

    Teller, Sara; Soriano, Jordi

    2016-06-01

    Uncovering the interplay between activity and connectivity is one of the major challenges in neuroscience. To deepen our understanding of how a neuronal circuit shapes network dynamics, neuronal cultures have emerged as remarkable systems given their accessibility and ease of manipulation. An attractive configuration of these in vitro systems consists of an ensemble of interconnected clusters of neurons. Using calcium fluorescence imaging to monitor spontaneous activity in these clustered neuronal networks, we were able to draw functional maps and reveal their topological features. We also observed that these networks exhibit a hierarchical modular dynamics, in which clusters fire in small groups that shape characteristic communities in the network. The structure and stability of these communities are sensitive to chemical or physical action, and therefore their analysis may serve as a proxy for network health. Indeed, the combination of all these approaches is helping to develop models to quantify damage upon network degradation, with promising applications for the study of neurological disorders in vitro.

  12. Mixed-mode oscillations and population bursting in the pre-Bötzinger complex

    PubMed Central

    Bacak, Bartholomew J; Kim, Taegyo; Smith, Jeffrey C; Rubin, Jonathan E; Rybak, Ilya A

    2016-01-01

    This study focuses on computational and theoretical investigations of neuronal activity arising in the pre-Bötzinger complex (pre-BötC), a medullary region generating the inspiratory phase of breathing in mammals. A progressive increase of neuronal excitability in medullary slices containing the pre-BötC produces mixed-mode oscillations (MMOs) characterized by large amplitude population bursts alternating with a series of small amplitude bursts. Using two different computational models, we demonstrate that MMOs emerge within a heterogeneous excitatory neural network because of progressive neuronal recruitment and synchronization. The MMO pattern depends on the distributed neuronal excitability, the density and weights of network interconnections, and the cellular properties underlying endogenous bursting. Critically, the latter should provide a reduction of spiking frequency within neuronal bursts with increasing burst frequency and a dependence of the after-burst recovery period on burst amplitude. Our study highlights a novel mechanism by which heterogeneity naturally leads to complex dynamics in rhythmic neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.13403.001 PMID:26974345

  13. Population equations for degree-heterogeneous neural networks

    NASA Astrophysics Data System (ADS)

    Kähne, M.; Sokolov, I. M.; Rüdiger, S.

    2017-11-01

    We develop a statistical framework for studying recurrent networks with broad distributions of the number of synaptic links per neuron. We treat each group of neurons with equal input degree as one population and derive a system of equations determining the population-averaged firing rates. The derivation rests on an assumption of a large number of neurons and, additionally, an assumption of a large number of synapses per neuron. For the case of binary neurons, analytical solutions can be constructed, which correspond to steps in the activity versus degree space. We apply this theory to networks with degree-correlated topology and show that complex, multi-stable regimes can result for increasing correlations. Our work is motivated by the recent finding of subnetworks of highly active neurons and the fact that these neurons tend to be connected to each other with higher probability.
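
    A minimal numerical sketch of the population approach is given below, under simplifying assumptions that are not those of the paper: binary neurons, purely excitatory coupling, a uniform in-degree distribution and a smooth threshold nonlinearity. Grouping neurons by in-degree k and iterating the self-consistency equation for the group-averaged rates produces the step-like activity-versus-degree profile mentioned in the abstract.

```python
# Fixed-point iteration for population-averaged rates in a degree-heterogeneous
# network of binary neurons: neurons are grouped by in-degree k, and the
# mean-field input to group k is J * k * <r>. Toy, simplified setting.
import numpy as np

degrees = np.arange(1, 101)                       # in-degree classes k = 1..100
p_k = np.full(degrees.size, 1.0 / degrees.size)   # broad (here uniform) degree dist.
J, theta, beta = 0.05, 1.0, 8.0                   # coupling, threshold, gain

def f(h):
    """Smooth threshold nonlinearity (activation probability of a binary unit)."""
    return 1.0 / (1.0 + np.exp(-beta * (h - theta)))

r_k = np.full(degrees.size, 0.5)                  # initial guess for group rates
for _ in range(1000):
    mean_rate = np.dot(p_k, r_k)                  # network-averaged activity <r>
    r_k_new = f(J * degrees * mean_rate)          # group k receives ~k recurrent inputs
    if np.max(np.abs(r_k_new - r_k)) < 1e-12:
        break
    r_k = r_k_new

# Groups with in-degree above roughly theta / (J * <r>) sit near rate 1, the
# rest near 0: a step-like activity-versus-degree profile. Starting near zero
# activity instead converges to a low-rate fixed point, a hint of the
# multistability discussed in the abstract.
for k in (5, 30, 80):
    print(f"k = {k:3d}  rate = {r_k[k - 1]:.3f}")
```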

  14. Genetic strategies to investigate neuronal circuit properties using stem cell-derived neurons

    PubMed Central

    Garcia, Isabella; Kim, Cynthia; Arenkiel, Benjamin R.

    2012-01-01

    The mammalian brain is anatomically and functionally complex, and prone to diverse forms of injury and neuropathology. Scientists have long strived to develop cell replacement therapies to repair damaged and diseased nervous tissue. However, this goal has remained unrealized for various reasons, including nascent knowledge of neuronal development, the inability to track and manipulate transplanted cells within complex neuronal networks, and host graft rejection. Recent advances in embryonic stem cell (ESC) and induced pluripotent stem cell (iPSC) technology, alongside novel genetic strategies to mark and manipulate stem cell-derived neurons, now provide unprecedented opportunities to investigate complex neuronal circuits in both healthy and diseased brains. Here, we review current technologies aimed at generating and manipulating neurons derived from ESCs and iPSCs toward investigation and manipulation of complex neuronal circuits, ultimately leading to the design and development of novel cell-based therapeutic approaches. PMID:23264761

  15. Interfacing 3D Engineered Neuronal Cultures to Micro-Electrode Arrays: An Innovative In Vitro Experimental Model.

    PubMed

    Tedesco, Mariateresa; Frega, Monica; Martinoia, Sergio; Pesce, Mattia; Massobrio, Paolo

    2015-10-18

    Currently, large-scale networks derived from dissociated neurons growing and developing in vitro on extracellular micro-transducer devices are the gold-standard experimental model to study basic neurophysiological mechanisms involved in the formation and maintenance of neuronal cell assemblies. However, in vitro studies have been limited to the recording of the electrophysiological activity generated by bi-dimensional (2D) neural networks. Nonetheless, given the intricate relationship between structure and dynamics, a significant improvement is necessary to investigate the formation and the developing dynamics of three-dimensional (3D) networks. In this work, a novel experimental platform in which 3D hippocampal or cortical networks are coupled to planar Micro-Electrode Arrays (MEAs) is presented. 3D networks are realized by seeding neurons in a scaffold constituted of glass microbeads (30-40 µm in diameter) on which neurons are able to grow and form complex interconnected 3D assemblies. In this way, it is possible to design engineered 3D networks made up of 5-8 layers with an expected final cell density. The increasing complexity in the morphological organization of the 3D assembly induces an enhancement of the electrophysiological patterns displayed by this type of network. Compared with the standard 2D networks, where highly stereotyped bursting activity emerges, the 3D structure alters the bursting activity in terms of duration and frequency, and allows the observation of more random spiking activity. In this sense, the developed 3D model more closely resembles in vivo neural networks.

  16. Interfacing 3D Engineered Neuronal Cultures to Micro-Electrode Arrays: An Innovative In Vitro Experimental Model

    PubMed Central

    Tedesco, Mariateresa; Frega, Monica; Martinoia, Sergio; Pesce, Mattia; Massobrio, Paolo

    2015-01-01

    Currently, large-scale networks derived from dissociated neurons growing and developing in vitro on extracellular micro-transducer devices are the gold-standard experimental model to study basic neurophysiological mechanisms involved in the formation and maintenance of neuronal cell assemblies. However, in vitro studies have been limited to the recording of the electrophysiological activity generated by bi-dimensional (2D) neural networks. Nonetheless, given the intricate relationship between structure and dynamics, a significant improvement is necessary to investigate the formation and the developing dynamics of three-dimensional (3D) networks. In this work, a novel experimental platform in which 3D hippocampal or cortical networks are coupled to planar Micro-Electrode Arrays (MEAs) is presented. 3D networks are realized by seeding neurons in a scaffold constituted of glass microbeads (30-40 µm in diameter) on which neurons are able to grow and form complex interconnected 3D assemblies. In this way, it is possible to design engineered 3D networks made up of 5-8 layers with an expected final cell density. The increasing complexity in the morphological organization of the 3D assembly induces an enhancement of the electrophysiological patterns displayed by this type of network. Compared with the standard 2D networks, where highly stereotyped bursting activity emerges, the 3D structure alters the bursting activity in terms of duration and frequency, and allows the observation of more random spiking activity. In this sense, the developed 3D model more closely resembles in vivo neural networks. PMID:26554533

  17. Networks within networks: The neuronal control of breathing

    PubMed Central

    Garcia, Alfredo J.; Zanella, Sebastien; Koch, Henner; Doi, Atsushi; Ramirez, Jan-Marino

    2013-01-01

    Breathing emerges through complex network interactions involving neurons distributed throughout the nervous system. The respiratory rhythm generating network is composed of micro networks functioning within larger networks to generate distinct rhythms and patterns that characterize breathing. The pre-Bötzinger complex, a rhythm generating network located within the ventrolateral medulla assumes a core function without which respiratory rhythm generation and breathing cease altogether. It contains subnetworks with distinct synaptic and intrinsic membrane properties that give rise to different types of respiratory rhythmic activities including eupneic, sigh, and gasping activities. While critical aspects of these rhythmic activities are preserved when isolated in in vitro preparations, the pre-Bötzinger complex functions in the behaving animal as part of a larger network that receives important inputs from areas such as the pons and parafacial nucleus. The respiratory network is also an integrator of modulatory and sensory inputs that imbue the network with the important ability to adapt to changes in the behavioral, metabolic, and developmental conditions of the organism. This review summarizes our current understanding of these interactions and relates the emerging concepts to insights gained in other rhythm generating networks. PMID:21333801

  18. A novel enteric neuron-glia coculture system reveals the role of glia in neuronal development.

    PubMed

    Le Berre-Scoul, Catherine; Chevalier, Julien; Oleynikova, Elena; Cossais, François; Talon, Sophie; Neunlist, Michel; Boudin, Hélène

    2017-01-15

    Unlike astrocytes in the brain, the potential role of enteric glial cells (EGCs) in the formation of the enteric neuronal circuit is currently unknown. To examine the role of EGCs in the formation of the neuronal network, we developed a novel neuron-enriched culture model from embryonic rat intestine grown in indirect coculture with EGCs. We found that EGCs shape axonal complexity and synapse density in enteric neurons, through purinergic- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor regulating neuronal network maturation. In the nervous system, the formation of neuronal circuitry results from a complex and coordinated action of intrinsic and extrinsic factors. In the CNS, extrinsic mediators derived from astrocytes have been shown to play a key role in neuronal maturation, including dendritic shaping, axon guidance and synaptogenesis. In the enteric nervous system (ENS), the potential role of enteric glial cells (EGCs) in the maturation of developing enteric neuronal circuit is currently unknown. A major obstacle in addressing this question is the difficulty in obtaining a valuable experimental model in which enteric neurons could be isolated and maintained without EGCs. We adapted a cell culture method previously developed for CNS neurons to establish a neuron-enriched primary culture from embryonic rat intestine which was cultured in indirect coculture with EGCs. We demonstrated that enteric neurons grown in such conditions showed several structural, phenotypic and functional hallmarks of proper development and maturation. However, when neurons were grown without EGCs, the complexity of the axonal arbour and the density of synapses were markedly reduced, suggesting that glial-derived factors contribute strongly to the formation of the neuronal circuitry. We found that these effects played by EGCs were mediated in part through purinergic P2Y1 receptor- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor required for neuronal network maturation. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  19. Network complexity and synchronous behavior--an experimental approach.

    PubMed

    Neefs, P J; Steur, E; Nijmeijer, H

    2010-06-01

    We discuss synchronization in networks of Hindmarsh-Rose neurons that are interconnected via gap junctions, also known as electrical synapses. We present theoretical results for interactions without time-delay. These results are supported by experiments with a setup consisting of sixteen electronic equivalents of the Hindmarsh-Rose neuron. We show experimental results of networks where time-delay on the interaction is taken into account. We discuss in particular the influence of the network topology on the synchronization.
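
    The sketch below simulates the basic setup without time-delay: two identical Hindmarsh-Rose neurons coupled through a diffusive, gap-junction-like term. The parameter set is the standard one for bursting dynamics, while the coupling strength, integration scheme and synchronisation check are illustrative choices.

```python
# Two identical Hindmarsh-Rose neurons coupled by a diffusive, gap-junction-like
# term k*(x_other - x_self) in the membrane equation. Standard bursting
# parameters; coupling strength and initial conditions are illustrative.
import numpy as np

a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I = 0.006, 4.0, -1.6, 3.0
k = 1.0                                   # gap-junction (electrical) coupling
dt, T = 0.01, 2000.0
steps = int(T / dt)

state = np.array([[0.1, 0.0, 0.0],        # neuron 1: (x, y, z)
                  [-0.5, 0.5, 0.1]])      # neuron 2 starts elsewhere

def derivatives(st):
    x, y, z = st[:, 0], st[:, 1], st[:, 2]
    coupling = k * (x[::-1] - x)          # each neuron feels the other's voltage
    dx = y + b * x**2 - a * x**3 - z + I + coupling
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    return np.stack([dx, dy, dz], axis=1)

diffs = []
for step in range(steps):
    state = state + dt * derivatives(state)       # simple Euler step
    if step * dt > T / 2:                         # discard the transient
        diffs.append(abs(state[0, 0] - state[1, 0]))

# A small value indicates (near-)complete synchronisation of the pair.
print("mean |x1 - x2| after transient:", round(float(np.mean(diffs)), 4))
```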

  20. Towards Reproducible Descriptions of Neuronal Network Models

    PubMed Central

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  1. The many faces of REST oversee epigenetic programming of neuronal genes.

    PubMed

    Ballas, Nurit; Mandel, Gail

    2005-10-01

    Nervous system development relies on a complex signaling network to engineer the orderly transitions that lead to the acquisition of a neural cell fate. Progression from the non-neuronal pluripotent stem cell to a restricted neural lineage is characterized by distinct patterns of gene expression, particularly the restriction of neuronal gene expression to neurons. Concurrently, cells outside the nervous system acquire and maintain a non-neuronal fate that permanently excludes expression of neuronal genes. Studies of the transcriptional repressor REST, which regulates a large network of neuronal genes, provide a paradigm for elucidating the link between epigenetic mechanisms and neurogenesis. REST orchestrates a set of epigenetic modifications that are distinct between non-neuronal cells that give rise to neurons and those that are destined to remain as nervous system outsiders.

  2. Cascaded VLSI Chips Help Neural Network To Learn

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher; Thakoor, Anilkumar P.

    1993-01-01

    Cascading provides 12-bit resolution needed for learning. Using conventional silicon chip fabrication technology of VLSI, fully connected architecture consisting of 32 wide-range, variable gain, sigmoidal neurons along one diagonal and 7-bit resolution, electrically programmable, synaptic 32 x 31 weight matrix implemented on neuron-synapse chip. To increase weight nominally from 7 to 13 bits, synapses on chip individually cascaded with respective synapses on another 32 x 32 matrix chip with 7-bit resolution synapses only (without neurons). Cascade correlation algorithm varies number of layers effectively connected into network; adds hidden layers one at a time during learning process in such way as to optimize overall number of neurons and complexity and configuration of network.

  3. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown, and their application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  4. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown, and their application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  5. The preBötzinger complex as a hub for network activity along the ventral respiratory column in the neonate rat.

    PubMed

    Gourévitch, Boris; Mellen, Nicholas

    2014-09-01

    In vertebrates, respiratory control is ascribed to heterogeneous respiration-modulated neurons along the Ventral Respiratory Column (VRC) in medulla, which includes the preBötzinger Complex (preBötC), the putative respiratory rhythm generator. Here, the functional anatomy of the VRC was characterized via optical recordings in the sagittaly sectioned neonate rat hindbrain, at sampling rates permitting coupling estimation between neuron pairs, so that each neuron was described using unitary, neuron-system, and coupling attributes. Structured coupling relations in local networks, significantly oriented coupling in the peri-inspiratory interval detected in pooled data, and significant correlations between firing rate and expiratory duration in subsets of neurons revealed network regulation at multiple timescales. Spatially averaged neuronal attributes, including coupling vectors, revealed a sharp boundary at the rostral margin of the preBötC, as well as other functional anatomical features congruent with identified structures, including the parafacial respiratory group and the nucleus ambiguus. Cluster analysis of attributes identified two spatially compact, homogenous groups: the first overlapped with the preBötC, and was characterized by strong respiratory modulation and dense bidirectional coupling with itself and other groups, consistent with a central role for the preBötC in respiratory control; the second lay between preBötC and the facial nucleus, and was characterized by weak respiratory modulation and weak coupling with other respiratory neurons, which is congruent with cardiovascular regulatory networks that are found in this region. Other groups identified using cluster analysis suggested that networks along VRC regulated expiratory duration, and the transition to and from inspiration, but these groups were heterogeneous and anatomically dispersed. Thus, by recording local networks in parallel, this study found evidence for respiratory regulation at multiple timescales along the VRC, as well as a role for the preBötC in the integration of functionally disparate respiratory neurons. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    PubMed

    Schmitt, Michael

    2004-09-01

    We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.
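
    Stated schematically, and hedging that W below denotes the number of adjustable network parameters, that constants and logarithmic factors are suppressed, and that the last line is a generic uniform-convergence bound from learning theory rather than a result of this paper:

```latex
% Schematic restatement, not verbatim from the paper. W = number of adjustable
% parameters, Pdim = pseudodimension, m = number of training examples.
\begin{align*}
  \mathrm{Pdim}\bigl(\mathcal{N}_{\mathrm{const.\,depth}}\bigr)
      &= \tilde{O}(W)      && \text{(``almost linear'')}\\
  \mathrm{Pdim}\bigl(\mathcal{N}_{\mathrm{unconstr.\,depth}}\bigr)
      &= \Theta(W^{2})     && \text{(``quadratic'')}\\
  m(\varepsilon,\delta)
      &= O\!\left(\tfrac{1}{\varepsilon^{2}}
         \bigl(\mathrm{Pdim}\,\log\tfrac{1}{\varepsilon}
         + \log\tfrac{1}{\delta}\bigr)\right)
      && \text{(generic uniform-convergence bound)}
\end{align*}
```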

  7. Gene expression links functional networks across cortex and striatum.

    PubMed

    Anderson, Kevin M; Krienen, Fenna M; Choi, Eun Young; Reinen, Jenna M; Yeo, B T Thomas; Holmes, Avram J

    2018-04-12

    The human brain is comprised of a complex web of functional networks that link anatomically distinct regions. However, the biological mechanisms supporting network organization remain elusive, particularly across cortical and subcortical territories with vastly divergent cellular and molecular properties. Here, using human and primate brain transcriptional atlases, we demonstrate that spatial patterns of gene expression show strong correspondence with limbic and somato/motor cortico-striatal functional networks. Network-associated expression is consistent across independent human datasets and evolutionarily conserved in non-human primates. Genes preferentially expressed within the limbic network (encompassing nucleus accumbens, orbital/ventromedial prefrontal cortex, and temporal pole) relate to risk for psychiatric illness, chloride channel complexes, and markers of somatostatin neurons. Somato/motor associated genes are enriched for oligodendrocytes and markers of parvalbumin neurons. These analyses indicate that parallel cortico-striatal processing channels possess dissociable genetic signatures that recapitulate distributed functional networks, and nominate molecular mechanisms supporting cortico-striatal circuitry in health and disease.

  8. Singularities of Three-Layered Complex-Valued Neural Networks With Split Activation Function.

    PubMed

    Kobayashi, Masaki

    2018-05-01

    There are three important concepts related to learning processes in neural networks: reducibility, nonminimality, and singularity. Although the definitions of these three concepts differ, they are equivalent in real-valued neural networks. This is also true of complex-valued neural networks (CVNNs) with hidden neurons not employing biases. The situation of CVNNs with hidden neurons employing biases, however, is very complicated. Exceptional reducibility was found, and it was shown that reducibility and nonminimality are not the same. Irreducibility consists of minimality and exceptional reducibility. The relationship between minimality and singularity has not yet been established. In this paper, we describe our surprising finding that minimality and singularity are independent. We also provide several examples based on exceptional reducibility.
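
    As a concrete reference point, a split activation function applies a real-valued nonlinearity separately to the real and imaginary parts of each neuron's net input. The snippet below sketches the forward pass of a three-layered complex-valued network of this kind with hidden-neuron biases; layer sizes and random weights are illustrative assumptions.

```python
# Forward pass of a three-layered complex-valued neural network with a
# "split" activation: tanh applied separately to real and imaginary parts.
# Layer sizes and random weights are illustrative.
import numpy as np

rng = np.random.default_rng(5)

def split_tanh(z):
    """Split activation: act on Re(z) and Im(z) independently."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def complex_init(shape):
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

n_in, n_hidden, n_out = 3, 4, 2
W1, b1 = complex_init((n_hidden, n_in)), complex_init(n_hidden)   # hidden biases
W2, b2 = complex_init((n_out, n_hidden)), complex_init(n_out)

def forward(x):
    h = split_tanh(W1 @ x + b1)        # hidden layer (with biases)
    return W2 @ h + b2                 # linear complex output layer

x = complex_init(n_in)
print(forward(x))
```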

  9. Topographical maps as complex networks

    NASA Astrophysics Data System (ADS)

    da Fontoura Costa, Luciano; Diambra, Luis

    2005-02-01

    The neuronal networks in the mammalian cortex are characterized by the coexistence of hierarchy, modularity, short and long range interactions, spatial correlations, and topographical connections. Particularly interesting, the latter type of organization implies special demands on developing systems in order to achieve precise maps preserving spatial adjacencies, even at the expense of isometry. Although the object of intensive biological research, the elucidation of the main anatomic-functional purposes of the ubiquitous topographical connections in the mammalian brain remains an elusive issue. The present work reports on how recent results from complex network formalism can be used to quantify and model the effect of topographical connections between neuronal cells over the connectivity of the network. While the topographical mapping between two cortical modules is achieved by connecting nearest cells from each module, four kinds of network models are adopted for implementing intramodular connections, including random, preferential-attachment, short-range, and long-range networks. It is shown that, though spatially uniform and simple, topographical connections between modules can lead to major changes in the network properties in some specific cases, depending on intramodular connections schemes, fostering more effective intercommunication between the involved neuronal cells and modules. The possible implications of such effects on cortical operation are discussed.
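
    The construction can be sketched as follows (sizes, densities and the use of a Watts-Strogatz graph as a stand-in for short- and long-range intramodular wiring are assumptions): build two intramodular networks with one of the schemes mentioned, join them by topographical one-to-one links between corresponding cells, and compare a global connectivity measure across schemes.

```python
# Two cortical "modules" built with a chosen intramodular wiring scheme and
# joined by topographical (same-index) connections. Module size, densities and
# the Watts-Strogatz stand-in for short/long-range wiring are assumptions.
import networkx as nx

N = 100   # neurons per module

def module(scheme, seed):
    if scheme == "random":
        return nx.erdos_renyi_graph(N, p=0.04, seed=seed)
    if scheme == "preferential":
        return nx.barabasi_albert_graph(N, m=2, seed=seed)
    if scheme == "short_range":
        return nx.watts_strogatz_graph(N, k=4, p=0.0, seed=seed)   # ring lattice
    if scheme == "long_range":
        return nx.watts_strogatz_graph(N, k=4, p=0.3, seed=seed)   # rewired links
    raise ValueError(scheme)

for scheme in ("random", "preferential", "short_range", "long_range"):
    g = nx.disjoint_union(module(scheme, 1), module(scheme, 2))   # modules A and B
    g.add_edges_from((i, i + N) for i in range(N))   # topographic one-to-one map
    if nx.is_connected(g):
        print(f"{scheme:13s} average shortest path: "
              f"{nx.average_shortest_path_length(g):.2f}")
    else:
        print(f"{scheme:13s} graph not connected; path length skipped")
```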

  10. Effects of Morphology Constraint on Electrophysiological Properties of Cortical Neurons

    NASA Astrophysics Data System (ADS)

    Zhu, Geng; Du, Liping; Jin, Lei; Offenhäusser, Andreas

    2016-04-01

    There is growing interest in engineering nerve cells in vitro to control the architecture and connectivity of cultured neuronal networks or to build neuronal networks with predictable computational function. Pattern technologies, such as micro-contact printing, have been developed to design ordered neuronal networks. However, the electrophysiological characteristics of single patterned neurons have not been reported. Here, micro-contact printing, using polyolefine polymer (POP) stamps with high resolution, was employed to grow cortical neurons in a designed structure. The results demonstrated that the morphology of patterned neurons was well constrained, and the number of dendrites was reduced to about two. Our electrophysiological results showed that alterations of dendritic morphology affected the firing patterns of neurons and neural excitability. When stimulated by current, both patterned and un-patterned neurons presented regular spiking, but the dynamics and strength of the response were different. The un-patterned neurons exhibited a monotonically increasing firing frequency in response to injected current, while the patterned neurons first exhibited a frequency increase and then a slow decrease. Our findings indicate that the decrease in dendritic complexity of cortical neurons will influence their electrophysiological characteristics and alter their information processing activity, which could be considered when designing neuronal circuitries.

  11. A novel enteric neuron–glia coculture system reveals the role of glia in neuronal development

    PubMed Central

    Le Berre‐Scoul, Catherine; Chevalier, Julien; Oleynikova, Elena; Cossais, François; Talon, Sophie; Neunlist, Michel

    2016-01-01

    Key points Unlike astrocytes in the brain, the potential role of enteric glial cells (EGCs) in the formation of the enteric neuronal circuit is currently unknown. To examine the role of EGCs in the formation of the neuronal network, we developed a novel neuron‐enriched culture model from embryonic rat intestine grown in indirect coculture with EGCs. We found that EGCs shape axonal complexity and synapse density in enteric neurons, through purinergic‐ and glial cell line‐derived neurotrophic factor‐dependent pathways. Using a novel and valuable culture model to study enteric neuron–glia interactions, our study identified EGCs as a key cellular actor regulating neuronal network maturation. Abstract In the nervous system, the formation of neuronal circuitry results from a complex and coordinated action of intrinsic and extrinsic factors. In the CNS, extrinsic mediators derived from astrocytes have been shown to play a key role in neuronal maturation, including dendritic shaping, axon guidance and synaptogenesis. In the enteric nervous system (ENS), the potential role of enteric glial cells (EGCs) in the maturation of developing enteric neuronal circuit is currently unknown. A major obstacle in addressing this question is the difficulty in obtaining a valuable experimental model in which enteric neurons could be isolated and maintained without EGCs. We adapted a cell culture method previously developed for CNS neurons to establish a neuron‐enriched primary culture from embryonic rat intestine which was cultured in indirect coculture with EGCs. We demonstrated that enteric neurons grown in such conditions showed several structural, phenotypic and functional hallmarks of proper development and maturation. However, when neurons were grown without EGCs, the complexity of the axonal arbour and the density of synapses were markedly reduced, suggesting that glial‐derived factors contribute strongly to the formation of the neuronal circuitry. We found that these effects played by EGCs were mediated in part through purinergic P2Y1 receptor‐ and glial cell line‐derived neurotrophic factor‐dependent pathways. Using a novel and valuable culture model to study enteric neuron–glia interactions, our study identified EGCs as a key cellular actor required for neuronal network maturation. PMID:27436013

  12. Weak connections form an infinite number of patterns in the brain

    NASA Astrophysics Data System (ADS)

    Ren, Hai-Peng; Bai, Chao; Baptista, Murilo S.; Grebogi, Celso

    2017-04-01

    Recently, much attention has been paid to interpreting the mechanisms for memory formation in terms of brain connectivity and dynamics. Within the plethora of collective states a complex network can exhibit, we show that the phenomenon of Collective Almost Synchronisation (CAS), which describes a state with an infinite number of patterns emerging in complex networks for weak coupling strengths, deserves special attention. We show that a simulated neuron network with neurons weakly connected does produce CAS patterns, and additionally produces an output that optimally model experimental electroencephalograph (EEG) signals. This work provides strong evidence that the brain operates locally in a CAS regime, allowing it to have an unlimited number of dynamical patterns, a state that could explain the enormous memory capacity of the brain, and that would give support to the idea that local clusters of neurons are sufficiently decorrelated to independently process information locally.

  13. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    PubMed

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C+ + version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
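
    The flavour of the formalism, though emphatically not the API of the published CSA implementation, can be conveyed by treating connection sets as composable predicates over (source, target) index pairs, as in the toy sketch below.

```python
# Toy illustration of the connection-set idea (NOT the API of the published
# CSA implementation): connection sets are predicates over (source, target)
# index pairs that can be combined with set operators and expanded on demand.
import random

def one_to_one(i, j):
    return i == j

def all_to_all(i, j):
    return True

def random_set(p, seed=0):
    rng = random.Random(seed)
    cache = {}
    def pred(i, j):                       # memoise so the set is well defined
        if (i, j) not in cache:
            cache[(i, j)] = rng.random() < p
        return cache[(i, j)]
    return pred

def union(a, b):
    return lambda i, j: a(i, j) or b(i, j)

def intersection(a, b):
    return lambda i, j: a(i, j) and b(i, j)

def connections(cset, sources, targets):
    """Expand a connection set over concrete index ranges."""
    return [(i, j) for i in sources for j in targets if cset(i, j)]

# Example: self-connections plus a sparse random background.
cset = union(one_to_one, random_set(p=0.05, seed=42))
print(connections(cset, range(5), range(5)))
```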

  14. A distance constrained synaptic plasticity model of C. elegans neuronal network

    NASA Astrophysics Data System (ADS)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry for principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available till date, presents an incredible opportunity to learn basic governing principles that drive structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated to sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of C. elegans neuronal system to probe the role of features that confer controllability and small world nature. The simple 1D ring model is critically poised for the number of feed forward motifs, neuronal clustering and characteristic path-length in response to synaptic rewiring, indicating optimal rewiring. Using empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains small world nature, saturation of feed forward motifs as well as observed number of driver neurons. The distance constrained model suggests optimum long distance synaptic connections as a key feature specifying control of the network.

  15. Qualitative-Modeling-Based Silicon Neurons and Their Networks

    PubMed Central

    Kohno, Takashi; Sekikawa, Munehisa; Li, Jing; Nanami, Takuya; Aihara, Kazuyuki

    2016-01-01

    The ionic conductance models of neuronal cells can finely reproduce a wide variety of complex neuronal activities. However, the complexity of these models has prompted the development of qualitative neuron models. They are described by differential equations with a reduced number of variables and their low-dimensional polynomials, which retain the core mathematical structures. Such simple models form the foundation of a bottom-up approach in computational and theoretical neuroscience. We proposed a qualitative-modeling-based approach for designing silicon neuron circuits, in which the mathematical structures in the polynomial-based qualitative models are reproduced by differential equations with silicon-native expressions. This approach can realize low-power-consuming circuits that can be configured to realize various classes of neuronal cells. In this article, our qualitative-modeling-based silicon neuron circuits for analog and digital implementations are quickly reviewed. One of our CMOS analog silicon neuron circuits can realize a variety of neuronal activities with a power consumption less than 72 nW. The square-wave bursting mode of this circuit is explained. Another circuit can realize Class I and II neuronal activities with about 3 nW. Our digital silicon neuron circuit can also realize these classes. An auto-associative memory realized on an all-to-all connected network of these silicon neurons is also reviewed, in which the neuron class plays important roles in its performance. PMID:27378842

  16. Microglomerular Synaptic Complexes in the Sky-Compass Network of the Honeybee Connect Parallel Pathways from the Anterior Optic Tubercle to the Central Complex.

    PubMed

    Held, Martina; Berz, Annuska; Hensgen, Ronja; Muenz, Thomas S; Scholl, Christina; Rössler, Wolfgang; Homberg, Uwe; Pfeiffer, Keram

    2016-01-01

    While the ability of honeybees to navigate relying on sky-compass information has been investigated in a large number of behavioral studies, the underlying neuronal system has so far received less attention. The sky-compass pathway has recently been described from its input region, the dorsal rim area (DRA) of the compound eye, to the anterior optic tubercle (AOTU). The aim of this study is to reveal the connection from the AOTU to the central complex (CX). For this purpose, we investigated the anatomy of large microglomerular synaptic complexes in the medial and lateral bulbs (MBUs/LBUs) of the lateral complex (LX). The synaptic complexes are formed by tubercle-lateral accessory lobe neuron 1 (TuLAL1) neurons of the AOTU and GABAergic tangential neurons of the central body's (CB) lower division (TL neurons). Both TuLAL1 and TL neurons strongly resemble neurons forming these complexes in other insect species. We further investigated the ultrastructure of these synaptic complexes using transmission electron microscopy. We found that single large presynaptic terminals of TuLAL1 neurons enclose many small profiles (SPs) of TL neurons. The synaptic connections between these neurons are established by two types of synapses: divergent dyads and divergent tetrads. Our data support the assumption that these complexes are a highly conserved feature in the insect brain and play an important role in reliable signal transmission within the sky-compass pathway.

  17. Cortical Specializations Underlying Fast Computations

    PubMed Central

    Volgushev, Maxim

    2016-01-01

    The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate the results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds, 1 to 3 ms, which is considerably shorter than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988

  18. Emergence of small-world structure in networks of spiking neurons through STDP plasticity.

    PubMed

    Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas

    2011-01-01

    In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
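
    As an illustrative sketch, the post-processing step described above, thresholding excitatory weights into a binary directed graph and computing clustering, characteristic path length and a small-world index against a random reference, might look roughly as follows; the toy weight matrix and threshold are assumed values, not the simulation output of the study.

        # Threshold a weight matrix, build a graph, and compare clustering and
        # path length against an edge-matched random graph (small-world index).
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        W = rng.random((100, 100)) * (rng.random((100, 100)) < 0.1)  # toy weights
        A = (W > 0.5).astype(int)
        np.fill_diagonal(A, 0)
        G = nx.from_numpy_array(A, create_using=nx.DiGraph).to_undirected()

        def char_path_length(H):
            # characteristic path length on the largest connected component
            comp = max(nx.connected_components(H), key=len)
            return nx.average_shortest_path_length(H.subgraph(comp))

        R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
        C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
        L, L_rand = char_path_length(G), char_path_length(R)
        print(f"C={C:.3f}  L={L:.2f}  small-world index={(C / C_rand) / (L / L_rand):.2f}")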

  19. Equalization of Synaptic Efficacy by Synchronous Neural Activity

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won; Choi, M. Y.

    2007-11-01

    It is commonly believed that spike timings of a postsynaptic neuron tend to follow those of the presynaptic neuron. Such orthodromic firing may, however, cause a conflict with the functional integrity of complex neuronal networks due to asymmetric temporal Hebbian plasticity. We argue that reversed spike timing in a synapse is a typical phenomenon in the cortex, which has a stabilizing effect on the neuronal network structure. We further demonstrate how the firing causality in a synapse is perturbed by synchronous neural activity and how the equilibrium property of spike-timing dependent plasticity is determined principally by the degree of synchronization. Remarkably, even noise-induced activity and synchrony of neurons can result in equalization of synaptic efficacy.
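
    As an illustrative sketch, the asymmetric temporal window that the argument relies on is the standard pair-based STDP kernel; a generic version with assumed constants, not the parameters of this paper, is shown below.

        # Pair-based STDP window: potentiation when the presynaptic spike leads
        # the postsynaptic one (delta_t > 0), depression otherwise.
        import numpy as np

        def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            """Weight change for delta_t = t_post - t_pre in milliseconds."""
            delta_t = np.asarray(delta_t, dtype=float)
            return np.where(delta_t >= 0.0,
                            a_plus * np.exp(-delta_t / tau_plus),
                            -a_minus * np.exp(delta_t / tau_minus))

        print(stdp_dw([-40.0, -10.0, 10.0, 40.0]))  # negative lags depress, positive potentiate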

  20. Intersection of diverse neuronal genomes and neuropsychiatric disease: The Brain Somatic Mosaicism Network.

    PubMed

    McConnell, Michael J; Moran, John V; Abyzov, Alexej; Akbarian, Schahram; Bae, Taejeong; Cortes-Ciriano, Isidro; Erwin, Jennifer A; Fasching, Liana; Flasch, Diane A; Freed, Donald; Ganz, Javier; Jaffe, Andrew E; Kwan, Kenneth Y; Kwon, Minseok; Lodato, Michael A; Mills, Ryan E; Paquola, Apua C M; Rodin, Rachel E; Rosenbluh, Chaggai; Sestan, Nenad; Sherman, Maxwell A; Shin, Joo Heon; Song, Saera; Straub, Richard E; Thorpe, Jeremy; Weinberger, Daniel R; Urban, Alexander E; Zhou, Bo; Gage, Fred H; Lehner, Thomas; Senthil, Geetha; Walsh, Christopher A; Chess, Andrew; Courchesne, Eric; Gleeson, Joseph G; Kidd, Jeffrey M; Park, Peter J; Pevsner, Jonathan; Vaccarino, Flora M

    2017-04-28

    Neuropsychiatric disorders have a complex genetic architecture. Human genetic population-based studies have identified numerous heritable sequence and structural genomic variants associated with susceptibility to neuropsychiatric disease. However, these germline variants do not fully account for disease risk. During brain development, progenitor cells undergo billions of cell divisions to generate the ~80 billion neurons in the brain. The failure to accurately repair DNA damage arising during replication, transcription, and cellular metabolism amid this dramatic cellular expansion can lead to somatic mutations. Somatic mutations that alter subsets of neuronal transcriptomes and proteomes can, in turn, affect cell proliferation and survival and lead to neurodevelopmental disorders. The long life span of individual neurons and the direct relationship between neural circuits and behavior suggest that somatic mutations in small populations of neurons can significantly affect individual neurodevelopment. The Brain Somatic Mosaicism Network has been founded to study somatic mosaicism both in neurotypical human brains and in the context of complex neuropsychiatric disorders. Copyright © 2017, American Association for the Advancement of Science.

  1. Hopf bifurcation of an (n + 1) -neuron bidirectional associative memory neural network model with delays.

    PubMed

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity depends crucially on the delay and is also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.
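
    As an illustrative sketch, the qualitative phenomenon at stake, a delay carrying a steady state through a Hopf bifurcation into sustained oscillation, can be reproduced in a toy two-neuron delayed loop; the gain, delays and integration step are assumed values, and this is not the (n + 1)-neuron BAM model of the paper.

        # Toy delayed feedback loop: below the critical delay the origin is
        # stable, above it a limit cycle (Hopf bifurcation) appears.
        import numpy as np

        def delayed_loop(delay, a=2.0, T=200.0, dt=0.01):
            n_d, steps = int(delay / dt), int(T / dt)
            x = np.zeros(steps + n_d)
            y = np.zeros(steps + n_d)
            x[:n_d + 1] = 0.1                               # small initial perturbation
            for t in range(n_d, steps + n_d - 1):
                x[t + 1] = x[t] + dt * (-x[t] + a * np.tanh(y[t - n_d]))
                y[t + 1] = y[t] + dt * (-y[t] - a * np.tanh(x[t - n_d]))
            return x[-int(50 / dt):]                        # late-time segment

        for d in (0.2, 1.5):
            tail = delayed_loop(d)
            print(f"delay={d}: late-time amplitude = {tail.max() - tail.min():.3f}")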

  2. Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.

    PubMed

    Siettos, Constantinos; Starke, Jens

    2016-09-01

    The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single-neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection from the microscopic scale (single-neuron activity) to macroscopic behavior (emergent behavior of the collective dynamics), and vice versa, is key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single-neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), integrate-and-fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
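
    As an illustrative sketch, the most compact of the models listed above is the Kuramoto system; a minimal mean-field implementation with its synchronization order parameter follows, with assumed parameters rather than any used in the review.

        # Kuramoto phase oscillators with global coupling K; r = |<exp(i*theta)>|
        # is the usual synchronization order parameter.
        import numpy as np

        def kuramoto(N=200, K=1.5, T=50.0, dt=0.01, seed=0):
            rng = np.random.default_rng(seed)
            omega = rng.normal(0.0, 0.5, N)          # natural frequencies
            theta = rng.uniform(0.0, 2 * np.pi, N)
            for _ in range(int(T / dt)):
                z = np.mean(np.exp(1j * theta))      # mean field
                theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
            return np.abs(np.mean(np.exp(1j * theta)))

        for K in (0.2, 1.5):
            print(f"K={K}: order parameter r = {kuramoto(K=K):.2f}")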

  3. FPGA implementation of motifs-based neuronal network and synchronization analysis

    NASA Astrophysics Data System (ADS)

    Deng, Bin; Zhu, Zechen; Yang, Shuangming; Wei, Xile; Wang, Jiang; Yu, Haitao

    2016-06-01

    Motifs in complex networks play a crucial role in determining brain functions. In this paper, 13 kinds of motifs are implemented on a Field-Programmable Gate Array (FPGA) to investigate the relationships between network properties and motif properties. We use a discretization method and a pipelined architecture to construct the various motifs with the Hindmarsh-Rose (HR) neuron as the node model. We also build a small-world network based on these motifs and conduct a synchronization analysis of the motifs as well as of the constructed network. We find that the synchronization properties of the motifs determine those of the motif-based small-world network, which demonstrates the effectiveness of our proposed hardware simulation platform. By imitating vital nuclei in the brain that generate normal discharges, our proposed FPGA-based artificial neuronal networks have the potential to replace injured nuclei and restore brain function in the treatment of Parkinson's disease and epilepsy.
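
    As an illustrative sketch, the node model and the discretization step mentioned above can be written in software as a forward-Euler update of the Hindmarsh-Rose equations; the parameters are common textbook values, and the fixed-point, pipelined FPGA specifics of the paper are not reproduced.

        # Forward-Euler discretization of the Hindmarsh-Rose neuron model.
        import numpy as np

        def hindmarsh_rose(I=3.0, T=1000.0, dt=0.01,
                           a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x_rest=-1.6):
            x, y, z = -1.0, 0.0, 0.0
            xs = np.empty(int(T / dt))
            for k in range(xs.size):
                dx = y - a * x**3 + b * x**2 - z + I
                dy = c - d * x**2 - y
                dz = r * (s * (x - x_rest) - z)
                x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                xs[k] = x
            return xs

        v = hindmarsh_rose()
        print("bursting trace range:", v.min(), v.max())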

  4. Identified Serotonergic Modulatory Neurons Have Heterogeneous Synaptic Connectivity within the Olfactory System of Drosophila.

    PubMed

    Coates, Kaylynn E; Majot, Adam T; Zhang, Xiaonan; Michael, Cole T; Spitzer, Stacy L; Gaudry, Quentin; Dacks, Andrew M

    2017-08-02

    Modulatory neurons project widely throughout the brain, dynamically altering network processing based on an animal's physiological state. The connectivity of individual modulatory neurons can be complex, as they often receive input from a variety of sources and are diverse in their physiology, structure, and gene expression profiles. To establish basic principles about the connectivity of individual modulatory neurons, we examined a pair of identified neurons, the "contralaterally projecting, serotonin-immunoreactive deutocerebral neurons" (CSDns), within the olfactory system of Drosophila. Specifically, we determined the neuronal classes providing synaptic input to the CSDns within the antennal lobe (AL), an olfactory network targeted by the CSDns, and the degree to which CSDn active zones are uniformly distributed across the AL. Using anatomical techniques, we found that the CSDns received glomerulus-specific input from olfactory receptor neurons (ORNs) and projection neurons (PNs), and networkwide input from local interneurons (LNs). Furthermore, we quantified the number of CSDn active zones in each glomerulus and found that CSDn output is not uniform, but rather heterogeneous, across glomeruli and stereotyped from animal to animal. Finally, we demonstrate that the CSDns synapse broadly onto LNs and PNs throughout the AL but do not synapse upon ORNs. Our results demonstrate that modulatory neurons do not necessarily provide purely top-down input but rather receive neuron class-specific input from the networks that they target, and that even a two-cell modulatory network has a highly heterogeneous, yet stereotyped, pattern of connectivity. SIGNIFICANCE STATEMENT Modulatory neurons often project broadly throughout the brain to alter processing based on physiological state. However, the connectivity of individual modulatory neurons to their target networks is not well understood, as modulatory neuron populations are heterogeneous in their physiology, morphology, and gene expression. In this study, we use a pair of identified serotonergic neurons within the Drosophila olfactory system as a model to establish a framework for modulatory neuron connectivity. We demonstrate that individual modulatory neurons can integrate neuron class-specific input from their target network, which is often nonreciprocal. Additionally, modulatory neuron output can be stereotyped, yet nonuniform, across network regions. Our results provide new insight into the synaptic relationships that underlie the network function of modulatory neurons. Copyright © 2017 the authors 0270-6474/17/377318-14$15.00/0.

  5. The dynamical analysis of modified two-compartment neuron model and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao

    2017-10-01

    The complexity of neural models is increasing with the investigation of larger biological neural networks, more diverse ionic channels and more detailed morphologies, and the implementation of a biological neural network is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field-programmable gate array (FPGA) to succinctly implement a reduced two-compartment model that retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations, which can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion-channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on the FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations support the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.
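
    As an illustrative sketch, the core idea, replacing a nonlinear term by piecewise-linear interpolation between stored breakpoints, can be shown generically; the target function (tanh) and the breakpoints are assumed stand-ins, not the nonlinearities of the two-compartment model.

        # Generic piecewise-linear approximation of a nonlinear term.
        import numpy as np

        breakpoints = np.linspace(-4.0, 4.0, 9)   # 8 linear segments
        table = np.tanh(breakpoints)              # values stored at the breakpoints

        def pwl_tanh(x):
            # numpy.interp performs the linear interpolation between breakpoints
            return np.interp(x, breakpoints, table)

        x = np.linspace(-5.0, 5.0, 1001)
        print("max |error| of 8-segment approximation:",
              np.round(np.max(np.abs(pwl_tanh(x) - np.tanh(x))), 4))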

  6. Chimera-like states in a neuronal network model of the cat brain

    NASA Astrophysics Data System (ADS)

    Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.

    2017-08-01

    Neuronal systems have been modeled by complex networks at different description levels. Recently, it has been verified that networks can simultaneously exhibit a coherent and an incoherent domain, a regime known as chimera states. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model. The Hindmarsh-Rose equations are a well-known model of neuronal activity that has been used to simulate the membrane potential of neurons. Here, we analyse under which conditions chimera states are present, as well as the effects of coupling intensity on them. We observe chimera states in which the incoherent domain can be composed of desynchronised spikes or desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.

  7. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    PubMed

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large-scale phase-contrast images taken at high resolution through the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or neuron clusters, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our noninvasive measurements allow us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way to identify the main physical processes underlying the self-organization of the neuronal ensemble into a complex network, and drives the formulation of a phenomenological model able to describe qualitatively the overall scenario observed during culture growth. © 2014 International Society for Advancement of Cytometry.

  8. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  9. Constructing Neuronal Network Models in Massively Parallel Environments

    PubMed Central

    Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808

  10. Neural signal registration and analysis of axons grown in microchannels

    NASA Astrophysics Data System (ADS)

    Pigareva, Y.; Malishev, E.; Gladkov, A.; Kolpakov, V.; Bukatin, A.; Mukhina, I.; Kazantsev, V.; Pimashkin, A.

    2016-08-01

    Registration of neuronal bioelectrical signals remains one of the main physical tools to study fundamental mechanisms of signal processing in the brain. Neurons generate spiking patterns which propagate through a complex map of neural network connectivity. Extracellular recording of isolated axons grown in microchannels provides amplification of the signal for detailed study of spike propagation. In this study we used hippocampal neuronal cultures grown in microfluidic devices combined with microelectrode arrays to investigate changes in electrical activity during neural network development. We found that 5 days in vitro after culture plating, spiking activity appears first in the microchannels and over the next 2-3 days appears on the electrodes of the overall neural network. We conclude that this approach provides a convenient method to study neural signal processing and the development of functional structure at the single-cell and network levels of the neuronal culture.

  11. Cytokines and cytokine networks target neurons to modulate long-term potentiation.

    PubMed

    Prieto, G Aleph; Cotman, Carl W

    2017-04-01

    Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Cytokines and cytokine networks target neurons to modulate long-term potentiation

    PubMed Central

    Prieto, G. Aleph; Cotman, Carl W.

    2017-01-01

    Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines. PMID:28377062

  13. Fast reversible learning based on neurons functioning as anisotropic multiplex hubs

    NASA Astrophysics Data System (ADS)

    Vardi, Roni; Goldental, Amir; Sheinin, Anton; Sardi, Shira; Kanter, Ido

    2017-05-01

    Neural networks are composed of neurons and synapses, which are responsible for learning in a slow adaptive dynamical process. Here we experimentally show that neurons act like independent anisotropic multiplex hubs, which relay and mute incoming signals according to their input directions. Theoretically, the observed information routing enriches the computational capabilities of neurons by allowing, for instance, equalization among different information routes in the network, as well as high-frequency transmission of complex time-dependent signals constructed via several parallel routes. In addition, this kind of hub adaptively eliminates very noisy neurons from the dynamics of the network, preventing the masking of information transmission. The timescales for these features are several seconds at most, as opposed to the imprinting of information by synaptic plasticity, a process which extends over minutes. These results open the horizon to the understanding of fast and adaptive learning in higher cognitive brain functions.

  14. Multi-layer network utilizing rewarded spike time dependent plasticity to learn a foraging task

    PubMed Central

    2017-01-01

    Neural networks with a single plastic layer employing reward modulated spike time dependent plasticity (STDP) are capable of learning simple foraging tasks. Here we demonstrate advanced pattern discrimination and continuous learning in a network of spiking neurons with multiple plastic layers. The network utilized both reward modulated and non-reward modulated STDP and implemented multiple mechanisms for homeostatic regulation of synaptic efficacy, including heterosynaptic plasticity, gain control, output balancing, activity normalization of rewarded STDP and hard limits on synaptic strength. We found that addition of a hidden layer of neurons employing non-rewarded STDP created neurons that responded to the specific combinations of inputs and thus performed basic classification of the input patterns. When combined with a following layer of neurons implementing rewarded STDP, the network was able to learn, despite the absence of labeled training data, discrimination between rewarding patterns and the patterns designated as punishing. Synaptic noise allowed for trial-and-error learning that helped to identify the goal-oriented strategies which were effective in task solving. The study predicts a critical set of properties of the spiking neuronal network with STDP that was sufficient to solve a complex foraging task involving pattern classification and decision making. PMID:28961245
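
    As an illustrative sketch, the reward-modulated STDP idea at the core of the network, pre/post pairings building an eligibility trace that a delayed scalar reward converts into a weight change, can be shown on a single synapse; the time constants, learning rate and toy spike trains are assumed values, and none of the paper's homeostatic mechanisms are included.

        # Minimal reward-modulated STDP on one synapse.
        import numpy as np

        dt, steps = 1.0, 500                      # ms per step, number of steps
        rng = np.random.default_rng(3)
        pre = rng.random(steps) < 0.02            # toy spike trains
        post = rng.random(steps) < 0.02

        w, elig, x_pre, x_post = 0.5, 0.0, 0.0, 0.0
        tau_trace, tau_elig = 20.0, 200.0
        a_plus, a_minus, lr = 0.01, 0.012, 0.1

        for t in range(steps):
            x_pre += dt * (-x_pre / tau_trace) + pre[t]
            x_post += dt * (-x_post / tau_trace) + post[t]
            stdp = a_plus * x_pre * post[t] - a_minus * x_post * pre[t]
            elig += dt * (-elig / tau_elig) + stdp     # eligibility trace
            reward = 1.0 if t == steps - 1 else 0.0    # single reward at the end
            w += lr * reward * elig

        print(f"final weight: {w:.4f}")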

  15. Learning neural connectivity from firing activity: efficient algorithms with provable guarantees on topology.

    PubMed

    Karbasi, Amin; Salavati, Amir Hesam; Vetterli, Martin

    2018-04-01

    The connectivity of a neuronal network has a major effect on its functionality and role. It is generally believed that the complex network structure of the brain provides a physiological basis for information processing. Therefore, identifying the network's topology has received a lot of attention in neuroscience and has been the center of many research initiatives such as the Human Connectome Project. Nevertheless, direct and invasive approaches that slice and observe the neural tissue have proven to be time-consuming, complex and costly. As a result, inverse methods that utilize the firing activity of neurons in order to identify the (functional) connections have gained momentum recently, especially in light of rapid advances in recording technologies; it will soon be possible to simultaneously monitor the activities of tens of thousands of neurons in real time. While there are a number of excellent approaches that aim to identify the functional connections from firing activities, the scalability of the proposed techniques poses a major challenge when applying them to large-scale datasets of recorded firing activities. In exceptional cases where scalability has not been an issue, the theoretical performance guarantees are usually limited to a specific family of neurons or type of firing activity. In this paper, we formulate neural network reconstruction as an instance of a graph learning problem, where we observe the behavior of nodes/neurons (i.e., firing activities) and aim to find the links/connections. We develop a scalable learning mechanism and derive the conditions under which the estimated graph for a network of leaky integrate-and-fire (LIF) neurons matches the true underlying synaptic connections. We then validate the performance of the algorithm using artificially generated data (for benchmarking) and real data recorded from multiple hippocampal areas in rats.
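
    As an illustrative sketch, the crudest member of the family of inverse methods discussed above is a lagged-coincidence (cross-correlation-like) score between spike trains; the baseline below is purely illustrative and is not the scalable algorithm of the paper or its guarantees.

        # Score each directed pair (i -> j) by how often a spike of i is
        # followed by a spike of j within a short lag (a simple baseline only).
        import numpy as np

        def lagged_coincidence_matrix(spikes, max_lag=3):
            """spikes: (n_neurons, n_bins) binary array."""
            n, T = spikes.shape
            score = np.zeros((n, n))
            for lag in range(1, max_lag + 1):
                score += spikes[:, :T - lag] @ spikes[:, lag:].T   # i leads j by `lag`
            np.fill_diagonal(score, 0.0)
            return score / spikes.sum(axis=1, keepdims=True).clip(min=1)

        toy = (np.random.default_rng(7).random((5, 2000)) < 0.05).astype(float)
        print(lagged_coincidence_matrix(toy).round(3))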

  16. Complexity measures of the central respiratory networks during wakefulness and sleep

    NASA Astrophysics Data System (ADS)

    Dragomir, Andrei; Akay, Yasemin; Curran, Aidan K.; Akay, Metin

    2008-06-01

    Since sleep is known to influence respiratory activity, we studied whether the sleep state would affect the complexity of the respiratory network output. Specifically, we tested the hypothesis that the complexity values of the diaphragm EMG (EMGdia) activity would be lower during REM compared to NREM. Furthermore, since REM is primarily generated by a homogeneous population of neurons in the medulla, the possibility that REM-related respiratory output would be less complex than that of the awake state was also considered. Additionally, in order to examine the influence of neuronal vulnerabilities within the rostral ventral medulla (RVM) on the complexity of the respiratory network output, we inhibited respiratory neurons in the RVM by microdialysis of the GABAA receptor agonist muscimol. Diaphragm EMG, nuchal EMG, EEG and EOG, as well as other physiological signals (tracheal pressure, blood pressure and respiratory volume), were recorded from five unanesthetized, chronically instrumented, intact piglets (3-10 days old). Complexity of the EMGdia signal during wakefulness, NREM and REM was evaluated using the approximate entropy method (ApEn). ApEn values of the EMGdia during NREM and REM sleep were found to be significantly (p < 0.05 and p < 0.001, respectively) lower than those of the awake EMGdia after muscimol inhibition. In the absence of muscimol, only the difference between REM and wakefulness ApEn values was found to be significant.
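
    As an illustrative sketch, approximate entropy has a compact definition that can be implemented directly; the routine below is a generic ApEn(m, r) with the commonly used choices m = 2 and r = 0.2 x SD, not the analysis pipeline of the study.

        # Direct O(N^2) approximate entropy for a 1-D signal.
        import numpy as np

        def approximate_entropy(x, m=2, r_factor=0.2):
            x = np.asarray(x, dtype=float)
            N = x.size
            r = r_factor * x.std()

            def phi(m):
                templates = np.array([x[i:i + m] for i in range(N - m + 1)])
                # Chebyshev distance between every pair of templates
                dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
                C = np.mean(dist <= r, axis=1)       # self-matches included, as in ApEn
                return np.mean(np.log(C))

            return phi(m) - phi(m + 1)

        t = np.linspace(0.0, 40.0 * np.pi, 1000)
        print("ApEn(sine)  =", round(approximate_entropy(np.sin(t)), 3))
        print("ApEn(noise) =", round(approximate_entropy(np.random.default_rng(0).standard_normal(1000)), 3))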

  17. Intrinsic Neuronal Properties Switch the Mode of Information Transmission in Networks

    PubMed Central

    Gjorgjieva, Julijana; Mease, Rebecca A.; Moody, William J.; Fairhall, Adrienne L.

    2014-01-01

    Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a networks function from the propagation of network-wide information to one in which computations are scaled to local activity. This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission. PMID:25474701

  18. Multi-channels coupling-induced pattern transition in a tri-layer neuronal network

    NASA Astrophysics Data System (ADS)

    Wu, Fuqiang; Wang, Ya; Ma, Jun; Jin, Wuyin; Hobiny, Aatef

    2018-03-01

    Neurons in the nervous system show complex electrical behaviors due to complex connection types and diversity in excitability. A tri-layer network is constructed to investigate signal propagation and pattern formation by selecting different coupling channels between layers. Each layer is set to a different state, and the local kinetics is described by the Hindmarsh-Rose neuron model. By changing the number of coupling channels between layers and the state of the first layer, the collective behaviors of each layer and the synchronization pattern of the network are investigated. A statistical factor of synchronization on each layer is calculated. It is found that the quiescent state in the second layer can be excited and the disordered state in the third layer is suppressed when the first layer is controlled by a pacemaker, and that the developed state depends on the number of coupling channels. Furthermore, collapse in the first layer can cause the breakdown of other layers in the network; the mechanism is that the disordered state in the third layer is enhanced when sampled signals from the collapsed layer impose a continuous disturbance on the next layer.
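
    As an illustrative sketch, the statistical factor of synchronization used in this line of work is commonly computed as the variance of the mean field divided by the average variance of the individual traces; the sketch below uses toy data, not Hindmarsh-Rose output from the paper.

        # Synchronization factor R: R -> 1 for fully synchronous traces,
        # R -> 0 for independent ones.
        import numpy as np

        def sync_factor(V):
            """V: (n_neurons, n_timesteps) array of membrane-potential traces."""
            F = V.mean(axis=0)                       # mean field
            return F.var() / V.var(axis=1).mean()

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 10.0, 2000)
        common = np.sin(2 * np.pi * t)
        synchronous = common + 0.05 * rng.standard_normal((20, t.size))
        independent = rng.standard_normal((20, t.size))
        print("R (synchronous) =", round(sync_factor(synchronous), 2))
        print("R (independent) =", round(sync_factor(independent), 2))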

  19. Nonlinear functional approximation with networks using adaptive neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1992-01-01

    A novel mathematical framework for the rapid learning of nonlinear mappings and topological transformations is presented. It is based on allowing the neuron's parameters to adapt as a function of learning. This fully recurrent adaptive neuron model (ANM) has been successfully applied to complex nonlinear function approximation problems such as the highly degenerate inverse kinematics problem in robotics.

  20. Extracting neuronal functional network dynamics via adaptive Granger causality analysis.

    PubMed

    Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash

    2018-04-24

    Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
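
    As an illustrative sketch, the classical static, Gaussian Granger test that this paradigm generalizes can be written by comparing residual variances of autoregressive fits with and without the candidate driver; the dynamic, sparse, point-process machinery of the paper is not reproduced here, and the simulated series are assumed toy data.

        # Pairwise Granger causality baseline: does adding y's past improve the
        # prediction of x?
        import numpy as np

        def granger_xy(x, y, order=2):
            """Log ratio of residual variances; > 0 suggests y Granger-causes x."""
            rows_r, rows_f, target = [], [], []
            for t in range(order, len(x)):
                rows_r.append(x[t - order:t])
                rows_f.append(np.concatenate([x[t - order:t], y[t - order:t]]))
                target.append(x[t])
            target = np.array(target)

            def resid_var(rows):
                A = np.column_stack([np.ones(len(rows)), np.array(rows)])
                coef, *_ = np.linalg.lstsq(A, target, rcond=None)
                return np.var(target - A @ coef)

            return np.log(resid_var(rows_r) / resid_var(rows_f))

        rng = np.random.default_rng(0)
        y = rng.standard_normal(2000)
        x = np.zeros(2000)
        for t in range(1, 2000):                  # x is driven by y with lag 1
            x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
        print("GC(y -> x) =", round(granger_xy(x, y), 3))   # clearly positive
        print("GC(x -> y) =", round(granger_xy(y, x), 3))   # near zero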

  1. Single-Neuron NMDA Receptor Phenotype Influences Neuronal Rewiring and Reintegration following Traumatic Injury

    PubMed Central

    Patel, Tapan P.; Ventre, Scott C.; Geddes-Klein, Donna; Singh, Pallab K.

    2014-01-01

    Alterations in the activity of neural circuits are a common consequence of traumatic brain injury (TBI), but the relationship between single-neuron properties and the aggregate network behavior is not well understood. We recently reported that the GluN2B-containing NMDA receptors (NMDARs) are key in mediating mechanical forces during TBI, and that TBI produces a complex change in the functional connectivity of neuronal networks. Here, we evaluated whether cell-to-cell heterogeneity in the connectivity and aggregate contribution of GluN2B receptors to [Ca2+]i before injury influenced the functional rewiring, spontaneous activity, and network plasticity following injury using primary rat cortical dissociated neurons. We found that the functional connectivity of a neuron to its neighbors, combined with the relative influx of calcium through distinct NMDAR subtypes, together contributed to the individual neuronal response to trauma. Specifically, individual neurons whose [Ca2+]i oscillations were largely due to GluN2B NMDAR activation lost many of their functional targets 1 h following injury. In comparison, neurons with large GluN2A contribution or neurons with high functional connectivity both independently protected against injury-induced loss in connectivity. Mechanistically, we found that traumatic injury resulted in increased uncorrelated network activity, an effect linked to reduction of the voltage-sensitive Mg2+ block of GluN2B-containing NMDARs. This uncorrelated activation of GluN2B subtypes after injury significantly limited the potential for network remodeling in response to a plasticity stimulus. Together, our data suggest that two single-cell characteristics, the aggregate contribution of NMDAR subtypes and the number of functional connections, influence network structure following traumatic injury. PMID:24647941

  2. Development of pacemaker properties and rhythmogenic mechanisms in the mouse embryonic respiratory network

    PubMed Central

    Chevalier, Marc; Toporikova, Natalia; Simmers, John; Thoby-Brisson, Muriel

    2016-01-01

    Breathing is a vital rhythmic behavior generated by hindbrain neuronal circuitry, including the preBötzinger complex network (preBötC) that controls inspiration. The emergence of preBötC network activity during prenatal development has been described, but little is known regarding inspiratory neurons expressing pacemaker properties at embryonic stages. Here, we combined calcium imaging and electrophysiological recordings in mouse embryo brainstem slices together with computational modeling to reveal the existence of heterogeneous pacemaker oscillatory properties relying on distinct combinations of burst-generating INaP and ICAN conductances. The respective proportion of the different inspiratory pacemaker subtypes changes during prenatal development. Concomitantly, network rhythmogenesis switches from a purely INaP/ICAN-dependent mechanism at E16.5 to a combined pacemaker/network-driven process at E18.5. Our results provide the first description of pacemaker bursting properties in embryonic preBötC neurons and indicate that network rhythmogenesis undergoes important changes during prenatal development through alterations in both circuit properties and the biophysical characteristics of pacemaker neurons. DOI: http://dx.doi.org/10.7554/eLife.16125.001 PMID:27434668

  3. Irregular behavior in an excitatory-inhibitory neuronal network

    NASA Astrophysics Data System (ADS)

    Park, Choongseok; Terman, David

    2010-06-01

    Excitatory-inhibitory networks arise in many regions throughout the central nervous system and display complex spatiotemporal firing patterns. These neuronal activity patterns (of individual neurons and/or the whole network) are closely related to the functional status of the system and differ between normal and pathological states. For example, neurons within the basal ganglia, a group of subcortical nuclei that are responsible for the generation of movement, display a variety of dynamic behaviors such as correlated oscillatory activity and irregular, uncorrelated spiking. Neither the origins of these firing patterns nor the mechanisms that underlie the patterns are well understood. We consider a biophysical model of an excitatory-inhibitory network in the basal ganglia and explore how specific biophysical properties of the network contribute to the generation of irregular spiking. We use geometric dynamical systems and singular perturbation methods to systematically reduce the model to a simpler set of equations, which is suitable for analysis. The results specify the dependence on the strengths of synaptic connections and the intrinsic firing properties of the cells in the irregular regime when applied to the subthalamopallidal network of the basal ganglia.

  4. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.

  5. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in the complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large portion of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Although increasing the number of processors yields concurrency and better performance, it adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates improving the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving two-sided communication to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between processors in a logarithmic number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models.

  6. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

    The increase in the complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large portion of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Although increasing the number of processors yields concurrency and better performance, it adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates improving the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving two-sided communication to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between processors in a logarithmic number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models. PMID:27413363
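
    As an illustrative sketch, the communication schedule behind recursive doubling can be shown without MPI: in round k, each rank r exchanges everything it has gathered so far with partner r XOR 2^k, so all blocks are gathered in log2(P) rounds. The plain-Python simulation below only mimics that schedule; it is not the NEURON/RMA code of the papers and assumes a power-of-two rank count.

        # Simulated recursive-doubling allgather schedule.
        import math

        P = 8                                            # number of simulated ranks
        data = {r: {r: f"spikes_from_rank_{r}"} for r in range(P)}

        for k in range(int(math.log2(P))):
            snapshot = {r: dict(d) for r, d in data.items()}
            for r in range(P):
                partner = r ^ (1 << k)                   # bitwise partner in round k
                data[r].update(snapshot[partner])        # "receive" partner's blocks
            print(f"after round {k}: rank 0 holds {sorted(data[0])}")

        assert all(len(d) == P for d in data.values())   # every rank has all blocks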

  7. Identification of the connections in biologically inspired neural networks

    NASA Technical Reports Server (NTRS)

    Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.

    1990-01-01

    We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically inspired artificial neural networks. That is, given the network external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations, if a solution exists. The method determines directly whether there is a solution to a particular neural network problem. No training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This latter method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of the electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.

  8. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems

    NASA Astrophysics Data System (ADS)

    Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert

    2017-08-01

    Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation and is thus subjected to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.

  9. Fast Recall for Complex-Valued Hopfield Neural Networks with Projection Rules.

    PubMed

    Kobayashi, Masaki

    2017-01-01

    Many models of neural networks have been extended to complex-valued neural networks. A complex-valued Hopfield neural network (CHNN) is a complex-valued version of a Hopfield neural network. Complex-valued neurons can represent multistates, and CHNNs are available for the storage of multilevel data, such as gray-scale images. CHNNs are often trapped in local minima, and their noise tolerance is low. Lee improved the noise tolerance of CHNNs by detecting and escaping local minima. In the present work, we propose a new recall algorithm that eliminates the local minima. Through computer simulations, we show that our proposed recall algorithm not only accelerates recall but also improves noise tolerance.
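
    As an illustrative sketch, the two ingredients named in the record, multistate complex-valued neurons on the unit circle and the projection (pseudo-inverse) learning rule, can be written generically; this is a textbook-style CHNN baseline with assumed sizes and states, not the improved recall algorithm of the paper.

        # Complex-valued Hopfield network with K-state neurons and the
        # projection rule W = X (X^H X)^-1 X^H.
        import numpy as np

        K = 8
        states = np.exp(2j * np.pi * np.arange(K) / K)   # K-th roots of unity

        def quantize(z):
            """Map each complex value to the nearest of the K unit-circle states."""
            return states[np.argmin(np.abs(z[:, None] - states[None, :]), axis=1)]

        rng = np.random.default_rng(0)
        X = states[rng.integers(0, K, size=(30, 3))]     # three 30-neuron memories
        W = X @ np.linalg.pinv(X)                        # projection weight matrix

        probe = X[:, 0].copy()
        probe[:3] = states[rng.integers(0, K, 3)]        # corrupt three neurons
        for _ in range(10):                              # synchronous recall updates
            probe = quantize(W @ probe)
        print("pattern 0 recovered:", np.allclose(probe, X[:, 0]))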

  10. Nonlinear multiplicative dendritic integration in neuron and network models

    PubMed Central

    Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si

    2013-01-01

    Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can heavily depend on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still only describe the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. Here we suggest a new biophysically motivated derivation of a single-compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or “shunts” the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low-firing regime. PMID:23658543
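
    As an illustrative sketch, the multiplicative character of shunting inhibition can be shown in its simplest single-compartment form: inhibition on the path to the soma divides the excitatory drive rather than subtracting from it. The gain expression below is a generic assumed illustration, not the derivation of the paper.

        # Divisive ("shunting") versus subtractive inhibition, schematically.
        import numpy as np

        g_exc = np.linspace(0.0, 2.0, 5)     # excitatory drive
        g_inh = 1.0                          # inhibitory conductance on the path

        subtractive = np.maximum(g_exc - g_inh, 0.0)
        divisive = g_exc / (1.0 + g_inh)     # inhibition rescales the gain

        for e, s, d in zip(g_exc, subtractive, divisive):
            print(f"g_exc={e:.1f}  subtractive={s:.2f}  divisive={d:.2f}")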

  11. Dynamics of neuromodulatory feedback determines frequency modulation in a reduced respiratory network: a computational study.

    PubMed

    Toporikova, Natalia; Butera, Robert J

    2013-02-01

    Neuromodulators, such as amines and neuropeptides, alter the activity of neurons and neuronal networks. In this work, we investigate how neuromodulators, which activate G(q)-protein second messenger systems, can modulate the bursting frequency of neurons in a critical portion of the respiratory neural network, the pre-Bötzinger complex (preBötC). These neurons are a vital part of the ponto-medullary neuronal network, which generates a stable respiratory rhythm whose frequency is regulated by neuromodulator release from the nearby Raphe nucleus. Using a simulated 50-cell network of excitatory preBötC neurons with a heterogeneous distribution of persistent sodium conductance and Ca(2+), we determined conditions for frequency modulation in such a network by simulating interaction between Raphe and preBötC nuclei. We found that the positive feedback between the Raphe excitability and preBötC activity induces frequency modulation in the preBötC neurons. In addition, the frequency of the respiratory rhythm can be regulated via phasic release of excitatory neuromodulators from the Raphe nucleus. We predict that the application of a G(q) antagonist will eliminate this frequency modulation by the Raphe and keep the network frequency constant and low. In contrast, application of a G(q) agonist will result in a high frequency for all levels of Raphe stimulation. Our modeling results also suggest that high [K(+)] requirement in respiratory brain slice experiments may serve as a compensatory mechanism for low neuromodulatory tone. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Medical image processing using neural networks based on multivalued and universal binary neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.

    1998-06-01

    Cellular neural networks (CNN) have become an effective means of solving many kinds of image processing problems. CNNs based on multi-valued neurons (CNN-MVN) and CNNs based on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVN and UBN are neurons with complex-valued weights and complex internal arithmetic. Their main feature is the ability to implement an arbitrary mapping between inputs and output (MVN) and an arbitrary, not only threshold, Boolean function (UBN). A great advantage of the CNN is the possibility of implementing any linear and many non-linear filters in the spatial domain. Together with noise removal, CNNs can implement filters that amplify high and medium frequencies. Such filters are well suited to image enhancement and to the extraction of details against a complex background. Thus, CNNs make it possible to organize the entire processing chain, from filtering to the extraction of important details. The organization of this process for medical image processing is considered in the paper. Major attention is concentrated on the processing of X-ray and ultrasound images corresponding to different oncological (or close to oncological) pathologies. Additionally, we consider a new neural network structure for the differential diagnosis of breast cancer.
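
    For readers unfamiliar with these neuron types, the sketch below shows the core activation functions in Python: the multi-valued neuron maps its complex weighted sum to one of k sectors of the complex plane, and the universal binary neuron outputs +1 or -1 according to the parity of the sector. The sector conventions and the cell example are illustrative, not the paper's exact filters.

      import numpy as np

      def mvn_activation(z, k):
          # Multi-valued neuron: output the k-th root of unity whose sector
          # of the complex plane contains the weighted sum z.
          j = int(np.angle(z) % (2 * np.pi) // (2 * np.pi / k))
          return np.exp(2j * np.pi * j / k)

      def ubn_activation(z, m):
          # Universal binary neuron: split the plane into 2*m sectors and output
          # +1 or -1 according to the sector's parity, which lets a single neuron
          # realize Boolean functions that are not threshold functions.
          j = int(np.angle(z) % (2 * np.pi) // (np.pi / m))
          return 1 if j % 2 == 0 else -1

      def cnn_mvn_cell(window, weights, k):
          # One CNN-MVN cell: complex-weighted sum over a local (e.g. 3x3) image
          # window followed by the multi-valued activation.
          return mvn_activation(np.sum(weights * window), k)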

  13. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  14. Neuronal synchrony: Peculiarity and generality

    PubMed Central

    Nowotny, Thomas; Huerta, Ramon; Rabinovich, Mikhail I.

    2008-01-01

    Synchronization in neuronal systems is a new and intriguing application of dynamical systems theory. Why are neuronal systems different as a subject for synchronization? (1) Neurons in themselves are multidimensional nonlinear systems that are able to exhibit a wide variety of different activity patterns. Their “dynamical repertoire” includes regular or chaotic spiking, regular or chaotic bursting, multistability, and complex transient regimes. (2) Usually, neuronal oscillations are the result of the cooperative activity of many synaptically connected neurons (a neuronal circuit). Thus, it is necessary to consider synchronization between different neuronal circuits as well. (3) The synapses that implement the coupling between neurons are also dynamical elements and their intrinsic dynamics influences the process of synchronization or entrainment significantly. In this review we will focus on four new problems: (i) the synchronization in minimal neuronal networks with plastic synapses (synchronization with activity dependent coupling), (ii) synchronization of bursts that are generated by a group of nonsymmetrically coupled inhibitory neurons (heteroclinic synchronization), (iii) the coordination of activities of two coupled neuronal networks (partial synchronization of small composite structures), and (iv) coarse grained synchronization in larger systems (synchronization on a mesoscopic scale). PMID:19045493

  15. Neuron hemilineages provide the functional ground plan for the Drosophila ventral nervous system

    PubMed Central

    Harris, Robin M; Pfeiffer, Barret D; Rubin, Gerald M; Truman, James W

    2015-01-01

    Drosophila central neurons arise from neuroblasts that generate neurons in a pair-wise fashion, with the two daughters providing the basis for distinct A and B hemilineage groups. Thirty-three postembryonically born hemilineages contribute over 90% of the neurons in each thoracic hemisegment. We devised genetic approaches to define the anatomy of most of these hemilineages and to assess their functional roles using the heat-sensitive channel dTRPA1. The simplest hemilineages contained local interneurons and their activation caused tonic or phasic leg movements lacking interlimb coordination. The next level was hemilineages of similar projection cells that drove intersegmentally coordinated behaviors such as walking. The highest level involved hemilineages whose activation elicited complex behaviors such as takeoff. These activation phenotypes indicate that the hemilineages vary in their behavioral roles, with some contributing to local networks for sensorimotor processing and others having higher order functions of coordinating these local networks into complex behavior. DOI: http://dx.doi.org/10.7554/eLife.04493.001 PMID:26193122

  16. Spectral Entropy Based Neuronal Network Synchronization Analysis Based on Microelectrode Array Measurements

    PubMed Central

    Kapucu, Fikret E.; Välkki, Inkeri; Mikkonen, Jarno E.; Leone, Chiara; Lenk, Kerstin; Tanskanen, Jarno M. A.; Hyttinen, Jari A. K.

    2016-01-01

    Synchrony and asynchrony are essential aspects of the functioning of interconnected neuronal cells and networks. New information on neuronal synchronization can be expected to aid in understanding these systems. Synchronization provides insight into the functional connectivity and the spatial distribution of information processing in the networks. Synchronization is generally studied with time domain analysis of neuronal events, or using direct frequency spectrum analysis, e.g., in specific frequency bands. However, these methods have their pitfalls. Thus, we have previously proposed a method to analyze temporal changes in the complexity of the frequency of signals originating from different network regions. The method is based on the correlation of time varying spectral entropies (SEs). SE assesses the regularity, or complexity, of a time series by quantifying the uniformity of the frequency spectrum distribution. It has been previously employed, e.g., in electroencephalogram analysis. Here, we revisit our correlated spectral entropy method (CorSE), providing evidence of its justification, usability, and benefits. CorSE is assessed with simulations and in vitro microelectrode array (MEA) data. CorSE is first demonstrated with a specifically tailored toy simulation to illustrate how it can identify synchronized populations. To provide a form of validation, the method was tested with simulated data from integrate-and-fire model based computational neuronal networks. To demonstrate the analysis of real data, CorSE was applied to in vitro MEA data measured from rat cortical cell cultures, and the results were compared with three known event based synchronization measures. Finally, we show the usability by tracking the development of networks in dissociated mouse cortical cell cultures. The results show that temporal correlations in frequency spectrum distributions reflect the network relations of neuronal populations. In the simulated data, CorSE unraveled the synchronizations. With the real in vitro MEA data, CorSE produced biologically plausible results. Since CorSE analyses continuous data, it is not affected by possibly poor spike or other event detection quality. We conclude that CorSE can reveal neuronal network synchronization based on in vitro MEA field potential measurements. CorSE is expected to be equally applicable to the analysis of corresponding in vivo and ex vivo data. PMID:27803660
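
    The sketch below illustrates the idea behind CorSE under simple assumptions (Welch spectra, fixed sliding windows; the window length and step are illustrative): compute a time-varying spectral entropy for each channel and correlate the resulting entropy time series between two channels.

      import numpy as np
      from scipy.signal import welch

      def spectral_entropy(x, fs):
          # Shannon entropy of the normalized power spectral density, scaled to [0, 1].
          f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
          p = pxx / pxx.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p)) / np.log2(len(pxx))

      def corse(sig_a, sig_b, fs, win, step):
          # Correlate the time-varying spectral entropies of two channels
          # computed in sliding windows of length `win` with hop size `step`.
          se_a, se_b = [], []
          for start in range(0, len(sig_a) - win + 1, step):
              se_a.append(spectral_entropy(sig_a[start:start + win], fs))
              se_b.append(spectral_entropy(sig_b[start:start + win], fs))
          return np.corrcoef(se_a, se_b)[0, 1]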

  17. From neurons to epidemics: How trophic coherence affects spreading processes.

    PubMed

    Klaise, Janis; Johnson, Samuel

    2016-06-01

    Trophic coherence, a measure of the extent to which the nodes of a directed network are organised in levels, has recently been shown to be closely related to many structural and dynamical aspects of complex systems, including graph eigenspectra, the prevalence or absence of feedback cycles, and linear stability. Furthermore, non-trivial trophic structures have been observed in networks of neurons, species, genes, metabolites, cellular signalling, concatenated words, P2P users, and world trade. Here, we consider two simple yet apparently quite different dynamical models-one a susceptible-infected-susceptible epidemic model adapted to include complex contagion and the other an Amari-Hopfield neural network-and show that in both cases the related spreading processes are modulated in similar ways by the trophic coherence of the underlying networks. To do this, we propose a network assembly model which can generate structures with tunable trophic coherence, limiting in either perfectly stratified networks or random graphs. We find that trophic coherence can exert a qualitative change in spreading behaviour, determining whether a pulse of activity will percolate through the entire network or remain confined to a subset of nodes, and whether such activity will quickly die out or endure indefinitely. These results could be important for our understanding of phenomena such as epidemics, rumours, shocks to ecosystems, neuronal avalanches, and many other spreading processes.
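
    As a concrete reference for the quantity being discussed, the following sketch computes trophic levels and the trophic incoherence q of a directed network (the standard deviation of trophic level differences over edges); it assumes at least one basal node and no self-loops, and it is not the paper's assembly model.

      import numpy as np

      def trophic_levels(A):
          # Trophic levels s from a binary adjacency matrix A (A[j, i] = 1 for an edge j -> i).
          # Basal nodes (zero in-degree) get level 1; every other node satisfies
          # s_i = 1 + mean level of its in-neighbours. Assumes >= 1 basal node, no self-loops.
          n = A.shape[0]
          k_in = A.sum(axis=0)
          M, b = np.zeros((n, n)), np.zeros(n)
          for i in range(n):
              if k_in[i] == 0:
                  M[i, i], b[i] = 1.0, 1.0
              else:
                  M[i, i] = k_in[i]
                  M[i, :] -= A[:, i]
                  b[i] = k_in[i]
          return np.linalg.solve(M, b)

      def incoherence(A):
          # Trophic incoherence q: std of trophic level differences across all edges
          # (q = 0 for a perfectly stratified network, larger q for less coherent ones).
          s = trophic_levels(A)
          src, dst = np.nonzero(A)
          return float(np.std(s[dst] - s[src]))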

  18. Antisynchronization of Two Complex Dynamical Networks

    NASA Astrophysics Data System (ADS)

    Banerjee, Ranjib; Grosu, Ioan; Dana, Syamal K.

    A nonlinear type open-plus-closed-loop (OPCL) coupling is investigated for antisynchronization of two complex networks under unidirectional and bidirectional interactions, where each node of the networks is considered as a continuous dynamical system. We present analytical results for antisynchronization in identical networks. A numerical example is given for unidirectional coupling with each node represented by a spiking-bursting type Hindmarsh-Rose neuron model. Antisynchronization under mutual interaction is possible only when the chosen nodes are inversion-symmetric dynamical systems.

  19. The Complexity of Dynamics in Small Neural Circuits

    PubMed Central

    Panzeri, Stefano

    2016-01-01

    Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network that reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737
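
    A generic sketch of the class of models studied, assuming a standard firing-rate formulation (not the paper's exact equations); the connection weights and inputs below are hypothetical values chosen only to illustrate a small excitatory-inhibitory circuit with strong inhibition.

      import numpy as np

      def simulate_rate_network(W, I, T=200.0, dt=0.01, tau=1.0):
          # Euler integration of tau * dV/dt = -V + W @ sigma(V) + I
          # for a small homogeneous-population firing-rate network.
          sigma = lambda v: 1.0 / (1.0 + np.exp(-v))
          V = np.zeros(len(I))
          trace = []
          for _ in range(int(T / dt)):
              V = V + dt / tau * (-V + W @ sigma(V) + I)
              trace.append(V.copy())
          return np.array(trace)

      # Two excitatory and two inhibitory neurons; strong inhibition (illustrative values).
      J_EE, J_EI, J_IE, J_II = 8.0, -20.0, 8.0, -1.0
      W = np.array([[J_EE, J_EE, J_EI, J_EI],
                    [J_EE, J_EE, J_EI, J_EI],
                    [J_IE, J_IE, J_II, J_II],
                    [J_IE, J_IE, J_II, J_II]])
      I = np.array([2.0, 2.0, 1.0, 1.0])
      activity = simulate_rate_network(W, I)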

  20. Lamina-specific contribution of glutamatergic and GABAergic potentials to hippocampal sharp wave-ripple complexes.

    PubMed

    Schönberger, Jan; Draguhn, Andreas; Both, Martin

    2014-01-01

    The mammalian hippocampus expresses highly organized patterns of neuronal activity which form a neuronal correlate of spatial memories. These memory-encoding neuronal ensembles form on top of different network oscillations which entrain neurons in a state- and experience-dependent manner. The mechanisms underlying activation, timing and selection of participating neurons are incompletely understood. Here we studied the synaptic mechanisms underlying one prominent network pattern called sharp wave-ripple complexes (SPW-R) which are involved in memory consolidation during sleep. We recorded SPW-R with extracellular electrodes along the different layers of area CA1 in mouse hippocampal slices. Contribution of glutamatergic excitation and GABAergic inhibition, respectively, was probed by local application of receptor antagonists into s. radiatum, pyramidale and oriens. Laminar profiles of field potentials show that GABAergic potentials contribute substantially to sharp waves and superimposed ripple oscillations in s. pyramidale. Inhibitory inputs to s. pyramidale and s. oriens are crucial for action potential timing by ripple oscillations, as revealed by multiunit-recordings in the pyramidal cell layer. Glutamatergic afferents, on the other hand, contribute to sharp waves in s. radiatum where they also evoke a fast oscillation at ~200 Hz. Surprisingly, field ripples in s. radiatum are slightly slower than ripples in s. pyramidale, resulting in a systematic shift between dendritic and somatic oscillations. This complex interplay between dendritic excitation and perisomatic inhibition may be responsible for the precise timing of discharge probability during the time course of SPW-R. Together, our data illustrate a complementary role of spatially confined excitatory and inhibitory transmission during highly ordered network patterns in the hippocampus.

  1. Field coupling-induced pattern formation in two-layer neuronal network

    NASA Astrophysics Data System (ADS)

    Qin, Huixin; Wang, Chunni; Cai, Ning; An, Xinlei; Alzahrani, Faris

    2018-07-01

    The exchange of charged ions across the membrane can generate fluctuations of the membrane potential and also a complex electromagnetic induction effect. Diversity in the excitability of neurons induces different mode selection and dynamical responses to external stimuli. Based on a neuron model with electromagnetic induction, which is described by magnetic flux and a memristor, a two-layer network is proposed to discuss pattern control and wave propagation in the network. In each layer, gap junction coupling is applied to connect the neurons, while field coupling is considered between the two layers of the network. The field coupling is approached by using coupling of the magnetic flux, which is associated with the distribution of the electromagnetic field. It is found that an appropriate intensity of field coupling can enhance wave propagation from one layer to the other, and beautiful spatial patterns are formed. The developed target wave in the second layer shows some difference from the target wave triggered in the first layer when the two layers have different excitabilities. The potential mechanism could be that pacemaker-like driving from the first layer is encoded by the second layer.
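
    A hedged sketch of the kind of node model and coupling described: a Hindmarsh-Rose neuron extended with a magnetic-flux (memristive) feedback term, with field coupling between layers expressed as coupling of the flux variables. The equations follow a form commonly used in this literature, and all parameter values are illustrative rather than taken from the paper.

      import numpy as np

      def hr_flux_step(state, I_ext, phi_other, D, dt=0.01,
                       a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x0=-1.6,
                       k0=1.0, k1=0.9, k2=0.5, alpha=0.1, beta=0.02):
          # One Euler step of a Hindmarsh-Rose neuron with memristive flux feedback.
          # x: membrane potential, y: recovery, z: slow adaptation, phi: magnetic flux.
          # D * (phi_other - phi) stands in for the field coupling to the other layer.
          x, y, z, phi = state
          rho = alpha + 3.0 * beta * phi ** 2                 # flux-controlled memductance
          dx = y - a * x ** 3 + b * x ** 2 - z + I_ext - k0 * rho * x
          dy = c - d * x ** 2 - y
          dz = r * (s * (x - x0) - z)
          dphi = k1 * x - k2 * phi + D * (phi_other - phi)
          return np.array([x + dt * dx, y + dt * dy, z + dt * dz, phi + dt * dphi])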

  2. Simulator for neural networks and action potentials.

    PubMed

    Baxter, Douglas A; Byrne, John H

    2007-01-01

    A key challenge for neuroinformatics is to devise methods for representing, accessing, and integrating vast amounts of diverse and complex data. A useful approach to represent and integrate complex data sets is to develop mathematical models [Arbib (The Handbook of Brain Theory and Neural Networks, pp. 741-745, 2003); Arbib and Grethe (Computing the Brain: A Guide to Neuroinformatics, 2001); Ascoli (Computational Neuroanatomy: Principles and Methods, 2002); Bower and Bolouri (Computational Modeling of Genetic and Biochemical Networks, 2001); Hines et al. (J. Comput. Neurosci. 17, 7-11, 2004); Shepherd et al. (Trends Neurosci. 21, 460-468, 1998); Sivakumaran et al. (Bioinformatics 19, 408-415, 2003); Smolen et al. (Neuron 26, 567-580, 2000); Vadigepalli et al. (OMICS 7, 235-252, 2003)]. Models of neural systems provide quantitative and modifiable frameworks for representing data and analyzing neural function. These models can be developed and solved using neurosimulators. One such neurosimulator is simulator for neural networks and action potentials (SNNAP) [Ziv (J. Neurophysiol. 71, 294-308, 1994)]. SNNAP is a versatile and user-friendly tool for developing and simulating models of neurons and neural networks. SNNAP simulates many features of neuronal function, including ionic currents and their modulation by intracellular ions and/or second messengers, and synaptic transmission and synaptic plasticity. SNNAP is written in Java and runs on most computers. Moreover, SNNAP provides a graphical user interface (GUI) and does not require programming skills. This chapter describes several capabilities of SNNAP and illustrates methods for simulating neurons and neural networks. SNNAP is available at http://snnap.uth.tmc.edu .

  3. Multichannel activity propagation across an engineered axon network

    NASA Astrophysics Data System (ADS)

    Chen, H. Isaac; Wolf, John A.; Smith, Douglas H.

    2017-04-01

    Objective. Although substantial progress has been made in mapping the connections of the brain, less is known about how this organization translates into brain function. In particular, the massive interconnectivity of the brain has made it difficult to specifically examine data transmission between two nodes of the connectome, a central component of the ‘neural code.’ Here, we investigated the propagation of multiple streams of asynchronous neuronal activity across an isolated in vitro ‘connectome unit.’ Approach. We used the novel technique of axon stretch growth to create a model of a long-range cortico-cortical network, a modular system consisting of paired nodes of cortical neurons connected by axon tracts. Using optical stimulation and multi-electrode array recording techniques, we explored how input patterns are represented by cortical networks, how these representations shift as they are transmitted between cortical nodes and perturbed by external conditions, and how well the downstream node distinguishes different patterns. Main results. Stimulus representations included direct, synaptic, and multiplexed responses that grew in complexity as the distance between the stimulation source and recorded neuron increased. These representations collapsed into patterns with lower information content at higher stimulation frequencies. With internodal activity propagation, a hierarchy of network pathways, including latent circuits, was revealed using glutamatergic blockade. As stimulus channels were added, divergent, non-linear effects were observed in local versus distant network layers. Pairwise difference analysis of neuronal responses suggested that neuronal ensembles generally outperformed individual cells in discriminating input patterns. Significance. Our data illuminate the complexity of spiking activity propagation in cortical networks in vitro, which is characterized by the transformation of an input into myriad outputs over several network layers. These results provide insight into how the brain potentially processes information and generates the neural code and could guide the development of clinical therapies based on multichannel brain stimulation.

  4. Effects of an environmentally-relevant mixture of pyrethroid insecticides on spontaneous activity in primary cortical networks on microelectrode arrays.

    PubMed

    Johnstone, Andrew F M; Strickland, Jenna D; Crofton, Kevin M; Gennings, Chris; Shafer, Timothy J

    2017-05-01

    Pyrethroid insecticides exert their insecticidal and toxicological effects primarily by disrupting voltage-gated sodium channel (VGSC) function, resulting in altered neuronal excitability. Numerous studies of individual pyrethroids have characterized effects on mammalian VGSC function and neuronal excitability, yet studies examining effects of complex pyrethroid mixtures in mammalian neurons, especially in environmentally relevant mixture ratios, are limited. In the present study, concentration-response functions were characterized for five pyrethroids (permethrin, deltamethrin, cypermethrin, β-cyfluthrin and esfenvalerate) in an in vitro preparation containing cortical neurons and glia. As a metric of neuronal network activity, spontaneous mean network firing rates (MFR) were measured using microelectrode arrays (MEAs). In addition, the effect of a complex and exposure relevant mixture of the five pyrethroids (containing 52% permethrin, 28.8% cypermethrin, 12.9% β-cyfluthrin, 3.4% deltamethrin and 2.7% esfenvalerate) was also measured. Data were modeled to determine whether effects of the pyrethroid mixture were predicted by dose-addition. At concentrations up to 10μM, all compounds except permethrin reduced MFR. Deltamethrin and β-cyfluthrin were the most potent and reduced MFR by as much as 60 and 50%, respectively, while cypermethrin and esfenvalerate were of approximately equal potency and reduced MFR by only ∼20% at the highest concentration. Permethrin caused small (∼24% maximum), concentration-dependent increases in MFR. Effects of the environmentally relevant mixture did not depart from the prediction of dose-addition. These data demonstrate that an environmentally relevant mixture caused dose-additive effects on spontaneous neuronal network activity in vitro, and is consistent with other in vitro and in vivo assessments of pyrethroid mixtures. Published by Elsevier B.V.

  5. Complex Networks - A Key to Understanding Brain Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sporns, Olaf

    2008-01-23

    The brain is a complex network of neurons, engaging in spontaneous and evoked activity that is thought to be the main substrate of mental life.  How this complex system works together to process information and generate coherent cognitive states, even consciousness, is not yet well understood.  In my talk I will review recent studies that have revealed characteristic structural and functional attributes of brain networks, and discuss efforts to build computational models of the brain that are informed by our growing knowledge of brain anatomy and physiology.

  6. Complex Networks - A Key to Understanding Brain Function

    ScienceCinema

    Sporns, Olaf

    2017-12-22

    The brain is a complex network of neurons, engaging in spontaneous and evoked activity that is thought to be the main substrate of mental life.  How this complex system works together to process information and generate coherent cognitive states, even consciousness, is not yet well understood.  In my talk I will review recent studies that have revealed characteristic structural and functional attributes of brain networks, and discuss efforts to build computational models of the brain that are informed by our growing knowledge of brain anatomy and physiology.

  7. Complexity Optimization and High-Throughput Low-Latency Hardware Implementation of a Multi-Electrode Spike-Sorting Algorithm

    PubMed Central

    Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix

    2017-01-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989

  8. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    PubMed

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.

  9. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. To address this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF neural network and adopts a new hybrid encoding that optimizes both simultaneously. Binary encoding is used for the number of hidden-layer neurons, and real encoding is used for the connection weights; the number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete, so the least mean square (LMS) algorithm is used for further learning, finally yielding the new algorithm model. Tests on two UCI standard data sets show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which demonstrates that the new algorithm is valid.
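
    The sketch below illustrates the hybrid-encoding idea under simple assumptions: each individual carries a binary mask over candidate hidden neurons plus real-valued centers, fitness is the training error of the resulting RBF net, and a least-squares solve stands in for the LMS fine-tuning stage. Operators, rates, and the fixed RBF width are illustrative, not the paper's exact choices.

      import numpy as np

      rng = np.random.default_rng(0)

      def rbf_design(X, centers, width):
          # Gaussian RBF design matrix for inputs X and the selected centers.
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * width ** 2))

      def fitness(mask, centers, width, X, y):
          # Negative mean squared error of an RBF net that uses only the hidden
          # units switched on by the binary mask; output weights by least squares.
          if mask.sum() == 0:
              return -np.inf
          H = rbf_design(X, centers[mask.astype(bool)], width)
          w, *_ = np.linalg.lstsq(H, y, rcond=None)
          return -np.mean((H @ w - y) ** 2)

      def ga_rbf(X, y, max_hidden=20, pop=30, gens=50, width=1.0):
          # Hybrid encoding: binary mask (structure) + real-valued centers (weights),
          # evolved together by tournament selection, uniform crossover and mutation.
          dim = X.shape[1]
          masks = rng.integers(0, 2, (pop, max_hidden))
          cents = rng.normal(0.0, 1.0, (pop, max_hidden, dim))
          for _ in range(gens):
              fit = np.array([fitness(m, c, width, X, y) for m, c in zip(masks, cents)])
              new_m, new_c = [], []
              for _ in range(pop):
                  i, j = rng.integers(0, pop, 2)
                  pa = i if fit[i] > fit[j] else j            # tournament parent A
                  i, j = rng.integers(0, pop, 2)
                  pb = i if fit[i] > fit[j] else j            # tournament parent B
                  cross = rng.integers(0, 2, max_hidden).astype(bool)
                  child_m = np.where(cross, masks[pa], masks[pb])
                  child_c = np.where(cross[:, None], cents[pa], cents[pb])
                  flip = rng.random(max_hidden) < 0.05        # structure mutation
                  child_m = np.where(flip, 1 - child_m, child_m)
                  child_c = child_c + rng.normal(0.0, 0.1, child_c.shape)  # center mutation
                  new_m.append(child_m)
                  new_c.append(child_c)
              masks, cents = np.array(new_m), np.array(new_c)
          fit = np.array([fitness(m, c, width, X, y) for m, c in zip(masks, cents)])
          best = int(fit.argmax())
          return masks[best].astype(bool), cents[best]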

  10. Simple and Inexpensive Paper-Based Astrocyte Co-culture to Improve Survival of Low-Density Neuronal Networks

    PubMed Central

    Aebersold, Mathias J.; Thompson-Steckel, Greta; Joutang, Adriane; Schneider, Moritz; Burchert, Conrad; Forró, Csaba; Weydert, Serge; Han, Hana; Vörös, János

    2018-01-01

    Bottom-up neuroscience aims to engineer well-defined networks of neurons to investigate the functions of the brain. By reducing the complexity of the brain to achievable target questions, such in vitro bioassays better control experimental variables and can serve as a versatile tool for fundamental and pharmacological research. Astrocytes are a cell type critical to neuronal function, and the addition of astrocytes to neuron cultures can improve the quality of in vitro assays. Here, we present cellulose as an astrocyte culture substrate. Astrocytes cultured on the cellulose fiber matrix thrived and formed a dense 3D network. We devised a novel co-culture platform by suspending the easy-to-handle astrocytic paper cultures above neuronal networks of low densities typically needed for bottom-up neuroscience. There was significant improvement in neuronal viability after 5 days in vitro at densities ranging from 50,000 cells/cm2 down to isolated cells at 1,000 cells/cm2. Cultures exhibited spontaneous spiking even at the very low densities, with a significantly greater spike frequency per cell compared to control mono-cultures. Applying the co-culture platform to an engineered network of neurons on a patterned substrate resulted in significantly improved viability and almost doubled the density of live cells. Lastly, the shape of the cellulose substrate can easily be customized to a wide range of culture vessels, making the platform versatile for different applications that will further enable research in bottom-up neuroscience and drug development. PMID:29535595

  11. Interaction of compass sensing and object-motion detection in the locust central complex.

    PubMed

    Bockhorst, Tobias; Homberg, Uwe

    2017-07-01

    Goal-directed behavior is often complicated by unpredictable events, such as the appearance of a predator during directed locomotion. This situation requires adaptive responses like evasive maneuvers followed by subsequent reorientation and course correction. Here we study the possible neural underpinnings of such a situation in an insect, the desert locust. As in other insects, its sense of spatial orientation strongly relies on the central complex, a group of midline brain neuropils. The central complex houses sky compass cells that signal the polarization plane of skylight and thus indicate the animal's steering direction relative to the sun. Most of these cells additionally respond to small moving objects that drive fast sensory-motor circuits for escape. Here we investigate how the presentation of a moving object influences activity of the neurons during compass signaling. Cells responded in one of two ways: in some neurons, responses to the moving object were simply added to the compass response that had adapted during continuous stimulation by stationary polarized light. By contrast, other neurons disadapted, i.e., regained their full compass response to polarized light, when a moving object was presented. We propose that the latter case could help to prepare for reorientation of the animal after escape. A neuronal network based on central-complex architecture can explain both responses by slight changes in the dynamics and amplitudes of adaptation to polarized light in CL columnar input neurons of the system. NEW & NOTEWORTHY Neurons of the central complex in several insects signal compass directions through sensitivity to the sky polarization pattern. In locusts, these neurons also respond to moving objects. We show here that during polarized-light presentation, responses to moving objects override their compass signaling or restore adapted inhibitory as well as excitatory compass responses. A network model is presented to explain the variations of these responses that likely serve to redirect flight or walking following evasive maneuvers. Copyright © 2017 the American Physiological Society.

  12. From neurons to epidemics: How trophic coherence affects spreading processes

    NASA Astrophysics Data System (ADS)

    Klaise, Janis; Johnson, Samuel

    2016-06-01

    Trophic coherence, a measure of the extent to which the nodes of a directed network are organised in levels, has recently been shown to be closely related to many structural and dynamical aspects of complex systems, including graph eigenspectra, the prevalence or absence of feedback cycles, and linear stability. Furthermore, non-trivial trophic structures have been observed in networks of neurons, species, genes, metabolites, cellular signalling, concatenated words, P2P users, and world trade. Here, we consider two simple yet apparently quite different dynamical models—one a susceptible-infected-susceptible epidemic model adapted to include complex contagion and the other an Amari-Hopfield neural network—and show that in both cases the related spreading processes are modulated in similar ways by the trophic coherence of the underlying networks. To do this, we propose a network assembly model which can generate structures with tunable trophic coherence, limiting in either perfectly stratified networks or random graphs. We find that trophic coherence can exert a qualitative change in spreading behaviour, determining whether a pulse of activity will percolate through the entire network or remain confined to a subset of nodes, and whether such activity will quickly die out or endure indefinitely. These results could be important for our understanding of phenomena such as epidemics, rumours, shocks to ecosystems, neuronal avalanches, and many other spreading processes.

  13. Two-Dimensional Optoelectronic Graphene Nanoprobes for Neural Network

    NASA Astrophysics Data System (ADS)

    Hong, Tu; Kitko, Kristina; Wang, Rui; Zhang, Qi; Xu, Yaqiong

    2014-03-01

    The brain is the most complex network created by nature, with billions of neurons connected by trillions of synapses through sophisticated wiring patterns and countless modulatory mechanisms. Current methods for studying neuronal processes, whether electrophysiology or optical imaging, have significant limitations in throughput and sensitivity. Here, we use graphene, a monolayer of carbon atoms, as a two-dimensional nanoprobe for neural networks. Scanning photocurrent measurement is applied to detect the local integration of electrical and chemical signals in mammalian neurons. Such an interface between a nanoscale electronic device and a biological system provides not only ultra-high sensitivity but also sub-millisecond temporal resolution, owing to the high carrier mobility of graphene.

  14. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  15. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity

    PubMed Central

    Whittington, James C. R.; Bogacz, Rafal

    2017-01-01

    To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output. PMID:28333583
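
    The following sketch captures the spirit of such a predictive coding network: value nodes relax to minimize layer-wise prediction errors while the input and output layers are clamped, after which each weight matrix is updated with a purely local, Hebbian-like product of the error at its target layer and the activity at its source layer. Layer sizes, step sizes, and the tanh nonlinearity are illustrative assumptions, not the paper's exact setup.

      import numpy as np

      def f(x):  return np.tanh(x)
      def df(x): return 1.0 - np.tanh(x) ** 2

      def pc_train_step(Ws, x_in, target, infer_steps=100, dt=0.1, lr=0.01):
          # Ws[l] predicts layer l+1 activity from layer l activity.
          L = len(Ws)
          x = [x_in] + [np.zeros(W.shape[0]) for W in Ws]
          x[-1] = target.copy()                       # clamp the output layer to the target
          for _ in range(infer_steps):                # inference: relax the hidden value nodes
              eps = [x[l + 1] - Ws[l] @ f(x[l]) for l in range(L)]
              for l in range(1, L):                   # input and output layers stay clamped
                  x[l] += dt * (-eps[l - 1] + df(x[l]) * (Ws[l].T @ eps[l]))
          eps = [x[l + 1] - Ws[l] @ f(x[l]) for l in range(L)]
          for l in range(L):                          # local Hebbian-like weight update
              Ws[l] += lr * np.outer(eps[l], f(x[l]))
          return Ws

      # Example: 2-input, 3-hidden, 1-output network (illustrative sizes).
      rng = np.random.default_rng(0)
      Ws = [rng.normal(0, 0.5, (3, 2)), rng.normal(0, 0.5, (1, 3))]
      Ws = pc_train_step(Ws, np.array([0.5, -0.3]), np.array([1.0]))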

  16. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.

    PubMed

    Whittington, James C R; Bogacz, Rafal

    2017-05-01

    To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output.

  17. Engineering-Aligned 3D Neural Circuit in Microfluidic Device.

    PubMed

    Bang, Seokyoung; Na, Sangcheol; Jang, Jae Myung; Kim, Jinhyun; Jeon, Noo Li

    2016-01-07

    The brain is one of the most important and complex organs in the human body. Although various neural network models have been proposed for in vitro 3D neuronal networks, it has been difficult to mimic the functional and structural complexity of the in vivo neural circuit. Here, a microfluidic model of a simplified 3D neural circuit is reported. First, the microfluidic device is filled with Matrigel and continuous flow is delivered across the device during gelation. The fluidic flow aligns the extracellular matrix (ECM) components along the flow direction. Following the alignment of ECM fibers, neurites of primary rat cortical neurons are grown into the Matrigel at the average speed of 250 μm d(-1) and form axon bundles approximately 1500 μm in length at 6 days in vitro (DIV). Additionally, neural networks are developed from presynaptic to postsynaptic neurons at 14 DIV. The establishment of aligned 3D neural circuits is confirmed with the immunostaining of PSD-95 and synaptophysin and the observation of calcium signal transmission. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Forecasting PM10 in Algiers: efficacy of multilayer perceptron networks.

    PubMed

    Abderrahim, Hamza; Chellali, Mohammed Reda; Hamou, Ahmed

    2016-01-01

    Air quality forecasting has acquired high importance in atmospheric pollution research due to its negative impacts on the environment and human health. The artificial neural network is one of the most common soft computing methods that can be applied to such complex problems. In this paper, we used a multilayer perceptron neural network to forecast the daily averaged concentration of respirable suspended particulates with an aerodynamic diameter of not more than 10 μm (PM10) in Algiers, Algeria. The data for training and testing the network are based on samples collected from 2002 to 2006 by the SAMASAFIA network center at the El Hamma station. The meteorological data, air temperature, relative humidity, and wind speed, are used as network input parameters in forming the model. The training patterns used correspond to 41 days of data. The performance of the developed models was evaluated on the basis of the index of agreement and other statistical parameters. The overall performance of the model with 15 neurons was better than that of the models with 5 and 10 neurons: a multilayer network with as few as one hidden layer of 15 neurons gave quite reasonable results. Finally, an error of around 9% was reached.
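
    A minimal sketch of such a forecasting model using scikit-learn, assuming one hidden layer of 15 neurons and the three meteorological inputs named above; the random arrays below are placeholders for the El Hamma station records, whose loading and preprocessing are not specified here.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Placeholder data: rows of [air temperature, relative humidity, wind speed]
      # and daily mean PM10 targets (41 training patterns, as in the study).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(41, 3))
      y = rng.normal(loc=50.0, scale=10.0, size=41)

      # Multilayer perceptron with a single hidden layer of 15 neurons.
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000,
                                         random_state=0))
      model.fit(X, y)
      print(model.predict(X[:5]))      # forecast PM10 for the first five days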

  19. Temporal neural networks and transient analysis of complex engineering systems

    NASA Astrophysics Data System (ADS)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
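
    To make the memory structure concrete, the sketch below implements a discrete gamma memory (a cascade of identical leaky integrators) and a single tanh neuron whose output is passed through it, in the spirit of the locally recurrent, globally feedforward architecture described above; the order, memory parameter, and weights are illustrative.

      import numpy as np

      class GammaMemory:
          # Discrete gamma memory of order K: g_k[n] = (1 - mu)*g_k[n-1] + mu*g_{k-1}[n-1],
          # with g_0 the current input; mu trades temporal resolution against memory depth.
          def __init__(self, order, mu):
              self.mu = mu
              self.g = np.zeros(order + 1)     # g[0] holds the input tap
          def step(self, u):
              prev = self.g.copy()
              self.g[0] = u
              self.g[1:] = (1.0 - self.mu) * prev[1:] + self.mu * prev[:-1]
              return self.g[1:]                # the K delayed memory taps

      class LOGFNeuron:
          # A static tanh neuron whose output is filtered by a local gamma memory,
          # giving the unit a short-term memory of its own output history.
          def __init__(self, n_inputs, order=3, mu=0.5, seed=0):
              rng = np.random.default_rng(seed)
              self.w_in = rng.normal(0.0, 0.5, n_inputs)
              self.w_mem = rng.normal(0.0, 0.5, order)
              self.memory = GammaMemory(order, mu)
          def step(self, x):
              s = np.tanh(self.w_in @ x)       # static activation
              taps = self.memory.step(s)       # gamma-filtered output history
              return self.w_mem @ taps         # temporally weighted output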

  20. Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.

    PubMed

    Wirtssohn, Sarah; Ronacher, Bernhard

    2015-04-01

    Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.

  1. Self-sustained asynchronous irregular states and Up-Down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons.

    PubMed

    Destexhe, Alain

    2009-12-01

    Randomly-connected networks of integrate-and-fire (IF) neurons are known to display asynchronous irregular (AI) activity states, which resemble the discharge activity recorded in the cerebral cortex of awake animals. However, it is not clear whether such activity states are specific to simple IF models, or if they also exist in networks where neurons are endowed with complex intrinsic properties similar to those obtained in electrophysiological measurements. Here, we investigate the occurrence of AI states in networks of nonlinear IF neurons, such as the adaptive exponential IF (Brette-Gerstner-Izhikevich) model. This model can display intrinsic properties such as low-threshold spike (LTS), regular spiking (RS) or fast-spiking (FS). We successively investigate the oscillatory and AI dynamics of thalamic, cortical and thalamocortical networks using such models. AI states can be found in each case, sometimes with a surprisingly small network size of the order of a few tens of neurons. We show that the presence of LTS neurons in cortex or in thalamus explains the robust emergence of AI states for relatively small network sizes. Finally, we investigate the role of spike-frequency adaptation (SFA). In cortical networks with strong SFA in RS cells, the AI state is transient, but when SFA is reduced, AI states can be self-sustained for long times. In thalamocortical networks, AI states are found when the cortex is itself in an AI state, but with strong SFA, the thalamocortical network displays Up and Down state transitions, similar to intracellular recordings during slow-wave sleep or anesthesia. Self-sustained Up and Down states could also be generated by two-layer cortical networks with LTS cells. These models suggest that intrinsic properties such as adaptation and low-threshold bursting activity are crucial for the genesis and control of AI states in thalamocortical networks.
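
    For reference, a minimal simulation of the adaptive exponential integrate-and-fire neuron mentioned above, using forward Euler and the commonly quoted regular-spiking parameter set (pF, nS, mV, ms, pA); the input current and simulation settings are illustrative.

      import numpy as np

      def adex_spikes(I, T=500.0, dt=0.1,
                      C=281.0, gL=30.0, EL=-70.6, VT=-50.4, DeltaT=2.0,
                      tauw=144.0, a=4.0, b=80.5, Vreset=-70.6, Vpeak=20.0):
          # Adaptive exponential integrate-and-fire neuron, forward-Euler integration.
          # V: membrane potential (mV), w: adaptation current (pA), I: input current (pA).
          V, w, spikes = EL, 0.0, []
          for n in range(int(T / dt)):
              dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I) / C
              dw = (a * (V - EL) - w) / tauw
              V += dt * dV
              w += dt * dw
              if V >= Vpeak:                   # spike: reset the potential, increment adaptation
                  V = Vreset
                  w += b
                  spikes.append(n * dt)
          return spikes

      print(adex_spikes(I=1000.0)[:5])          # first few spike times (ms) for a 1 nA step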

  2. Focal expression of mutant huntingtin in the songbird basal ganglia disrupts cortico-basal ganglia networks and vocal sequences

    PubMed Central

    Tanaka, Masashi; Singh Alvarado, Jonnathan; Murugan, Malavika; Mooney, Richard

    2016-01-01

    The basal ganglia (BG) promote complex sequential movements by helping to select elementary motor gestures appropriate to a given behavioral context. Indeed, Huntington’s disease (HD), which causes striatal atrophy in the BG, is characterized by hyperkinesia and chorea. How striatal cell loss alters activity in the BG and downstream motor cortical regions to cause these disorganized movements remains unknown. Here, we show that expressing the genetic mutation that causes HD in a song-related region of the songbird BG destabilizes syllable sequences and increases overall vocal activity, but leaves the structure of individual syllables intact. These behavioral changes are paralleled by the selective loss of striatal neurons and reduction of inhibitory synapses on pallidal neurons that serve as the BG output. Chronic recordings in singing birds revealed disrupted temporal patterns of activity in pallidal neurons and downstream cortical neurons. Moreover, reversible inactivation of the cortical neurons rescued the disorganized vocal sequences in transfected birds. These findings shed light on a key role of temporal patterns of cortico-BG activity in the regulation of complex motor sequences and show how a genetic mutation alters cortico-BG networks to cause disorganized movements. PMID:26951661

  3. Exploring complex networks.

    PubMed

    Strogatz, S H

    2001-03-08

    The study of networks pervades all of science, from neurobiology to statistical physics. The most basic issues are structural: how does one characterize the wiring diagram of a food web or the Internet or the metabolic network of the bacterium Escherichia coli? Are there any unifying principles underlying their topology? From the perspective of nonlinear dynamics, we would also like to understand how an enormous network of interacting dynamical systems-be they neurons, power stations or lasers-will behave collectively, given their individual dynamics and coupling architecture. Researchers are only now beginning to unravel the structure and dynamics of complex networks.

  4. "Scientific roots" of dualism in neuroscience.

    PubMed

    Arshavsky, Yuri I

    2006-07-01

    Although the dualistic concept is unpopular among neuroscientists involved in experimental studies of the brain, neurophysiological literature is full of covert dualistic statements on the possibility of understanding neural mechanisms of human consciousness. Particularly, the covert dualistic attitude is exhibited in the unwillingness to discuss neural mechanisms of consciousness, leaving the problem of consciousness to psychologists and philosophers. This covert dualism seems to be rooted in the main paradigm of neuroscience that suggests that cognitive functions, such as language production and comprehension, face recognition, declarative memory, emotions, etc., are performed by neural networks consisting of simple elements. I argue that neural networks of any complexity consisting of neurons whose function is limited to the generation of electrical potentials and the transmission of signals to other neurons are hardly capable of producing human mental activity, including consciousness. Based on results obtained in physiological, morphological, clinical, and genetic studies of cognitive functions (mainly linguistic ones), I advocate the hypothesis that the performance of cognitive functions is based on complex cooperative activity of "complex" neurons that are carriers of "elementary cognition." The uniqueness of human cognitive functions, which has a genetic basis, is determined by the specificity of genes expressed by these "complex" neurons. The main goal of the review is to show that the identification of the genes implicated in cognitive functions and the understanding of a functional role of their products is a possible way to overcome covert dualism in neuroscience.

  5. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric laterally inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: a presynaptic part and a postsynaptic part. In the presynaptic part, there are a number of laterally inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
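
    A loose sketch of the postsynaptic multiplication idea under strong simplifying assumptions: direction-selective responses arranged on a cycle are correlated, after a delay, with their clockwise or counter-clockwise neighbours, and the summed products serve as cw and ccw rotation cues. This is an interpretation for illustration, not the paper's network.

      import numpy as np

      def rotation_cues(ds_responses, delay=1):
          # ds_responses[t, i]: response of the i-th direction-selective unit at time t,
          # with units arranged in a cycle. The delayed excitation of each unit is
          # multiplied by the current excitation of its cw (or ccw) neighbour and summed.
          r = np.asarray(ds_responses, dtype=float)
          delayed, current = r[:-delay], r[delay:]
          cw = float(np.sum(delayed * np.roll(current, -1, axis=1)))
          ccw = float(np.sum(delayed * np.roll(current, 1, axis=1)))
          return cw, ccw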

  6. Synaptic dynamics contribute to long-term single neuron response fluctuations.

    PubMed

    Reinartz, Sebastian; Biro, Istvan; Gal, Asaf; Giugliano, Michele; Marom, Shimon

    2014-01-01

    Firing rate variability at the single neuron level is characterized by long-memory processes and complex statistics over a wide range of time scales (from milliseconds up to several hours). Here, we focus on the contribution of the non-stationary efficacy of the ensemble of synapses activated in response to a given stimulus to single neuron response variability. We present and validate a method tailored for controlled and specific long-term activation of a single cortical neuron in vitro via synaptic or antidromic stimulation, enabling a clear separation between two determinants of neuronal response variability: membrane excitability dynamics vs. synaptic dynamics. Applying this method we show that, within the range of physiological activation frequencies, the synaptic ensemble of a given neuron is a key contributor to the neuronal response variability, long-memory processes and complex statistics observed over extended time scales. Synaptic transmission dynamics impact response variability at stimulation rates that are substantially lower than those that drive excitability resources to fluctuate. Implications for network-embedded neurons are discussed.

  7. Rich-Club Organization in Effective Connectivity among Cortical Neurons.

    PubMed

    Nigam, Sunny; Shimono, Masanori; Ito, Shinya; Yeh, Fang-Chin; Timme, Nicholas; Myroshnychenko, Maxym; Lapish, Christopher C; Tosi, Zachary; Hottowy, Pawel; Smith, Wesley C; Masmanidis, Sotiris C; Litke, Alan M; Sporns, Olaf; Beggs, John M

    2016-01-20

    The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a "rich club." We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. Significance statement: Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases. Copyright © 2016 Nigam et al.

  8. Rich-Club Organization in Effective Connectivity among Cortical Neurons

    PubMed Central

    Shimono, Masanori; Ito, Shinya; Yeh, Fang-Chin; Timme, Nicholas; Myroshnychenko, Maxym; Lapish, Christopher C.; Tosi, Zachary; Hottowy, Pawel; Smith, Wesley C.; Masmanidis, Sotiris C.; Litke, Alan M.; Sporns, Olaf; Beggs, John M.

    2016-01-01

    The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a “rich club.” We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. SIGNIFICANCE STATEMENT Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases. PMID:26791200
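
    For readers unfamiliar with the measure, a minimal histogram-based estimate of pairwise transfer entropy between binned spike trains is sketched below; this is a basic one-bin-history estimator with illustrative binning, not the validated delay-resolved version used in the study.

      import numpy as np

      def transfer_entropy(x, y):
          """TE (bits) from binary spike train x to y with one-bin history.

          x, y : 1-D arrays of 0/1 spike counts per time bin, same length.
          """
          x, y = np.asarray(x), np.asarray(y)
          yt1, yt, xt = y[1:], y[:-1], x[:-1]
          joint = np.zeros((2, 2, 2))          # joint histogram over (y_{t+1}, y_t, x_t)
          for a, b, c in zip(yt1, yt, xt):
              joint[a, b, c] += 1
          joint /= joint.sum()
          p_yt_xt = joint.sum(axis=0)          # p(y_t, x_t)
          p_yt1_yt = joint.sum(axis=2)         # p(y_{t+1}, y_t)
          p_yt = joint.sum(axis=(0, 2))        # p(y_t)
          te = 0.0
          for a in (0, 1):
              for b in (0, 1):
                  for c in (0, 1):
                      p = joint[a, b, c]
                      if p > 0:
                          te += p * np.log2(p * p_yt[b] / (p_yt_xt[b, c] * p_yt1_yt[a, b]))
          return te

      # Toy usage: y copies x with a one-bin lag, so TE(x -> y) should be positive
      # while TE(y -> x) stays near zero.
      rng = np.random.default_rng(0)
      x = (rng.random(10000) < 0.2).astype(int)
      y = np.roll(x, 1)
      print(transfer_entropy(x, y), transfer_entropy(y, x))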

  9. Beyond blow-up in excitatory integrate and fire neuronal networks: Refractory period and spontaneous activity.

    PubMed

    Cáceres, María J; Perthame, Benoît

    2014-06-07

    The Network Noisy Leaky Integrate and Fire equation is among the simplest models allowing for a self-consistent description of neural networks and gives a rule for determining the probability of finding a neuron at the potential v. However, its mathematical structure is still poorly understood and, concerning its solutions, very few results are available. Among them, a recent result shows blow-up in finite time for fully excitatory networks. The intuitive explanation is that each firing neuron induces a discharge of the others, thus increasing the activity and consequently the discharge rate of the full network. In order to better understand the details of the phenomenon and show that the equation is more complex and fruitful than expected, we analyze the model further. We extend the finite-time blow-up result to the case when neurons, after firing, enter a refractory state for a given period of time. We also show that spontaneous activity may occur when, additionally, randomness is included in the firing potential VF, in regimes where blow-up occurs for a fixed value of VF. Copyright © 2014 Elsevier Ltd. All rights reserved.
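
    For context, the mean-field equation referred to here is usually written in a form close to the following (notation and the handling of the refractory state may differ slightly from the paper's):

      \[
        \partial_t p(v,t) + \partial_v\!\big[(-v + b\,N(t))\,p(v,t)\big] - a\,\partial_{vv}\,p(v,t) = N(t)\,\delta(v - V_R), \qquad v \le V_F,
      \]
      \[
        N(t) = -a\,\partial_v p(V_F,t) \ge 0, \qquad p(V_F,t) = 0, \qquad \int_{-\infty}^{V_F} p(v,t)\,dv = 1 .
      \]

    Here p(v,t) is the probability density of finding a neuron at potential v, a is the noise intensity, b > 0 the excitatory connectivity strength, and V_R < V_F the reset and firing potentials; blow-up corresponds to the firing rate N(t) diverging in finite time.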

  10. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.
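
    The multivalued neuron underlying the MLMVN operates on the complex unit circle; the sketch below illustrates only the discrete k-valued activation idea, with illustrative weights and inputs, and omits the MLMVN architecture and its derivative-free learning rule.

      import cmath

      def mvn_activation(z, k):
          """Discrete k-valued activation: map the weighted sum z onto the nearest
          sector of the unit circle and return the corresponding k-th root of unity."""
          angle = cmath.phase(z) % (2 * cmath.pi)      # argument in [0, 2*pi)
          j = int(angle / (2 * cmath.pi / k))          # sector index 0..k-1
          return cmath.exp(1j * 2 * cmath.pi * j / k)

      def mvn_output(weights, inputs, k):
          """Weighted sum of complex inputs (weights[0] is the bias) passed through
          the k-valued activation."""
          z = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
          return mvn_activation(z, k)

      # Toy usage with k = 4 sectors and phase-encoded inputs.
      k = 4
      inputs = [cmath.exp(1j * cmath.pi / 3), cmath.exp(1j * cmath.pi)]
      weights = [0.1 + 0.2j, 1.0 + 0.0j, 0.5 - 0.5j]
      print(mvn_output(weights, inputs, k))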

  11. Artificial neural networks using complex numbers and phase encoded weights.

    PubMed

    Michel, Howard E; Awwal, Abdul Ahad S

    2010-04-01

    The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.

  12. Overexpression of cypin alters dendrite morphology, single neuron activity, and network properties via distinct mechanisms

    NASA Astrophysics Data System (ADS)

    Rodríguez, Ana R.; O'Neill, Kate M.; Swiatkowski, Przemyslaw; Patel, Mihir V.; Firestein, Bonnie L.

    2018-02-01

    Objective. This study investigates the effect that overexpression of cytosolic PSD-95 interactor (cypin), a regulator of synaptic PSD-95 protein localization and a core regulator of dendrite branching, exerts on the electrical activity of rat hippocampal neurons and networks. Approach. We cultured rat hippocampal neurons and used lipid-mediated transfection and lentiviral gene transfer to achieve high levels of cypin or cypin mutant (cypinΔPDZ PSD-95 non-binding) expression cellularly and network-wide, respectively. Main results. Our analysis revealed that although overexpression of cypin and cypinΔPDZ increase dendrite numbers and decrease spine density, cypin and cypinΔPDZ distinctly regulate neuronal activity. At the single cell level, cypin promotes decreases in bursting activity while cypinΔPDZ reduces sEPSC frequency and further decreases bursting compared to cypin. At the network level, by using the Fano factor as a measure of spike count variability, cypin overexpression results in an increase in variability of spike count, and this effect is abolished when cypin cannot bind PSD-95. This variability is also dependent on baseline activity levels and on mean spike rate over time. Finally, our spike sorting data show that overexpression of cypin results in a more complex distribution of spike waveforms and that binding to PSD-95 is essential for this complexity. Significance. Our data suggest that dendrite morphology does not play a major role in cypin action on electrical activity.
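
    The Fano factor used here as the measure of spike count variability is simply the variance-to-mean ratio of spike counts in a window; a minimal computation, with an illustrative window length and surrogate spike times, is:

      import numpy as np

      def fano_factor(spike_times, window, t_stop):
          """Variance-to-mean ratio of spike counts in non-overlapping windows."""
          edges = np.arange(0.0, t_stop + window, window)
          counts, _ = np.histogram(spike_times, bins=edges)
          return counts.var() / counts.mean()

      # Toy usage: Poisson-like spike times over 100 s, counted in 1 s windows.
      rng = np.random.default_rng(1)
      spikes = np.sort(rng.uniform(0, 100, size=500))
      print(fano_factor(spikes, window=1.0, t_stop=100.0))   # ~1 for a Poisson process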

  13. Regeneration in the era of functional genomics and gene network analysis.

    PubMed

    Smith, Joel; Morgan, Jennifer R; Zottoli, Steven J; Smith, Peter J; Buxbaum, Joseph D; Bloom, Ona E

    2011-08-01

    What gives an organism the ability to regrow tissues and to recover function where another organism fails is the central problem of regenerative biology. The challenge is to describe the mechanisms of regeneration at the molecular level, delivering detailed insights into the many components that are cross-regulated. In other words, a broad, yet deep dissection of the system-wide network of molecular interactions is needed. Functional genomics has been used to elucidate gene regulatory networks (GRNs) in developing tissues, which, like regeneration, are complex systems. Therefore, we reason that the GRN approach, aided by next generation technologies, can also be applied to study the molecular mechanisms underlying the complex functions of regeneration. We ask what characteristics a model system must have to support a GRN analysis. Our discussion focuses on regeneration in the central nervous system, where loss of function has particularly devastating consequences for an organism. We examine a cohort of cells conserved across all vertebrates, the reticulospinal (RS) neurons, which lend themselves well to experimental manipulations. In the lamprey, a jawless vertebrate, there are giant RS neurons whose large size and ability to regenerate make them particularly suited for a GRN analysis. Adding to their value, a distinct subset of lamprey RS neurons reproducibly fail to regenerate, presenting an opportunity for side-by-side comparison of gene networks that promote or inhibit regeneration. Thus, determining the GRN for regeneration in RS neurons will provide a mechanistic understanding of the fundamental cues that lead to success or failure to regenerate.

  14. Human Age Recognition by Electrocardiogram Signal Based on Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dasgupta, Hirak

    2016-12-01

    The objective of this work is to make a neural network function approximation model to detect human age from the electrocardiogram (ECG) signal. The input vectors of the neural network are the Katz fractal dimension of the ECG signal, the frequencies in the QRS complex, sex (male or female, represented by a numeric constant) and the average successive R-R peak distance of a particular ECG signal. The QRS complex has been detected by a short-time Fourier transform algorithm. The successive R peaks have been detected by first cutting the signal into periods using an auto-correlation method and then finding the absolute value of the highest point in each period. The neural network used in this problem consists of two layers, with sigmoid neurons in the input layer and a linear neuron in the output layer. The result shows the mean of errors as -0.49, 1.03, 0.79 years and the standard deviation of errors as 1.81, 1.77, 2.70 years during training, cross validation and testing with unknown data sets, respectively.
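
    One of the input features mentioned, the Katz fractal dimension of the waveform, can be computed with Katz's formula as sketched below; the test signal is a stand-in, and the ECG preprocessing used in the paper is not reproduced.

      import numpy as np

      def katz_fd(x):
          """Katz fractal dimension of a 1-D waveform x (unit sample spacing assumed)."""
          x = np.asarray(x, dtype=float)
          n = len(x) - 1                                   # number of steps
          dists = np.sqrt(1.0 + np.diff(x) ** 2)           # successive point distances
          L = dists.sum()                                  # total curve length
          idx = np.arange(len(x))
          d = np.sqrt(idx ** 2 + (x - x[0]) ** 2).max()    # max distance from first point
          return np.log10(n) / (np.log10(n) + np.log10(d / L))

      # Toy usage on a noisy sine, standing in for one ECG segment.
      t = np.linspace(0, 2 * np.pi, 500)
      signal = np.sin(5 * t) + 0.05 * np.random.default_rng(2).standard_normal(len(t))
      print(katz_fd(signal))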

  15. Altered proliferation and networks in neural cells derived from idiopathic autistic individuals.

    PubMed

    Marchetto, Maria C; Belinson, Haim; Tian, Yuan; Freitas, Beatriz C; Fu, Chen; Vadodaria, Krishna; Beltrao-Braga, Patricia; Trujillo, Cleber A; Mendes, Ana P D; Padmanabhan, Krishnan; Nunez, Yanelli; Ou, Jing; Ghosh, Himanish; Wright, Rebecca; Brennand, Kristen; Pierce, Karen; Eichenfield, Lawrence; Pramparo, Tiziano; Eyler, Lisa; Barnes, Cynthia C; Courchesne, Eric; Geschwind, Daniel H; Gage, Fred H; Wynshaw-Boris, Anthony; Muotri, Alysson R

    2017-06-01

    Autism spectrum disorders (ASD) are common, complex and heterogeneous neurodevelopmental disorders. Cellular and molecular mechanisms responsible for ASD pathogenesis have been proposed based on genetic studies, brain pathology and imaging, but a major impediment to testing ASD hypotheses is the lack of human cell models. Here, we reprogrammed fibroblasts to generate induced pluripotent stem cells, neural progenitor cells (NPCs) and neurons from ASD individuals with early brain overgrowth and non-ASD controls with normal brain size. ASD-derived NPCs display increased cell proliferation because of dysregulation of a β-catenin/BRN2 transcriptional cascade. ASD-derived neurons display abnormal neurogenesis and reduced synaptogenesis leading to functional defects in neuronal networks. Interestingly, defects in neuronal networks could be rescued by insulin-like growth factor 1 (IGF-1), a drug that is currently in clinical trials for ASD. This work demonstrates that selection of ASD subjects based on endophenotypes unraveled biologically relevant pathway disruption and revealed a potential cellular mechanism for the therapeutic effect of IGF-1.

  16. Respiratory Network Stability and Modulatory Response to Substance P Require Nalcn.

    PubMed

    Yeh, Szu-Ying; Huang, Wei-Hsiang; Wang, Wei; Ward, Christopher S; Chao, Eugene S; Wu, Zhenyu; Tang, Bin; Tang, Jianrong; Sun, Jenny J; Esther van der Heijden, Meike; Gray, Paul A; Xue, Mingshan; Ray, Russell S; Ren, Dejian; Zoghbi, Huda Y

    2017-04-19

    Respiration is a rhythmic activity as well as one that requires responsiveness to internal and external circumstances; both the rhythm and neuromodulatory responses of breathing are controlled by brainstem neurons in the preBötzinger complex (preBötC) and the retrotrapezoid nucleus (RTN), but the specific ion channels essential to these activities remain to be identified. Because deficiency of sodium leak channel, non-selective (Nalcn) causes lethal apnea in humans and mice, we investigated Nalcn function in these neuronal groups. We found that one-third of mice lacking Nalcn in excitatory preBötC neurons died soon after birth; surviving mice developed apneas in adulthood. Interestingly, in both preBötC and RTN neurons, the Nalcn current influences the resting membrane potential, contributes to maintenance of stable network activity, and mediates modulatory responses to the neuropeptide substance P. These findings reveal Nalcn's specific role in both rhythmic stability and responsiveness to neuropeptides within the respiratory network. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Graph Theoretic and Motif Analyses of the Hippocampal Neuron Type Potential Connectome.

    PubMed

    Rees, Christopher L; Wheeler, Diek W; Hamilton, David J; White, Charise M; Komendantov, Alexander O; Ascoli, Giorgio A

    2016-01-01

    We computed the potential connectivity map of all known neuron types in the rodent hippocampal formation by supplementing scantly available synaptic data with spatial distributions of axons and dendrites from the open-access knowledge base Hippocampome.org. The network that results from this endeavor, the broadest and most complete for a mammalian cortical region at the neuron-type level to date, contains more than 3200 connections among 122 neuron types across six subregions. Analyses of these data using graph theory metrics unveil the fundamental architectural principles of the hippocampal circuit. Globally, we identify a highly specialized topology minimizing communication cost; a modular structure underscoring the prominence of the trisynaptic loop; a core set of neuron types serving as information-processing hubs as well as a distinct group of particular antihub neurons; a nested, two-tier rich club managing much of the network traffic; and an innate resilience to random perturbations. At the local level, we uncover the basic building blocks, or connectivity patterns, that combine to produce complex global functionality, and we benchmark their utilization in the circuit relative to random networks. Taken together, these results provide a comprehensive connectivity profile of the hippocampus, yielding novel insights on its functional operations at the computationally crucial level of neuron types.

  18. A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks

    PubMed Central

    Schaffer, Evan S.; Ostojic, Srdjan; Abbott, L. F.

    2013-01-01

    Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons. PMID:24204236

  19. Phase Transitions in Living Neural Networks

    NASA Astrophysics Data System (ADS)

    Williams-Garcia, Rashid Vladimir

    Our nervous systems are composed of intricate webs of interconnected neurons interacting in complex ways. These complex interactions result in a wide range of collective behaviors with implications for features of brain function, e.g., information processing. Under certain conditions, such interactions can drive neural network dynamics towards critical phase transitions, where power-law scaling is conjectured to allow optimal behavior. Recent experimental evidence is consistent with this idea and it seems plausible that healthy neural networks would tend towards optimality. This hypothesis, however, is based on two problematic assumptions, which I describe and for which I present alternatives in this thesis. First, critical transitions may vanish due to the influence of an environment, e.g., a sensory stimulus, and so living neural networks may be incapable of achieving "critical" optimality. I develop a framework known as quasicriticality, in which a relative optimality can be achieved depending on the strength of the environmental influence. Second, the power-law scaling supporting this hypothesis is based on statistical analysis of cascades of activity known as neuronal avalanches, which conflate causal and non-causal activity, thus confounding important dynamical information. In this thesis, I present a new method to unveil causal links, known as causal webs, between neuronal activations, thus allowing for experimental tests of the quasicriticality hypothesis and other practical applications.

  20. Closed-Loop Control of Complex Networks: A Trade-Off between Time and Energy

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Zheng; Leng, Si-Yang; Lai, Ying-Cheng; Grebogi, Celso; Lin, Wei

    2017-11-01

    Controlling complex nonlinear networks is largely an unsolved problem at the present. Existing works focus either on open-loop control strategies and their energy consumptions or on closed-loop control schemes with an infinite-time duration. We articulate a finite-time, closed-loop controller with an eye toward the physical and mathematical underpinnings of the trade-off between the control time and energy as well as their dependence on the network parameters and structure. The closed-loop controller is tested on a large number of real systems including stem cell differentiation, food webs, random ecosystems, and spiking neuronal networks. Our results represent a step forward in developing a rigorous and general framework to control nonlinear dynamical networks with a complex topology.

  1. Acetylcholine as a neuromodulator: cholinergic signaling shapes nervous system function and behavior

    PubMed Central

    Picciotto, Marina R.; Higley, Michael J.; Mineur, Yann S.

    2012-01-01

    Acetylcholine in the brain alters neuronal excitability, influences synaptic transmission, induces synaptic plasticity and coordinates the firing of groups of neurons. As a result, it changes the state of neuronal networks throughout the brain and modifies their response to internal and external inputs: the classical role of a neuromodulator. Here we identify actions of cholinergic signaling on cellular and synaptic properties of neurons in several brain areas and discuss the consequences of this signaling on behaviors related to drug abuse, attention, food intake, and affect. The diverse effects of acetylcholine depend on the site of release, the receptor subtypes, and the target neuronal population; however, a common theme is that acetylcholine potentiates behaviors that are adaptive to environmental stimuli and decreases responses to ongoing stimuli that do not require immediate action. The ability of acetylcholine to coordinate the response of neuronal networks in many brain areas makes cholinergic modulation an essential mechanism underlying complex behaviors. PMID:23040810

  2. Energy-efficient neural information processing in individual neurons and neuronal networks.

    PubMed

    Yu, Lianchun; Yu, Yuguo

    2017-11-01

    Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. The factors range from ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, and cortical wiring, to the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy and space costs. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.

  3. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  4. PTEN Loss Increases the Connectivity of Fast Synaptic Motifs and Functional Connectivity in a Developing Hippocampal Network.

    PubMed

    Barrows, Caitlynn M; McCabe, Matthew P; Chen, Hongmei; Swann, John W; Weston, Matthew C

    2017-09-06

    Changes in synaptic strength and connectivity are thought to be a major mechanism through which many gene variants cause neurological disease. Hyperactivation of the PI3K-mTOR signaling network, via loss of function of repressors such as PTEN, causes epilepsy in humans and animal models, and altered mTOR signaling may contribute to a broad range of neurological diseases. Changes in synaptic transmission have been reported in animal models of PTEN loss; however, the full extent of these changes, and their effect on network function, is still unknown. To better understand the scope of these changes, we recorded from pairs of mouse hippocampal neurons cultured in a two-neuron microcircuit configuration that allowed us to characterize all four major connection types within the hippocampus. Loss of PTEN caused changes in excitatory and inhibitory connectivity, and these changes were postsynaptic, presynaptic, and transynaptic, suggesting that disruption of PTEN has the potential to affect most connection types in the hippocampal circuit. Given the complexity of the changes at the synaptic level, we measured changes in network behavior after deleting Pten from neurons in an organotypic hippocampal slice network. Slices containing Pten-deleted neurons showed increased recruitment of neurons into network bursts. Importantly, these changes were not confined to Pten-deleted neurons, but involved the entire network, suggesting that the extensive changes in synaptic connectivity rewire the entire network in such a way that promotes a widespread increase in functional connectivity. SIGNIFICANCE STATEMENT Homozygous deletion of the Pten gene in neuronal subpopulations in the mouse serves as a valuable model of epilepsy caused by mTOR hyperactivation. To better understand how gene deletions lead to altered neuronal activity, we investigated the synaptic and network effects that occur 1 week after Pten deletion. PTEN loss increased the connectivity of all four types of hippocampal synaptic connections, including two forms of increased inhibition of inhibition, and increased network functional connectivity. These data suggest that single gene mutations that cause neurological diseases such as epilepsy may affect a surprising range of connection types. Moreover, given the robustness of homeostatic plasticity, these diverse effects on connection types may be necessary to cause network phenotypes such as increased synchrony. Copyright © 2017 the authors 0270-6474/17/378595-17$15.00/0.

  5. PTEN Loss Increases the Connectivity of Fast Synaptic Motifs and Functional Connectivity in a Developing Hippocampal Network

    PubMed Central

    McCabe, Matthew P.; Chen, Hongmei; Swann, John W.

    2017-01-01

    Changes in synaptic strength and connectivity are thought to be a major mechanism through which many gene variants cause neurological disease. Hyperactivation of the PI3K-mTOR signaling network, via loss of function of repressors such as PTEN, causes epilepsy in humans and animal models, and altered mTOR signaling may contribute to a broad range of neurological diseases. Changes in synaptic transmission have been reported in animal models of PTEN loss; however, the full extent of these changes, and their effect on network function, is still unknown. To better understand the scope of these changes, we recorded from pairs of mouse hippocampal neurons cultured in a two-neuron microcircuit configuration that allowed us to characterize all four major connection types within the hippocampus. Loss of PTEN caused changes in excitatory and inhibitory connectivity, and these changes were postsynaptic, presynaptic, and transynaptic, suggesting that disruption of PTEN has the potential to affect most connection types in the hippocampal circuit. Given the complexity of the changes at the synaptic level, we measured changes in network behavior after deleting Pten from neurons in an organotypic hippocampal slice network. Slices containing Pten-deleted neurons showed increased recruitment of neurons into network bursts. Importantly, these changes were not confined to Pten-deleted neurons, but involved the entire network, suggesting that the extensive changes in synaptic connectivity rewire the entire network in such a way that promotes a widespread increase in functional connectivity. SIGNIFICANCE STATEMENT Homozygous deletion of the Pten gene in neuronal subpopulations in the mouse serves as a valuable model of epilepsy caused by mTOR hyperactivation. To better understand how gene deletions lead to altered neuronal activity, we investigated the synaptic and network effects that occur 1 week after Pten deletion. PTEN loss increased the connectivity of all four types of hippocampal synaptic connections, including two forms of increased inhibition of inhibition, and increased network functional connectivity. These data suggest that single gene mutations that cause neurological diseases such as epilepsy may affect a surprising range of connection types. Moreover, given the robustness of homeostatic plasticity, these diverse effects on connection types may be necessary to cause network phenotypes such as increased synchrony. PMID:28751459

  6. Individual nodeʼs contribution to the mesoscale of complex networks

    NASA Astrophysics Data System (ADS)

    Klimm, Florian; Borge-Holthoefer, Javier; Wessel, Niels; Kurths, Jürgen; Zamora-López, Gorka

    2014-12-01

    The analysis of complex networks is devoted to the statistical characterization of the topology of graphs at different scales of organization in order to understand their functionality. While the modular structure of networks has become an essential element for better apprehending their complexity, efforts to characterize the mesoscale of networks have focused on the identification of the modules rather than on describing the mesoscale in an informative manner. Here we propose a framework to characterize the position every node takes within the modular configuration of complex networks and to evaluate their function accordingly. For illustration, we apply this framework to a set of synthetic networks, empirical neural networks, and the transcriptional regulatory network of Mycobacterium tuberculosis. We find that the architectures of both neuronal and transcriptional networks are optimized for the processing of multisensory information, with the coexistence of well-defined modules of specialized components and the presence of hubs conveying information from and to the distinct functional domains.

  7. Synchronization and coordination of sequences in two neural ensembles

    NASA Astrophysics Data System (ADS)

    Venaille, Antoine; Varona, Pablo; Rabinovich, Mikhail I.

    2005-06-01

    There are many types of neural networks involved in the sequential motor behavior of animals. In higher species, the control and coordination of the network dynamics is a function of the higher levels of the central nervous system, in particular the cerebellum. However, in many cases, especially for invertebrates, such coordination is the result of direct synaptic connections between small circuits. We show here that even the chaotic sequential activity of small model networks can be coordinated by electrotonic synapses connecting one or several pairs of neurons that belong to two different networks. As an example, we analyzed the coordination and synchronization of the sequential activity of two statocyst model networks of the marine mollusk Clione. The statocysts are gravity sensory organs that play a key role in postural control of the animal and the generation of a complex hunting motor program. Each statocyst network was modeled by a small ensemble of neurons with Lotka-Volterra type dynamics and nonsymmetric inhibitory interactions. We studied how two such networks were synchronized by electrical coupling in the presence of an external signal which leads to winnerless competition among the neurons. We found that, as a function of the number and the strength of connections between the two networks, it is possible to coordinate and synchronize the sequences that each network generates with its own chaotic dynamics. In spite of the chaoticity, the coordination of the signals is established through an activation sequence lock for those neurons that are active at a particular instant of time.
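
    The rate dynamics referred to here are of the generalized Lotka-Volterra type; a minimal single-network sketch with asymmetric inhibition is given below, where the connection matrix, gains and Euler integration are illustrative rather than the parameters of the cited statocyst model.

      import numpy as np

      def lotka_volterra_step(a, sigma, rho, dt=0.01):
          """One Euler step of generalized Lotka-Volterra rate dynamics:
          da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j)."""
          return a + dt * a * (sigma - rho @ a)

      # Three units with asymmetric (non-symmetric) inhibition produce cyclic
      # winner-switching instead of a single stable winner.
      rho = np.array([[1.0, 0.5, 2.0],
                      [2.0, 1.0, 0.5],
                      [0.5, 2.0, 1.0]])
      sigma = np.ones(3)                      # stimulus-dependent gains
      a = np.array([0.4, 0.3, 0.3])
      trace = []
      for _ in range(20000):
          a = lotka_volterra_step(a, sigma, rho)
          trace.append(a.copy())
      trace = np.array(trace)
      print(trace[-5:])                        # activities keep cycling among the units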

  8. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations.

    PubMed

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.

  9. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    PubMed Central

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology. PMID:26441628
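
    The waveform-relaxation idea at the heart of this framework can be illustrated on a toy pair of gap-junction-coupled leaky units: each unit is integrated over a communication interval using the other unit's waveform from the previous iteration, and the iteration repeats until the waveforms stop changing. The equations, coupling strength and tolerances below are illustrative; this is not the NEST implementation.

      import numpy as np

      def solve_unit(v0, other_trace, g, dt):
          """Integrate dv/dt = -v + g*(other - v) over the interval, treating the
          other unit's waveform (other_trace) as known."""
          v = np.empty_like(other_trace)
          v[0] = v0
          for k in range(1, len(v)):
              v[k] = v[k-1] + dt * (-v[k-1] + g * (other_trace[k-1] - v[k-1]))
          return v

      def waveform_relaxation(v1_0, v2_0, g=0.3, dt=0.01, steps=100,
                              max_iter=50, tol=1e-10):
          """Jacobi waveform relaxation: iterate per-unit solves over one interval
          until both voltage waveforms converge."""
          v1 = np.full(steps + 1, v1_0)        # initial guess: constant waveforms
          v2 = np.full(steps + 1, v2_0)
          for _ in range(max_iter):
              v1_new = solve_unit(v1_0, v2, g, dt)
              v2_new = solve_unit(v2_0, v1, g, dt)
              err = max(np.abs(v1_new - v1).max(), np.abs(v2_new - v2).max())
              v1, v2 = v1_new, v2_new
              if err < tol:
                  break
          return v1, v2

      v1, v2 = waveform_relaxation(1.0, -1.0)
      print(v1[-1], v2[-1])   # coupled voltages relax toward each other and toward 0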

  10. The formation and distribution of hippocampal synapses on patterned neuronal networks

    NASA Astrophysics Data System (ADS)

    Dowell-Mesfin, Natalie M.

    Communication within the central nervous system is highly orchestrated, with neurons forming trillions of specialized junctions called synapses. In vivo, biochemical and topographical cues can regulate neuronal growth. Biochemical cues also influence synaptogenesis and synaptic plasticity. The effects of topography on the development of synapses have been less studied. In vitro, neuronal growth is unorganized and complex, making it difficult to study the development of networks. Patterned topographical cues guide and control the growth of neuronal processes (axons and dendrites) into organized networks. The aim of this dissertation was to determine if patterned topographical cues can influence synapse formation and distribution. Standard fabrication and compression molding procedures were used to produce silicon masters and polystyrene replicas with topographical cues presented as 1 μm high pillars with diameters of 0.5 and 2.0 μm and gaps of 1.0 to 5.0 μm. Embryonic rat hippocampal neurons were grown on the patterned surfaces. A developmental analysis with immunocytochemistry was used to assess the distribution of pre- and post-synaptic proteins. Activity-dependent pre-synaptic vesicle uptake using functional imaging dyes was also performed. Adaptive filtering computer algorithms identified synapses by segmenting juxtaposed pairs of pre- and post-synaptic labels. Synapse number and area were automatically extracted from each deconvolved data set. In addition, neuronal processes were traced automatically to assess changes in synapse distribution. The results of these experiments demonstrated that patterned topographic cues can induce organized and functional neuronal networks that can serve as models for the study of synapse formation and plasticity as well as for the development of neuroprosthetic devices.

  11. Superconducting Optoelectronic Circuits for Neuromorphic Computing

    NASA Astrophysics Data System (ADS)

    Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo

    2017-03-01

    Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm^2 at a 20 MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10^11) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.

  12. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique for using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a toolkit for integrated use of the two techniques and is also extensible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  13. Homeostatic structural plasticity can account for topology changes following deafferentation and focal stroke.

    PubMed

    Butz, Markus; Steenbuck, Ines D; van Ooyen, Arjen

    2014-01-01

    After brain lesions caused by tumors or stroke, or after lasting loss of input (deafferentation), inter- and intra-regional brain networks respond with complex changes in topology. Not only areas directly affected by the lesion but also regions remote from the lesion may alter their connectivity-a phenomenon known as diaschisis. Changes in network topology after brain lesions can lead to cognitive decline and increasing functional disability. However, the principles governing changes in network topology are poorly understood. Here, we investigated whether homeostatic structural plasticity can account for changes in network topology after deafferentation and brain lesions. Homeostatic structural plasticity postulates that neurons aim to maintain a desired level of electrical activity by deleting synapses when neuronal activity is too high and by providing new synaptic contacts when activity is too low. Using our Model of Structural Plasticity, we explored how local changes in connectivity induced by a focal loss of input affected global network topology. In accordance with experimental and clinical data, we found that after partial deafferentation, the network as a whole became more random, although it maintained its small-world topology, while deafferentated neurons increased their betweenness centrality as they rewired and returned to the homeostatic range of activity. Furthermore, deafferentated neurons increased their global but decreased their local efficiency and got longer tailed degree distributions, indicating the emergence of hub neurons. Together, our results suggest that homeostatic structural plasticity may be an important driving force for lesion-induced network reorganization and that the increase in betweenness centrality of deafferentated areas may hold as a biomarker for brain repair.
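
    The homeostatic rule described above, adding synapses when activity falls below a set point and deleting them when it rises above, can be sketched as a simple rewiring step; the update below is a schematic illustration with an assumed activity proxy, not the cited Model of Structural Plasticity.

      import numpy as np

      def homeostatic_rewiring(adj, activity, target, rng, n_changes=1):
          """One schematic rewiring step on a binary connectivity matrix.

          Neurons whose activity exceeds the target lose incoming synapses;
          neurons below the target receive new incoming synapses from random partners.
          """
          n = len(activity)
          for i in range(n):
              if activity[i] > target:
                  donors = np.flatnonzero(adj[:, i])               # existing inputs
                  if donors.size:
                      adj[rng.choice(donors, size=min(n_changes, donors.size),
                                     replace=False), i] = 0        # delete synapses
              elif activity[i] < target:
                  candidates = np.flatnonzero(adj[:, i] == 0)
                  candidates = candidates[candidates != i]          # no self-loops
                  if candidates.size:
                      adj[rng.choice(candidates, size=min(n_changes, candidates.size),
                                     replace=False), i] = 1        # create synapses
          return adj

      # Toy usage: 50 neurons, random initial wiring, activity proxy = weighted input.
      rng = np.random.default_rng(3)
      adj = (rng.random((50, 50)) < 0.1).astype(int)
      np.fill_diagonal(adj, 0)
      drive = rng.random(50)
      for _ in range(200):
          activity = adj.T @ drive / 50          # crude activity proxy
          adj = homeostatic_rewiring(adj, activity, target=0.05, rng=rng)
      print(adj.sum(), "synapses after homeostatic rewiring")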

  14. Spiking and bursting patterns of fractional-order Izhikevich model

    NASA Astrophysics Data System (ADS)

    Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha

    2018-03-01

    Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that might be produced by using neuronal models and varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activities without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed mode oscillations, regular bursting and chattering, by adjusting only the fractional order. Both the active and silent phase of the burst increase when the fractional-order model further deviates from the classical model. For smaller fractional order, the model produces memory dependent spiking activity after the pulse signal turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activities of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
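
    One common way to reproduce the memory trace described here is a Grünwald-Letnikov discretization of the fractional derivative applied to the Izhikevich equations; the explicit scheme below is a simplified sketch with illustrative parameters and reset handling, not necessarily the exact scheme of the paper.

      import numpy as np

      def gl_weights(alpha, n):
          """Grünwald-Letnikov binomial weights w_j = (-1)^j * C(alpha, j)."""
          w = np.empty(n + 1)
          w[0] = 1.0
          for j in range(1, n + 1):
              w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
          return w

      def fractional_izhikevich(alpha=0.8, T=500.0, h=0.1, I=10.0,
                                a=0.02, b=0.2, c=-65.0, d=8.0):
          """Explicit GL scheme for D^alpha v = 0.04 v^2 + 5 v + 140 - u + I,
          D^alpha u = a (b v - u), with the usual Izhikevich reset at v >= 30."""
          n = int(T / h)
          w = gl_weights(alpha, n)
          v = np.empty(n + 1); u = np.empty(n + 1)
          v[0], u[0] = -65.0, b * -65.0
          spikes = []
          for k in range(1, n + 1):
              memory_v = np.dot(w[1:k + 1], v[k - 1::-1])   # sum_j w_j v_{k-j}
              memory_u = np.dot(w[1:k + 1], u[k - 1::-1])
              fv = 0.04 * v[k - 1] ** 2 + 5 * v[k - 1] + 140 - u[k - 1] + I
              fu = a * (b * v[k - 1] - u[k - 1])
              v[k] = h ** alpha * fv - memory_v
              u[k] = h ** alpha * fu - memory_u
              if v[k] >= 30.0:                              # spike and reset
                  spikes.append(k * h)
                  v[k] = c
                  u[k] += d
          return v, u, spikes

      v, u, spikes = fractional_izhikevich()
      print(len(spikes), "spikes; first few times (ms):", spikes[:5])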

  15. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    NASA Astrophysics Data System (ADS)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, are hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now enabled the possibility of analyzing a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of a fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse was performing a memory task to investigate how those individual neurons related to behavior. In addition, we tested our optical platform in investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in their neural networks. A small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activities. Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to Parkinson's disease. We found that inhibition of the motor cortex does not alter the exaggerated beta oscillations in the striatum that are associated with parkinsonianism. Together, these results demonstrate the potential of developing integrated optogenetic systems to advance our understanding of the principles underlying neural network computation, which would have broad applications from advancing artificial intelligence to disease diagnosis and treatment.

  16. Spatiotemporal alterations of cortical network activity by selective loss of NOS-expressing interneurons.

    PubMed

    Shlosberg, Dan; Buskila, Yossi; Abu-Ghanem, Yasmin; Amitai, Yael

    2012-01-01

    Deciphering the role of GABAergic neurons in large neuronal networks such as the neocortex forms a particularly complex task as they comprise a highly diverse population. The neuronal isoform of the enzyme nitric oxide synthase (nNOS) is expressed in the neocortex by specific subsets of GABAergic neurons. These neurons can be identified in live brain slices by the nitric oxide (NO) fluorescent indicator diaminofluorescein-2 diacetate (DAF-2DA). However, this indicator was found to be highly toxic to the stained neurons. We used this feature to induce acute phototoxic damage to NO-producing neurons in cortical slices, and measured subsequent alterations in parameters of cellular and network activity. Neocortical slices were briefly incubated in DAF-2DA and then illuminated through the 4× objective. Histochemistry for NADPH-diaphorase (NADPH-d), a marker for nNOS activity, revealed elimination of staining in the illuminated areas following treatment. Whole cell recordings from several neuronal types before, during, and after illumination confirmed the selective damage to non-fast-spiking (FS) interneurons. Treated slices displayed mild disinhibition. The reversal potential of compound synaptic events on pyramidal neurons became more positive, and their decay time constant was elongated, substantiating the removal of an inhibitory conductance. The horizontal decay of local field potentials (LFPs) was significantly reduced at distances of 300-400 μm from the stimulation, but not when inhibition was non-selectively weakened with the GABA(A) blocker picrotoxin. Finally, whereas the depression of LFPs along short trains of 40 Hz stimuli was linearly reduced with distance or initial amplitude in control slices, this ordered relationship was disrupted in DAF-treated slices. These results reveal that NO-producing interneurons in the neocortex convey lateral inhibition to neighboring columns, and shape the spatiotemporal dynamics of the network's activity.

  17. Metastability and Inter-Band Frequency Modulation in Networks of Oscillating Spiking Neuron Populations

    PubMed Central

    Bhowmik, David; Shanahan, Murray

    2013-01-01

    Groups of neurons firing synchronously are hypothesized to underlie many cognitive functions such as attention, associative learning, memory, and sensory selection. Recent theories suggest that transient periods of synchronization and desynchronization provide a mechanism for dynamically integrating and forming coalitions of functionally related neural areas, and that at these times conditions are optimal for information transfer. Oscillating neural populations display a great amount of spectral complexity, with several rhythms temporally coexisting in different structures and interacting with each other. This paper explores inter-band frequency modulation between neural oscillators using models of quadratic integrate-and-fire neurons and Hodgkin-Huxley neurons. We vary the structural connectivity in a network of neural oscillators, assess the spectral complexity, and correlate the inter-band frequency modulation. We contrast this correlation against measures of metastable coalition entropy and synchrony. Our results show that oscillations in different neural populations modulate each other so as to change frequency, and that the interaction of these fluctuating frequencies in the network as a whole is able to drive different neural populations towards episodes of synchrony. Further to this, we locate an area in the connectivity space in which the system directs itself in this way so as to explore a large repertoire of synchronous coalitions. We suggest that such dynamics facilitate versatile exploration, integration, and communication between functionally related neural areas, and thereby supports sophisticated cognitive processing in the brain. PMID:23614040

  18. Neural networks with excitatory and inhibitory components: Direct and inverse problems by a mean-field approach

    NASA Astrophysics Data System (ADS)

    di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro

    2016-01-01

    We study the dynamics of networks with inhibitory and excitatory leak-integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data.

  19. Connectionist Learning Procedures.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the…

  20. The interplay between neurons and glia in synapse development and plasticity.

    PubMed

    Stogsdill, Jeff A; Eroglu, Cagla

    2017-02-01

    In the brain, the formation of complex neuronal networks amenable to experience-dependent remodeling is complicated by the diversity of neurons and synapse types. The establishment of a functional brain depends not only on neurons, but also on non-neuronal glial cells. Glia are in continuous bi-directional communication with neurons to direct the formation and refinement of synaptic connectivity. This article reviews important findings that uncovered cellular and molecular aspects of the neuron-glia cross-talk that govern the formation and remodeling of synapses and circuits. In vivo evidence demonstrating the critical interplay between neurons and glia will be the major focus. Additional attention will be given to how aberrant communication between neurons and glia may contribute to neural pathologies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. [Measurement and performance analysis of functional neural network].

    PubMed

    Li, Shan; Liu, Xinyu; Chen, Yan; Wan, Hong

    2018-04-01

    Network measurement is an important problem when complex network theory is used to resolve the information-processing mechanisms of neuronal populations. For the quantitative measurement of functional neural networks, we analyzed the relation between the network topology and four measures: the clustering coefficient, the global efficiency, the characteristic path length and the transitivity. We then established a spike-based functional neural network, and simulation results showed that the measured network could represent the original connections among neurons. On this basis, we studied how the functional neural network of the nidopallium caudolaterale (NCL) encodes the pigeon's motion behaviors. We found that the NCL functional network effectively encoded the motion behaviors of the pigeon, with significant differences in all four measures among left-turning, forward and right-turning behaviors. Overall, the proposed method for establishing spike-based functional neural networks is feasible, and it provides an effective tool for parsing the brain's information-processing mechanisms.
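
    The four measures named in this record are standard graph-theoretic quantities. A minimal sketch of how they can be computed for an undirected functional network follows; it uses networkx on a synthetic small-world graph as a stand-in, not on spike-derived data.

```python
import networkx as nx

# synthetic stand-in for a functional network inferred from spike trains
G = nx.connected_watts_strogatz_graph(n=30, k=4, p=0.1, seed=1)

measures = {
    "clustering coefficient": nx.average_clustering(G),
    "global efficiency": nx.global_efficiency(G),
    "characteristic path length": nx.average_shortest_path_length(G),
    "transitivity": nx.transitivity(G),
}
for name, value in measures.items():
    print(f"{name}: {value:.3f}")
```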

  2. Very long transients, irregular firing, and chaotic dynamics in networks of randomly connected inhibitory integrate-and-fire neurons.

    PubMed

    Zillmer, Rüdiger; Brunel, Nicolas; Hansel, David

    2009-03-01

    We present results of an extensive numerical study of the dynamics of networks of integrate-and-fire neurons connected randomly through inhibitory interactions. We first consider delayed interactions with infinitely fast rise and decay. Depending on the parameters, the network displays transients which are short or exponentially long in the network size. At the end of these transients, the dynamics settle on a periodic attractor. If the number of connections per neuron is large (approximately 1000), this attractor is a cluster state with a short period. In contrast, if the number of connections per neuron is small (approximately 100), the attractor has complex dynamics and a very long period. During the long transients the neurons fire in a highly irregular manner. They can be viewed as quasistationary states in which, depending on the coupling strength, the pattern of activity is asynchronous or displays population oscillations. In the first case, the average firing rates and the variability of the single-neuron activity are well described by a mean-field theory valid in the thermodynamic limit. Bifurcations of the long transient dynamics from asynchronous to synchronous activity are also well predicted by this theory. The transient dynamics display features reminiscent of stable chaos. In particular, despite being linearly stable, the trajectories of the transient dynamics are destabilized by finite perturbations as small as O(1/N). We further show that stable chaos is also observed for postsynaptic currents with finite decay time. However, we report that in this type of network, chaotic dynamics characterized by positive Lyapunov exponents can also be observed. We show in fact that chaos occurs when the decay time of the synaptic currents is long compared to the synaptic delay, provided that the network is sufficiently large.

  3. Reinforcement Learning of Two-Joint Virtual Arm Reaching in a Computer Model of Sensorimotor Cortex

    PubMed Central

    Neymotin, Samuel A.; Chadderdon, George L.; Kerr, Cliff C.; Francis, Joseph T.; Lytton, William W.

    2014-01-01

    Neocortical mechanisms of learning sensorimotor control involve a complex series of interactions at multiple levels, from synaptic mechanisms to cellular dynamics to network connectomics. We developed a model of sensory and motor neocortex consisting of 704 spiking model neurons. Sensory and motor populations included excitatory cells and two types of interneurons. Neurons were interconnected with AMPA/NMDA and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a two-joint virtual arm to reach to a fixed target. For each of 125 trained networks, we used 200 training sessions, each involving 15 s reaches to the target from 16 starting positions. Learning altered network dynamics, with enhancements to neuronal synchrony and behaviorally relevant information flow between neurons. After learning, networks demonstrated retention of behaviorally relevant memories by using proprioceptive information to perform reach-to-target from multiple starting positions. Networks dynamically controlled which joint rotations to use to reach a target, depending on current arm position. Learning-dependent network reorganization was evident in both sensory and motor populations: learned synaptic weights showed target-specific patterning optimized for particular reach movements. Our model embodies an integrative hypothesis of sensorimotor cortical learning that could be used to interpret future electrophysiological data recorded in vivo from sensorimotor learning experiments. We used our model to make the following predictions: learning enhances synchrony in neuronal populations and behaviorally relevant information flow across neuronal populations; enhanced sensory processing aids task-relevant motor performance; and the relative ease of a particular movement in vivo depends on the amount of sensory information required to complete the movement. PMID:24047323

  4. Fluorescent tagging of rhythmically active respiratory neurons within the pre-Bötzinger complex of rat medullary slice preparations.

    PubMed

    Pagliardini, Silvia; Adachi, Tadafumi; Ren, Jun; Funk, Gregory D; Greer, John J

    2005-03-09

    Elucidation of the neuronal mechanisms underlying respiratory rhythmogenesis is a major focal point in respiratory physiology. An area of the ventrolateral medulla, the pre-Bötzinger complex (preBotC), is a critical site. Attention is now focused on understanding the cellular and network properties within the preBotC that underlie this critical function. The inability to clearly identify key "rhythm-generating" neurons within the heterogeneous population of preBotC neurons has been a significant limitation. Here we report an advancement allowing precise targeting of neurons expressing neurokinin-1 receptors (NK1Rs), which are hypothesized to be essential for respiratory rhythmogenesis. The internalization of tetramethylrhodamine conjugated substance P in rhythmically active medullary slice preparations provided clear visualization of NK1R-expressing neurons for subsequent whole-cell patch-clamp recordings. Among labeled neurons, 82% were inspiratory modulated, and 25% had pacemaker properties. We propose that this approach can be used to greatly expedite progress toward understanding the neuronal processes underlying the control of breathing.

  5. Bio-inspired spiking neural network for nonlinear systems control.

    PubMed

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNNs) are the third generation of artificial neural networks. SNNs are the closest approximation to biological neural networks. SNNs make use of temporal spike trains to command inputs and outputs, allowing a faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or is difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. SNNs' inherently binary and temporal way of encoding information facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of non-linear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNNs exhibits superior performance compared to other approaches based on Neural Networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Realistic modeling of neurons and networks: towards brain simulation.

    PubMed

    D'Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    2013-01-01

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field.

  7. Realistic modeling of neurons and networks: towards brain simulation

    PubMed Central

    D’Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    Summary Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field. PMID:24139652

  8. Auditory and audio-vocal responses of single neurons in the monkey ventral premotor cortex.

    PubMed

    Hage, Steffen R

    2018-03-20

    Monkey vocalization is a complex behavioral pattern, which is flexibly used in audio-vocal communication. A recently proposed dual neural network model suggests that cognitive control might be involved in this behavior, originating from a frontal cortical network in the prefrontal cortex and mediated via projections from the rostral portion of the ventral premotor cortex (PMvr) and motor cortex to the primary vocal motor network in the brainstem. For the rapid adjustment of vocal output to external acoustic events, strong interconnections between vocal motor and auditory sites are needed, which are present at cortical and subcortical levels. However, the role of the PMvr in audio-vocal integration processes remains unclear. In the present study, single neurons in the PMvr were recorded in rhesus monkeys (Macaca mulatta) while volitionally producing vocalizations in a visual detection task or passively listening to monkey vocalizations. Ten percent of randomly selected neurons in the PMvr modulated their discharge rate in response to acoustic stimulation with species-specific calls. More than four-fifths of these auditory neurons showed an additional modulation of their discharge rates before and/or during the monkeys' motor production of the vocalization. Based on these audio-vocal interactions, the PMvr might be well positioned to mediate higher order auditory processing with cognitive control of the vocal motor output to the primary vocal motor network. Such audio-vocal integration processes in the premotor cortex might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. A Possible Role for End-Stopped V1 Neurons in the Perception of Motion: A Computational Model

    PubMed Central

    Zarei Eskikand, Parvin; Kameneva, Tatiana; Ibbotson, Michael R.; Burkitt, Anthony N.; Grayden, David B.

    2016-01-01

    We present a model of the early stages of processing in the visual cortex, in particular V1 and MT, to investigate the potential role of end-stopped V1 neurons in solving the aperture problem. A hierarchical network is used in which the incoming motion signals provided by complex V1 neurons and end-stopped V1 neurons proceed to MT neurons at the next stage. MT neurons are categorized into two types based on their function: integration and segmentation. The role of integration neurons is to propagate unambiguous motion signals arriving from those V1 neurons that emphasize object terminators (e.g. corners). Segmentation neurons detect the discontinuities in the input stimulus to control the activity of integration neurons. Although the activity of the complex V1 neurons at the terminators of the object accurately represents the direction of the motion, their level of activity is less than the activity of the neurons along the edges. Therefore, a model incorporating end-stopped neurons is essential to suppress ambiguous motion signals along the edges of the stimulus. It is shown that the unambiguous motion signals at terminators propagate over the rest of the object to achieve an accurate representation of motion. PMID:27741307

  10. Reverse engineering a mouse embryonic stem cell-specific transcriptional network reveals a new modulator of neuronal differentiation.

    PubMed

    De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego

    2013-01-01

    Gene expression profiles can be used to infer previously unknown transcriptional regulatory interaction among thousands of genes, via systems biology 'reverse engineering' approaches. We 'reverse engineered' an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression ('hubs'). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central 'hub' of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neurons marker Vglut2 and a strong down-regulation of the GABAergic neurons marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13, involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells possibly by epigenetic-mediated transcriptional regulation.

  11. Serotonergic Modulation Differentially Targets Distinct Network Elements within the Antennal Lobe of Drosophila melanogaster

    PubMed Central

    Sizemore, Tyler R.; Dacks, Andrew M.

    2016-01-01

    Neuromodulation confers flexibility to anatomically-restricted neural networks so that animals are able to properly respond to complex internal and external demands. However, determining the mechanisms underlying neuromodulation is challenging without knowledge of the functional class and spatial organization of neurons that express individual neuromodulatory receptors. Here, we describe the number and functional identities of neurons in the antennal lobe of Drosophila melanogaster that express each of the receptors for one such neuromodulator, serotonin (5-HT). Although 5-HT enhances odor-evoked responses of antennal lobe projection neurons (PNs) and local interneurons (LNs), the receptor basis for this enhancement is unknown. We used endogenous reporters of transcription and translation for each of the five 5-HT receptors (5-HTRs) to identify neurons, based on cell class and transmitter content, that express each receptor. We find that specific receptor types are expressed by distinct combinations of functional neuronal classes. For instance, the excitatory PNs express the excitatory 5-HTRs, while distinct classes of LNs each express different 5-HTRs. This study therefore provides a detailed atlas of 5-HT receptor expression within a well-characterized neural network, and enables future dissection of the role of serotonergic modulation of olfactory processing. PMID:27845422

  12. SLP-2 interacts with Parkin in mitochondria and prevents mitochondrial dysfunction in Parkin-deficient human iPSC-derived neurons and Drosophila.

    PubMed

    Zanon, Alessandra; Kalvakuri, Sreehari; Rakovic, Aleksandar; Foco, Luisa; Guida, Marianna; Schwienbacher, Christine; Serafin, Alice; Rudolph, Franziska; Trilck, Michaela; Grünewald, Anne; Stanslowsky, Nancy; Wegner, Florian; Giorgio, Valentina; Lavdas, Alexandros A; Bodmer, Rolf; Pramstaller, Peter P; Klein, Christine; Hicks, Andrew A; Pichler, Irene; Seibler, Philip

    2017-07-01

    Mutations in the Parkin gene (PARK2) have been linked to a recessive form of Parkinson's disease (PD) characterized by the loss of dopaminergic neurons in the substantia nigra. Deficiencies of mitochondrial respiratory chain complex I activity have been observed in the substantia nigra of PD patients, and loss of Parkin results in the reduction of complex I activity shown in various cell and animal models. Using co-immunoprecipitation and proximity ligation assays on endogenous proteins, we demonstrate that Parkin interacts with mitochondrial Stomatin-like protein 2 (SLP-2), which also binds the mitochondrial lipid cardiolipin and functions in the assembly of respiratory chain proteins. SH-SY5Y cells with a stable knockdown of Parkin or SLP-2, as well as induced pluripotent stem cell-derived neurons from Parkin mutation carriers, showed decreased complex I activity and altered mitochondrial network morphology. Importantly, induced expression of SLP-2 corrected for these mitochondrial alterations caused by reduced Parkin function in these cells. In-vivo Drosophila studies showed a genetic interaction of Parkin and SLP-2, and further, tissue-specific or global overexpression of SLP-2 transgenes rescued parkin mutant phenotypes, in particular loss of dopaminergic neurons, mitochondrial network structure, reduced ATP production, and flight and motor dysfunction. The physical and genetic interaction between Parkin and SLP-2 and the compensatory potential of SLP-2 suggest a functional epistatic relationship to Parkin and a protective role of SLP-2 in neurons. This finding places further emphasis on the significance of Parkin for the maintenance of mitochondrial function in neurons and provides a novel target for therapeutic strategies. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. The role of the parafascicular complex (CM-Pf) of the human thalamus in the neuronal mechanisms of selective attention.

    PubMed

    Raeva, S N

    2006-03-01

    The reactions of 93 neurons in the parafascicular complex (CM-Pf) of the human thalamus were studied by microelectrode recording during stereotaxic neurosurgical operations in patients with spastic torticollis. High reactivity was demonstrated for two previously classified types of neurons with identical irregular (type A) and bursting Ca2+-dependent (type B) activities in response to presentation of relevant verbal stimuli evoking selective attention in humans. Concordant changes in the network activity of A and B neurons were observed in the form of linked activation-inhibition response patterns; at the moment an imperative morpheme of the command stimulus was presented, rapidly developing intercellular interactions appeared, consisting of local synchronization together with rhythmic oscillatory (3-4 Hz) activity. Data are presented on the existence of a direct connection between these neuronal rearrangements and activation of selective attention, providing evidence for the involvement of the thalamic parafascicular complex (CM-Pf) in the mechanisms of selective attention and processing of relevant verbal information during the preparative period of voluntary actions.

  14. The interdependence of excitation and inhibition for the control of dynamic breathing rhythms.

    PubMed

    Baertsch, Nathan Andrew; Baertsch, Hans Christopher; Ramirez, Jan Marino

    2018-02-26

    The preBötzinger Complex (preBötC), a medullary network critical for breathing, relies on excitatory interneurons to generate the inspiratory rhythm. Yet, half of preBötC neurons are inhibitory, and the role of inhibition in rhythmogenesis remains controversial. Using optogenetics and electrophysiology in vitro and in vivo, we demonstrate that the intrinsic excitability of excitatory neurons is reduced following large depolarizing inspiratory bursts. This refractory period limits the preBötC to very slow breathing frequencies. Inhibition integrated within the network is required to prevent overexcitation of preBötC neurons, thereby regulating the refractory period and allowing rapid breathing. In vivo, sensory feedback inhibition also regulates the refractory period, and in slowly breathing mice with sensory feedback removed, activity of inhibitory, but not excitatory, neurons restores breathing to physiological frequencies. We conclude that excitation and inhibition are interdependent for the breathing rhythm, because inhibition permits physiological preBötC bursting by controlling refractory properties of excitatory neurons.

  15. Defects formation and wave emitting from defects in excitable media

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Xu, Ying; Tang, Jun; Wang, Chunni

    2016-05-01

    Abnormal electrical activity in the nervous system can be associated with neuronal diseases; indeed, under appropriate conditions external forcing can cause breakdown or even collapse of the nervous system. Excitable media can often be described by neuronal networks with different topologies. Due to self-organization, and to regulation from the central nervous system, the collective behavior of neurons can show complex spatiotemporal dynamics and spatial distributions of electrical activity. Defects in the nervous system can emit continuous waves or pulses, generating pacemaker-like sources that perturb normal signal propagation. How do these defects develop? In this paper, a network of neurons is designed on a two-dimensional square array with nearest-neighbor connections, and the formation mechanism of defects is investigated by detecting the wave propagation induced by external forcing. It is found that defects can be induced by periodic external forcing applied at the boundary, and that the waves emitted from the defects can then keep balance with the waves excited by the external forcing.

  16. Network dynamics of 3D engineered neuronal cultures: a new experimental model for in-vitro electrophysiology.

    PubMed

    Frega, Monica; Tedesco, Mariateresa; Massobrio, Paolo; Pesce, Mattia; Martinoia, Sergio

    2014-06-30

    Despite the extensive use of in-vitro models for neuroscientific investigations and notwithstanding the growing field of network electrophysiology, all studies on cultured cells devoted to elucidating neurophysiological mechanisms and computational properties are based on 2D neuronal networks. These networks are usually grown onto specific rigid substrates (also with embedded electrodes) and lack most of the constituents of the in-vivo-like environment: cell morphology, cell-to-cell interaction and neuritic outgrowth in all directions. Cells in a brain region develop in a 3D space and interact with a complex multi-cellular environment and extracellular matrix. Under this perspective, 3D networks coupled to micro-transducer arrays represent a new and powerful in-vitro model capable of better emulating in-vivo physiology. In this work, we present a new experimental paradigm constituted by 3D hippocampal networks coupled to Micro-Electrode-Arrays (MEAs) and we show how the features of the recorded network dynamics differ from the corresponding 2D network model. Further development of the proposed 3D in-vitro model by adding embedded functionalized scaffolds might open new prospects for manipulating, stimulating and recording the neuronal activity to elucidate neurophysiological mechanisms and to design bio-hybrid microsystems.

  17. Network dynamics of 3D engineered neuronal cultures: a new experimental model for in-vitro electrophysiology

    PubMed Central

    Frega, Monica; Tedesco, Mariateresa; Massobrio, Paolo; Pesce, Mattia; Martinoia, Sergio

    2014-01-01

    Despite the extensive use of in-vitro models for neuroscientific investigations and notwithstanding the growing field of network electrophysiology, all studies on cultured cells devoted to elucidating neurophysiological mechanisms and computational properties are based on 2D neuronal networks. These networks are usually grown onto specific rigid substrates (also with embedded electrodes) and lack most of the constituents of the in-vivo-like environment: cell morphology, cell-to-cell interaction and neuritic outgrowth in all directions. Cells in a brain region develop in a 3D space and interact with a complex multi-cellular environment and extracellular matrix. Under this perspective, 3D networks coupled to micro-transducer arrays represent a new and powerful in-vitro model capable of better emulating in-vivo physiology. In this work, we present a new experimental paradigm constituted by 3D hippocampal networks coupled to Micro-Electrode-Arrays (MEAs) and we show how the features of the recorded network dynamics differ from the corresponding 2D network model. Further development of the proposed 3D in-vitro model by adding embedded functionalized scaffolds might open new prospects for manipulating, stimulating and recording the neuronal activity to elucidate neurophysiological mechanisms and to design bio-hybrid microsystems. PMID:24976386

  18. Electrophysiological models of neural processing.

    PubMed

    Nelson, Mark E

    2011-01-01

    The brain is an amazing information processing system that allows organisms to adaptively monitor and control complex dynamic interactions with their environment across multiple spatial and temporal scales. Mathematical modeling and computer simulation techniques have become essential tools in understanding diverse aspects of neural processing ranging from sub-millisecond temporal coding in the sound localization circuitry of barn owls to long-term memory storage and retrieval in humans that can span decades. The processing capabilities of individual neurons lie at the core of these models, with the emphasis shifting upward and downward across different levels of biological organization depending on the nature of the questions being addressed. This review provides an introduction to the techniques for constructing biophysically based models of individual neurons and local networks. Topics include Hodgkin-Huxley-type models of macroscopic membrane currents, Markov models of individual ion-channel currents, compartmental models of neuronal morphology, and network models involving synaptic interactions among multiple neurons.
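
    As an illustration of the first technique listed (Hodgkin-Huxley-type models of macroscopic membrane currents), the sketch below integrates the classic squid-axon model with a forward-Euler scheme. It uses the standard textbook parameterization (resting potential near -65 mV) and is not drawn from the review itself.

```python
import numpy as np

# classic Hodgkin-Huxley parameters (squid giant axon, per unit membrane area)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387            # mV

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the membrane and gating equations."""
    V, n, m, h = -65.0, 0.317, 0.053, 0.596      # approximate resting values
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        trace.append(V)
    return np.array(trace)

V = simulate()
spikes = int((np.diff((V > 0.0).astype(int)) == 1).sum())  # upward 0 mV crossings
print(f"spikes in 50 ms at 10 uA/cm^2: {spikes}")
```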

  19. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780
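
    The normalization term in Bayes' equation that the GPe-STN network is proposed to provide is simply the denominator of the posterior over the available actions. A small numerical illustration of that bookkeeping, with made-up likelihoods and priors (nothing here is taken from the model itself), is shown below.

```python
import numpy as np

# hypothetical evidence (likelihoods) and priors for three candidate actions
likelihood = np.array([0.7, 0.2, 0.1])   # P(sensory input | action i appropriate)
prior = np.array([0.3, 0.4, 0.3])        # P(action i appropriate)

normalization = np.sum(likelihood * prior)       # the term the GPe-STN loop is
posterior = likelihood * prior / normalization   # hypothesized to compute

print("normalization term:", normalization)
print("posterior over actions:", posterior)      # sums to 1
```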

  20. Goal-seeking neural net for recall and recognition

    NASA Astrophysics Data System (ADS)

    Omidvar, Omid M.

    1990-07-01

    Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. The synaptic reinforcements create a proper condition for adaptation, which results in memorization, formation of perception, and higher order information processing activities. In this research a model of a goal-seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses recall is defined as retrieval of stored information where little or no matching is involved. On the other hand, recognition is recall with matching; therefore it involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement in which all the signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal-seeking nature of the neurons as network components. In the proposed model all the synaptic strengths are reinforced in parallel while the reinforcement among the layers is done in a distributed fashion and pipeline mode from the last layer inward. A model of a complex neuron with a varying threshold is developed to account for the inhibitory and excitatory behavior of real neurons. A goal-seeking model of a neural network is presented. This network is utilized to perform recall and recognition tasks. The performance of the model with regard to the assigned tasks is presented.

  1. Accelerated intoxication of GABAergic synapses by botulinum neurotoxin A disinhibits stem cell-derived neuron networks prior to network silencing

    PubMed Central

    Beske, Phillip H.; Scheeler, Stephen M.; Adler, Michael; McNutt, Patrick M.

    2015-01-01

    Botulinum neurotoxins (BoNTs) are extremely potent toxins that specifically cleave SNARE proteins in peripheral synapses, preventing neurotransmitter release. Neuronal responses to BoNT intoxication are traditionally studied by quantifying SNARE protein cleavage in vitro or monitoring physiological paralysis in vivo. Consequently, the dynamic effects of intoxication on synaptic behaviors are not well-understood. We have reported that mouse embryonic stem cell-derived neurons (ESNs) are highly sensitive to BoNT based on molecular readouts of intoxication. Here we study the time-dependent changes in synapse- and network-level behaviors following addition of BoNT/A to spontaneously active networks of glutamatergic and GABAergic ESNs. Whole-cell patch-clamp recordings indicated that BoNT/A rapidly blocked synaptic neurotransmission, confirming that ESNs replicate the functional pathophysiology responsible for clinical botulism. Quantitation of spontaneous neurotransmission in pharmacologically isolated synapses revealed accelerated silencing of GABAergic synapses compared to glutamatergic synapses, which was consistent with the selective accumulation of cleaved SNAP-25 at GAD1+ pre-synaptic terminals at early timepoints. Different latencies of intoxication resulted in complex network responses to BoNT/A addition, involving rapid disinhibition of stochastic firing followed by network silencing. Synaptic activity was found to be highly sensitive to SNAP-25 cleavage, reflecting the functional consequences of the localized cleavage of the small subpopulation of SNAP-25 that is engaged in neurotransmitter release in the nerve terminal. Collectively, these findings illustrate that use of synaptic function assays in networked neuron cultures offers a novel and highly sensitive approach for mechanistic studies of toxin:neuron interactions and synaptic responses to BoNT. PMID:25954159

  2. Neuromorphic device architectures with global connectivity through electrolyte gating

    NASA Astrophysics Data System (ADS)

    Gkoupidenis, Paschalis; Koutsouras, Dimitrios A.; Malliaras, George G.

    2017-05-01

    Information processing in the brain takes place in a network of neurons that are connected with each other by an immense number of synapses. At the same time, neurons are immersed in a common electrochemical environment, and global parameters such as concentrations of various hormones regulate the overall network function. This computational paradigm of global regulation, also known as homeoplasticity, has important implications in the overall behaviour of large neural ensembles and is barely addressed in neuromorphic device architectures. Here, we demonstrate the global control of an array of organic devices based on poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) that are immersed in an electrolyte, a behaviour that resembles homeoplasticity phenomena of the neural environment. We use this effect to produce behaviour that is reminiscent of the coupling between local activity and global oscillations in the biological neural networks. We further show that the electrolyte establishes complex connections between individual devices, and leverage these connections to implement coincidence detection. These results demonstrate that electrolyte gating offers significant advantages for the realization of networks of neuromorphic devices of higher complexity and with minimal hardwired connectivity.

  3. A meta-cognitive learning algorithm for a Fully Complex-valued Relaxation Network.

    PubMed

    Savitha, R; Suresh, S; Sundararajan, N

    2012-08-01

    This paper presents a meta-cognitive learning algorithm for a single hidden layer complex-valued neural network called the "Meta-cognitive Fully Complex-valued Relaxation Network (McFCRN)". McFCRN has two components: a cognitive component and a meta-cognitive component. A Fully Complex-valued Relaxation Network (FCRN) with a fully complex-valued Gaussian-like activation function (sech) in the hidden layer and an exponential activation function in the output layer forms the cognitive component. The meta-cognitive component contains a self-regulatory learning mechanism which controls the learning ability of FCRN by deciding what-to-learn, when-to-learn and how-to-learn from a sequence of training data. The input parameters of the cognitive component are chosen randomly and the output parameters are estimated by minimizing a logarithmic error function. The problem of explicit minimization of magnitude and phase errors in the logarithmic error function is converted to a system of linear equations, and the output parameters of FCRN are computed analytically. McFCRN starts with zero hidden neurons and builds up the number of neurons required to approximate the target function. The meta-cognitive component selects the best learning strategy for FCRN to acquire the knowledge from training data and also adapts the learning strategies to implement the best human learning components. Performance studies on function approximation and real-valued classification problems show that the proposed McFCRN performs better than the existing results reported in the literature. Copyright © 2012 Elsevier Ltd. All rights reserved.
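
    One idea from this record that carries over directly to a toy setting is computing the output parameters of a fixed, fully complex-valued hidden layer analytically as the solution of a linear system. The sketch below is only that: a generic random-feature regressor with a complex sech activation, solved by ordinary least squares. It is not the McFCRN algorithm (no meta-cognitive sample selection, no logarithmic magnitude-and-phase error), and the layer sizes, weight scaling and toy target function are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy complex-valued function approximation task
x = rng.uniform(-1, 1, (200, 2)) + 1j * rng.uniform(-1, 1, (200, 2))
y = (x[:, 0] * np.conj(x[:, 1]))[:, None]        # target: a complex function of x

# random complex input weights; fully complex sech hidden activation
n_hidden = 40
W = 0.5 * (rng.normal(size=(2, n_hidden)) + 1j * rng.normal(size=(2, n_hidden)))
b = 0.5 * (rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden))
H = 1.0 / np.cosh(x @ W + b)                     # sech(z) = 1 / cosh(z)

# output weights obtained analytically as a linear least-squares solution
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
print("mean squared training error:", np.mean(np.abs(H @ beta - y) ** 2))
```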

  4. Causal Inference and Explaining Away in a Spiking Network

    PubMed Central

    Moreno-Bote, Rubén; Drugowitsch, Jan

    2015-01-01

    While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification. PMID:26621426
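
    The family of problems described here, quadratic objectives with non-negativity constraints, can be illustrated in conventional (non-spiking) form as non-negative least squares, where one cause "explains away" a correlated alternative. The sketch below uses SciPy and arbitrary numbers; it is not the spiking implementation of the paper.

```python
import numpy as np
from scipy.optimize import nnls

# columns = candidate "causes" (e.g. odor sources), rows = observed features
A = np.array([[1.0, 0.9, 0.0],
              [1.0, 0.8, 0.1],
              [0.0, 0.1, 1.0]])
observation = A @ np.array([1.0, 0.0, 0.5])   # generated by causes 1 and 3 only

# minimize ||A x - observation||^2 subject to x >= 0
x, residual = nnls(A, observation)
print("inferred non-negative causes:", np.round(x, 3))
# cause 2, although correlated with cause 1, is explained away (weight ~ 0)
```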

  5. Causal Inference and Explaining Away in a Spiking Network.

    PubMed

    Moreno-Bote, Rubén; Drugowitsch, Jan

    2015-12-01

    While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification.

  6. Towards the understanding of network information processing in biology

    NASA Astrophysics Data System (ADS)

    Singh, Vijay

    Living organisms perform incredibly well in detecting a signal present in the environment. This information processing is achieved near optimally and quite reliably, even though the sources of signals are highly variable and complex. The work in the last few decades has given us a fair understanding of how individual signal processing units like neurons and cell receptors process signals, but the principles of collective information processing on biological networks are far from clear. Information processing in biological networks, like the brain, metabolic circuits, cellular-signaling circuits, etc., involves complex interactions among a large number of units (neurons, receptors). The combinatorially large number of states such a system can exist in makes it impossible to study these systems from first principles, starting from the interactions between the basic units. The principles of collective information processing on such complex networks can be identified using coarse graining approaches. This could provide insights into the organization and function of complex biological networks. Here I study models of biological networks using continuum dynamics, renormalization, maximum likelihood estimation and information theory. Such coarse graining approaches identify features that are essential for certain processes performed by underlying biological networks. We find that long-range connections in the brain allow for global scale feature detection in a signal. These also suppress the noise and remove any gaps present in the signal. Hierarchical organization with long-range connections leads to large-scale connectivity at low synapse numbers. Time delays can be utilized to separate a mixture of signals with different temporal scales. Our observations indicate that the rules in multivariate signal processing are quite different from traditional single unit signal processing.

  7. Single-cell axotomy of cultured hippocampal neurons integrated in neuronal circuits.

    PubMed

    Gomis-Rüth, Susana; Stiess, Michael; Wierenga, Corette J; Meyn, Liane; Bradke, Frank

    2014-05-01

    An understanding of the molecular mechanisms of axon regeneration after injury is key for the development of potential therapies. Single-cell axotomy of dissociated neurons enables the study of the intrinsic regenerative capacities of injured axons. This protocol describes how to perform single-cell axotomy on dissociated hippocampal neurons containing synapses. Furthermore, to axotomize hippocampal neurons integrated in neuronal circuits, we describe how to set up coculture with a few fluorescently labeled neurons. This approach allows axotomy of single cells in a complex neuronal network and the observation of morphological and molecular changes during axon regeneration. Thus, single-cell axotomy of mature neurons is a valuable tool for gaining insights into cell intrinsic axon regeneration and the plasticity of neuronal polarity of mature neurons. Dissociation of the hippocampus and plating of hippocampal neurons takes ∼2 h. Neurons are then left to grow for 2 weeks, during which time they integrate into neuronal circuits. Subsequent axotomy takes 10 min per neuron and further imaging takes 10 min per neuron.

  8. Neuron-Like Networks Between Ribosomal Proteins Within the Ribosome

    NASA Astrophysics Data System (ADS)

    Poirot, Olivier; Timsit, Youri

    2016-05-01

    From brain to the World Wide Web, information-processing networks share common scale invariant properties. Here, we reveal the existence of neural-like networks at a molecular scale within the ribosome. We show that with their extensions, ribosomal proteins form complex assortative interaction networks in which they communicate through tiny interfaces. The analysis of the crystal structures of 50S eubacterial particles reveals that most of these interfaces involve key phylogenetically conserved residues. The systematic observation of interactions between basic and aromatic amino acids at the interfaces and along the extensions provides new structural insights that may contribute to decipher the molecular mechanisms of signal transmission within or between the ribosomal proteins. Similar to neurons interacting through “molecular synapses”, ribosomal proteins form a network that suggests an analogy with a simple molecular brain in which the “sensory-proteins” innervate the functional ribosomal sites, while the “inter-proteins” interconnect them into circuits suitable to process the information flow that circulates during protein synthesis. It is likely that these circuits have evolved to coordinate both the complex macromolecular motions and the binding of the multiple factors during translation. This opens new perspectives on nanoscale information transfer and processing.

  9. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.

    PubMed

    Pecevski, Dejan; Maass, Wolfgang

    2016-01-01

    Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.
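
    A minimal pair-based version of the spike-timing-dependent plasticity rule that underlies this kind of model is sketched below, with exponential windows and arbitrary constants; the paper additionally combines STDP with intrinsic plasticity and lateral inhibition, which are not shown here.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Weight change for a pre/post spike pair separated by delta_t = t_post - t_pre.

    Pre-before-post (delta_t > 0) potentiates; post-before-pre depresses.
    Time constants are in seconds.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0.0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

# potentiation for a +10 ms pairing, depression for a -10 ms pairing
print(stdp_dw([0.010, -0.010]))
```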

  10. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity

    PubMed Central

    Pecevski, Dejan

    2016-01-01

    Abstract Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference. PMID:27419214

  11. The Rich Club of the C. elegans Neuronal Connectome

    PubMed Central

    Vértes, Petra E.; Ahnert, Sebastian E.; Schafer, William R.; Bullmore, Edward T.

    2013-01-01

    There is increasing interest in topological analysis of brain networks as complex systems, with researchers often using neuroimaging to represent the large-scale organization of nervous systems without precise cellular resolution. Here we used graph theory to investigate the neuronal connectome of the nematode worm Caenorhabditis elegans, which is defined anatomically at a cellular scale as 2287 synaptic connections between 279 neurons. We identified a small number of highly connected neurons as a rich club (N = 11) interconnected with high efficiency and high connection distance. Rich club neurons comprise almost exclusively the interneurons of the locomotor circuits, with known functional importance for coordinated movement. The rich club neurons are connector hubs, with high betweenness centrality, and many intermodular connections to nodes in different modules. On identifying the shortest topological paths (motifs) between pairs of peripheral neurons, the motifs that are found most frequently traverse the rich club. The rich club neurons are born early in development, before visible movement of the animal and before the main phase of developmental elongation of its body. We conclude that the high wiring cost of the globally integrative rich club of neurons in the C. elegans connectome is justified by the adaptive value of coordinated movement of the animal. The economical trade-off between physical cost and behavioral value of rich club organization in a cellular connectome confirms theoretical expectations and recapitulates comparable results from human neuroimaging on much larger scale networks, suggesting that this may be a general and scale-invariant principle of brain network organization. PMID:23575836
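
    The rich-club property reported here is conventionally quantified with the rich-club coefficient phi(k), the density of connections among nodes of degree greater than k. The sketch below computes it with networkx on a synthetic undirected scale-free-like graph as a stand-in; the study itself analysed the curated 279-neuron C. elegans connectome, which is not reproduced here.

```python
import networkx as nx

# synthetic stand-in with hubs (same node count as the C. elegans connectome)
G = nx.barabasi_albert_graph(n=279, m=4, seed=7)

# raw rich-club coefficient phi(k): edge density among nodes with degree > k
phi = nx.rich_club_coefficient(G, normalized=False)
for k in (5, 10, 20, 30):
    if k in phi:
        print(f"phi({k}) = {phi[k]:.3f}")
```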

  12. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    PubMed

    Jovanović, Stojan; Rotter, Stefan

    2016-06-01

    The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology (random networks of Erdős-Rényi type and networks with highly interconnected hubs), we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  13. Chimera states in brain networks: Empirical neural vs. modular fractal connectivity

    NASA Astrophysics Data System (ADS)

    Chouzouris, Teresa; Omelchenko, Iryna; Zakharova, Anna; Hlinka, Jaroslav; Jiruska, Premysl; Schöll, Eckehard

    2018-04-01

    Complex spatiotemporal patterns, called chimera states, consist of coexisting coherent and incoherent domains and can be observed in networks of coupled oscillators. The interplay of synchrony and asynchrony in complex brain networks is an important aspect in studies of both the brain function and disease. We analyse the collective dynamics of FitzHugh-Nagumo neurons in complex networks motivated by its potential application to epileptology and epilepsy surgery. We compare two topologies: an empirical structural neural connectivity derived from diffusion-weighted magnetic resonance imaging and a mathematically constructed network with modular fractal connectivity. We analyse the properties of chimeras and partially synchronized states and obtain regions of their stability in the parameter planes. Furthermore, we qualitatively simulate the dynamics of epileptic seizures and study the influence of the removal of nodes on the network synchronizability, which can be useful for applications to epileptic surgery.
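
    A minimal sketch of FitzHugh-Nagumo units coupled nonlocally on a ring, the kind of setup in which chimera-like states are typically searched for, is given below. The dimensionless FHN form, the diffusive coupling through the activator variable, and all parameter values are generic assumptions for illustration; they are not the empirical or modular fractal connectivities studied in the paper.

```python
import numpy as np

def fhn_ring(n=100, coupling_range=0.35, sigma=0.2, eps=0.05, a=0.5,
             t_max=100.0, dt=0.002, seed=3):
    """FitzHugh-Nagumo oscillators on a ring, each diffusively coupled (in the
    activator u) to its neighbours within coupling_range*n nodes on either side.
    Integrated with a plain forward-Euler scheme."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1.0, 1.0, n)
    v = rng.uniform(-1.0, 1.0, n)
    p = int(coupling_range * n)                  # coupling radius in nodes
    A = np.zeros((n, n))                         # nonlocal ring adjacency
    for i in range(n):
        for d in range(1, p + 1):
            A[i, (i - d) % n] = A[i, (i + d) % n] = 1.0
    for _ in range(int(t_max / dt)):
        coupling = A @ u / (2 * p) - u           # mean neighbour activity minus own
        u_new = u + dt * (u - u**3 / 3.0 - v + sigma * coupling) / eps
        v_new = v + dt * (u + a)
        u, v = u_new, v_new
    # crude synchrony diagnostic: order parameter of the geometric phases
    phases = np.arctan2(v, u)
    return np.abs(np.exp(1j * phases).mean())

print("order parameter after transient:", round(fhn_ring(), 3))
```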

  14. Transforming Growth Factor β/Activin signaling in neurons increases susceptibility to starvation.

    PubMed

    Chng, Wen-Bin Alfred; Koch, Rafael; Li, Xiaoxue; Kondo, Shu; Nagoshi, Emi; Lemaitre, Bruno

    2017-01-01

    Animals rely on a complex signaling network to mobilize their energy stores during starvation. We have previously shown that the sugar-responsive TGFβ/Activin pathway, activated through the TGFβ ligand Dawdle, plays a central role in shaping the post-prandial digestive competence in the Drosophila midgut. Nevertheless, little is known about the TGFβ/Activin signaling in sugar metabolism beyond the midgut. Here, we address the importance of Dawdle (Daw) after carbohydrate ingestion. We found that Daw expression is coupled to dietary glucose through the evolutionarily conserved Mio-Mlx transcriptional complex. In addition, Daw activates the TGFβ/Activin signaling in neuronal populations to regulate triglyceride and glycogen catabolism and energy homeostasis. Loss of those neurons depleted metabolic reserves and rendered flies susceptible to starvation.

  15. Natural lecithin promotes neural network complexity and activity

    PubMed Central

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-01-01

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called “essential” fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications. PMID:27228907

  16. Natural lecithin promotes neural network complexity and activity.

    PubMed

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-05-27

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called "essential" fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications.

  17. The characteristic patterns of neuronal avalanches in mice under anesthesia and at rest: An investigation using constrained artificial neural networks

    PubMed Central

    Knöpfel, Thomas; Leech, Robert

    2018-01-01

    Local perturbations within complex dynamical systems can trigger cascade-like events that spread across significant portions of the system. Cascades of this type have been observed across a broad range of scales in the brain. Studies of these cascades, known as neuronal avalanches, usually report the statistics of large numbers of avalanches, without probing the characteristic patterns produced by the avalanches themselves. This is partly due to limitations in the extent or spatiotemporal resolution of commonly used neuroimaging techniques. In this study, we overcome these limitations by using optical voltage imaging (genetically encoded voltage indicators). This allows us to record cortical activity in vivo across an entire cortical hemisphere, at both high spatial (~30 μm) and high temporal (~20 ms) resolution, in mice that are either anesthetized or awake. We then use artificial neural networks to identify the characteristic patterns created by neuronal avalanches in our data. The avalanches in the anesthetized cortex are most accurately classified by an artificial neural network architecture that simultaneously connects spatial and temporal information. This is in contrast with the awake cortex, in which avalanches are most accurately classified by an architecture that treats spatial and temporal information separately, owing to the increased levels of spatiotemporal complexity. This is in keeping with reports of higher levels of spatiotemporal complexity in the awake brain coinciding with features of a dynamical system operating close to criticality. PMID:29795654

  18. Neurons of self-defence: neuronal innervation of the exocrine defence glands in stick insects.

    PubMed

    Stolz, Konrad; von Bredow, Christoph-Rüdiger; von Bredow, Yvette M; Lakes-Harlan, Reinhard; Trenczek, Tina E; Strauß, Johannes

    2015-01-01

    Stick insects (Phasmatodea) use repellent chemical substances (allomones) for defence, which are released from so-called defence glands in the prothorax. These glands differ in size between species and are under neuronal control from the CNS. The detailed neural innervation and possible differences between species have not been studied so far. Using axonal tracing, the neuronal innervation is investigated in four species. The aim is to document the complexity of defence gland innervation by peripheral nerves and central motoneurons in stick insects. In the species studied here, the defence gland is innervated by the intersegmental nerve complex (ISN), which is formed by three nerves from the prothoracic ganglion (T1) and suboesophageal ganglion (SOG), as well as by a distinct suboesophageal nerve (Nervus anterior of the suboesophageal ganglion). In Carausius morosus and Sipyloidea sipylus, axonal tracing confirmed an innervation of the defence glands by this N. anterior SOG as well as by the N. anterior T1 and N. posterior SOG from the intersegmental nerve complex. In Peruphasma schultei, which has rather large defence glands, only the innervation by the N. anterior SOG was documented by axonal tracing. In the central nervous system of all species, 3-4 neuron types are identified by axonal tracing which send axons into the N. anterior SOG, likely innervating the defence gland as well as adjacent muscles. These neurons are mainly suboesophageal neurons, with one intersegmental neuron located in the prothoracic ganglion. The neuron types are conserved across the species studied, but the combination of neuron types is not identical. In addition, the central nervous system in S. sipylus contains one suboesophageal and one prothoracic neuron type with axons in the intersegmental nerve complex contacting the defence gland. Axonal tracing thus reveals a very complex innervation pattern of the defence glands of Phasmatodea, involving different neurons in different nerves from two adjacent body segments. The gland size correlates with the size of a neuron soma in the suboesophageal ganglion, which likely controls gland contraction. In P. schultei, the innervation pattern appears simplified to the anterior suboesophageal nerve. Hence, some evolutionary changes are notable in a conserved neuronal network.

  19. Development of coherent neuronal activity patterns in mammalian cortical networks: common principles and local heterogeneity.

    PubMed

    Egorov, Alexei V; Draguhn, Andreas

    2013-01-01

    Many mammals are born in a very immature state and develop their rich repertoire of behavioral and cognitive functions postnatally. This development goes in parallel with changes in the anatomical and functional organization of cortical structures which are involved in most complex activities. The emerging spatiotemporal activity patterns in multi-neuronal cortical networks may indeed form a direct neuronal correlate of systemic functions like perception, sensorimotor integration, decision making or memory formation. During recent years, several studies--mostly in rodents--have shed light on the ontogenesis of such highly organized patterns of network activity. While each local network has its own peculiar properties, some general rules can be derived. We therefore review and compare data from the developing hippocampus, neocortex and--as an intermediate region--entorhinal cortex. All cortices seem to follow a characteristic sequence starting with uncorrelated activity in uncoupled single neurons, where transient activity seems to have mostly trophic effects. In rodents, before and shortly after birth, cortical networks develop weakly coordinated multineuronal discharges which have been termed synchronous plateau assemblies (SPAs). While these patterns rely mostly on electrical coupling by gap junctions, the subsequent increase in number and maturation of chemical synapses leads to the generation of large-scale coherent discharges. These patterns have been termed giant depolarizing potentials (GDPs) for predominantly GABA-induced events or early network oscillations (ENOs) for mostly glutamatergic bursts, respectively. During the third to fourth postnatal week, cortical areas reach their final activity patterns with distinct network oscillations and highly specific neuronal discharge sequences which support adult behavior. While some of the mechanisms underlying the maturation of network activity have been elucidated, much work remains to be done in order to fully understand the rules governing the transition from immature to mature patterns of network activity. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  20. Network activity influences the subthreshold and spiking visual responses of pyramidal neurons in the three-layer turtle cortex.

    PubMed

    Wright, Nathaniel C; Wessel, Ralf

    2017-10-01

    A primary goal of systems neuroscience is to understand cortical function, typically by studying spontaneous and stimulus-modulated cortical activity. Mounting evidence suggests a strong and complex relationship exists between the ongoing and stimulus-modulated cortical state. To date, most work in this area has been based on spiking in populations of neurons. While advantageous in many respects, this approach is limited in scope: it records the activity of a minority of neurons and gives no direct indication of the underlying subthreshold dynamics. Membrane potential recordings can fill these gaps in our understanding, but stable recordings are difficult to obtain in vivo. Here, we recorded subthreshold cortical visual responses in the ex vivo turtle eye-attached whole brain preparation, which is ideally suited for such a study. We found that, in the absence of visual stimulation, the network was "synchronous"; neurons displayed network-mediated transitions between hyperpolarized (Down) and depolarized (Up) membrane potential states. The prevalence of these slow-wave transitions varied across turtles and recording sessions. Visual stimulation evoked similar Up states, which were on average larger and less reliable when the ongoing state was more synchronous. Responses were muted when immediately preceded by large, spontaneous Up states. Evoked spiking was sparse, highly variable across trials, and mediated by concerted synaptic inputs that were, in general, only very weakly correlated with inputs to nearby neurons. Together, these results highlight the multiplexed influence of the cortical network on the spontaneous and sensory-evoked activity of individual cortical neurons. NEW & NOTEWORTHY Most studies of cortical activity focus on spikes. Subthreshold membrane potential recordings can provide complementary insight, but stable recordings are difficult to obtain in vivo. Here, we recorded the membrane potentials of cortical neurons during ongoing and visually evoked activity. We observed a strong relationship between network and single-neuron evoked activity spanning multiple temporal scales. The membrane potential perspective of cortical dynamics thus highlights the influence of intrinsic network properties on visual processing. Copyright © 2017 the American Physiological Society.

  1. STDP-based spiking deep convolutional neural networks for object recognition.

    PubMed

    Kheradpisheh, Saeed Reza; Ganjtabesh, Mohammad; Thorpe, Simon J; Masquelier, Timothée

    2018-03-01

    Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on the Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions. Copyright © 2017 Elsevier Ltd. All rights reserved.
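
    The sketch below shows the classic pair-based STDP weight update (exponential time window), which conveys the causal pre-before-post logic such networks rely on; note that the paper itself uses a simplified STDP rule and a convolutional, latency-coded architecture, neither of which is reproduced here, and the parameter values are illustrative.

        import numpy as np

        # Minimal pair-based STDP sketch (illustrative parameters).
        def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
            """Weight change for a single pre/post spike pair (times in ms)."""
            dt = t_post - t_pre
            if dt >= 0:                                # pre before post -> potentiation
                return a_plus * np.exp(-dt / tau)
            return -a_minus * np.exp(dt / tau)         # post before pre -> depression

        w = 0.5
        for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 71.0)]:
            w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)   # keep weight in [0, 1]
            print(f"pre={t_pre} post={t_post} -> w={w:.3f}")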

  2. Morphology of Dbx1 respiratory neurons in the preBötzinger complex and reticular formation of neonatal mice.

    PubMed

    Akins, Victoria T; Weragalaarachchi, Krishanthi; Picardo, Maria Cristina D; Revill, Ann L; Del Negro, Christopher A

    2017-08-01

    The relationship between neuron morphology and function is a perennial issue in neuroscience. Information about synaptic integration, network connectivity, and the specific roles of neuronal subpopulations can be obtained through morphological analysis of key neurons within a microcircuit. Here we present morphologies of two classes of brainstem respiratory neurons. First, interneurons derived from Dbx1-expressing precursors (Dbx1 neurons) in the preBötzinger complex (preBötC) of the ventral medulla that generate the rhythm for inspiratory breathing movements. Second, Dbx1 neurons of the intermediate reticular formation that influence the motor pattern of pharyngeal and lingual movements during the inspiratory phase of the breathing cycle. We describe the image acquisition and subsequent digitization of morphologies of respiratory Dbx1 neurons from the preBötC and the intermediate reticular formation that were first recorded in vitro. These data can be analyzed comparatively to examine how morphology influences the roles of Dbx1 preBötC and Dbx1 reticular interneurons in respiration and can also be utilized to create morphologically accurate compartmental models for simulation and modeling of respiratory circuits.

  3. Pharmacological Tools to Study the Role of Astrocytes in Neural Network Functions.

    PubMed

    Peña-Ortega, Fernando; Rivera-Angulo, Ana Julia; Lorea-Hernández, Jonathan Julio

    2016-01-01

    Although astrocytes and microglia do not communicate by electrical impulses, they can communicate efficiently with each other and with neurons to participate in complex neural functions requiring broad cell-communication and long-lasting regulation of brain function. Glial cells express many receptors in common with neurons and secrete gliotransmitters as well as neurotrophic and neuroinflammatory factors, which allow them to modulate synaptic transmission and neural excitability. All these properties allow glial cells to influence the activity of neuronal networks. Thus, the incorporation of glial cell function into the understanding of nervous system dynamics will provide a more accurate view of brain function. Our current knowledge of glial cell biology is providing us with experimental tools to explore their participation in neural network modulation. In this chapter, we review some of the classical, as well as some recent, pharmacological tools developed for the study of astrocytes' influence on neural function. We also provide some examples of the use of these pharmacological agents to understand the role of astrocytes in neural network function and dysfunction.

  4. Defects formation and spiral waves in a network of neurons in presence of electromagnetic induction.

    PubMed

    Rostami, Zahra; Jafari, Sajad

    2018-04-01

    The complex anatomical and physiological structure of an excitable tissue (e.g., cardiac tissue) in the body can produce different electrical activities through normal or abnormal behavior. Abnormalities of the excitable tissue arising from different biological causes can lead to the formation of defects. Such defects can give rise to successive waves that may end up in additional self-organized beating behaviors such as spiral waves or target waves. In this study, the formation of defects and the resulting emitted waves in an excitable tissue are investigated. We consider a square array network of neurons with nearest-neighbor connections to describe the excitable tissue. Fundamentally, the electrophysiological properties of ion currents in the body are responsible for the exhibition of electrical spatiotemporal patterns. More precisely, the fluctuation of accumulated ions inside and outside of the cell causes time-varying electrical and magnetic fields. Considering the undeniable mutual effects of the electrical and magnetic fields, we propose a new Hindmarsh-Rose (HR) neuronal model for the local dynamics of each individual neuron in the network, in which the influence of magnetic flux on the membrane potential is defined. This improved model holds more bifurcation parameters. Moreover, the dynamical behavior of the tissue is investigated in quiescent, spiking, bursting and even chaotic states. The resulting spatiotemporal patterns are represented and the time series of some sampled neurons are displayed as well.
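
    A single-neuron version of this idea can be written compactly: the three standard Hindmarsh-Rose equations gain a fourth magnetic-flux variable, and a memristive feedback term k*rho(phi)*x couples the flux back onto the membrane potential. In the sketch below, the parameter values and the quadratic memductance function are illustrative assumptions rather than the exact choices of the paper.

        import numpy as np

        # Improved Hindmarsh-Rose neuron with a magnetic-flux variable phi.
        a, b, c, d = 1.0, 3.0, 1.0, 5.0
        r, s, x0 = 0.006, 4.0, -1.6
        I_ext = 3.0
        k, k1, k2 = 0.5, 0.9, 0.5          # flux feedback gain and flux dynamics (illustrative)
        alpha, beta = 0.1, 0.02            # memductance rho(phi) = alpha + 3*beta*phi**2

        x, y, z, phi = 0.1, 0.0, 0.0, 0.0
        dt, steps = 0.01, 200000
        trace = np.empty(steps)

        for n in range(steps):
            rho = alpha + 3.0 * beta * phi**2
            dx = y - a * x**3 + b * x**2 - z + I_ext - k * rho * x   # flux feedback on x
            dy = c - d * x**2 - y
            dz = r * (s * (x - x0) - z)
            dphi = k1 * x - k2 * phi
            x, y, z, phi = x + dt*dx, y + dt*dy, z + dt*dz, phi + dt*dphi
            trace[n] = x

        print("membrane-potential range:", round(trace.min(), 2), "to", round(trace.max(), 2))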

  5. Reverse engineering a mouse embryonic stem cell-specific transcriptional network reveals a new modulator of neuronal differentiation

    PubMed Central

    De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego

    2013-01-01

    Gene expression profiles can be used to infer previously unknown transcriptional regulatory interaction among thousands of genes, via systems biology ‘reverse engineering’ approaches. We ‘reverse engineered’ an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression (‘hubs’). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central ‘hub’ of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neurons marker Vglut2 and a strong down-regulation of the GABAergic neurons marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13, involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells possibly by epigenetic-mediated transcriptional regulation. PMID:23180766

  6. Shedding Light on Words and Sentences: Near-Infrared Spectroscopy in Language Research

    ERIC Educational Resources Information Center

    Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell; Obrig, Hellmuth

    2012-01-01

    Investigating the neuronal network underlying language processing may contribute to a better understanding of how the brain masters this complex cognitive function with surprising ease and how language is acquired at a fast pace in infancy. Modern neuroimaging methods permit to visualize the evolvement and the function of the language network. The…

  7. The cells of cajal-retzius: still a mystery one century after.

    PubMed

    Soriano, Eduardo; Del Río, José Antonio

    2005-05-05

    Cajal-Retzius (CR) cells are an enigmatic class of neurons located at the surface of the cerebral cortex, playing a major role in cortical development. In this review, we discuss several distinct features of these neurons and the mechanisms by which they regulate cortical development. Many CR cells likely have extracortical origin and undergo cell death during development. Recent genetic studies report unique patterns of gene expression in CR cells, which may help to explain the developmental processes in which they participate. Moreover, a number of studies indicate that CR cells, and their secreted gene product, reelin, are involved in neuronal migration by acting on two key partners, migrating neurons and radial glial cells. Emerging data show that these neurons are a critical part of an early and complex network of neural activity in layer I, supporting the notion that CR cells modulate cortical maturation. Given these key and complex developmental properties, it is therefore conceivable for CR cells to be implicated in the pathogenesis of a variety of neurological disorders.

  8. Downstream effects of hippocampal sharp wave ripple oscillations on medial entorhinal cortex layer V neurons in vitro.

    PubMed

    Roth, Fabian C; Beyer, Katinka M; Both, Martin; Draguhn, Andreas; Egorov, Alexei V

    2016-12-01

    The entorhinal cortex (EC) is a critical component of the medial temporal lobe (MTL) memory system. Local networks within the MTL express a variety of state-dependent network oscillations that are believed to organize neuronal activity during memory formation. The peculiar pattern of sharp wave-ripple complexes (SPW-R) entrains neurons by a very fast oscillation at ∼200 Hz in the hippocampal areas CA3 and CA1 and then propagates through the "output loop" into the EC. The precise mechanisms of SPW-R propagation and the resulting cellular input patterns in the mEC are, however, largely unknown. We therefore investigated the activity of layer V (LV) principal neurons of the medial EC (mEC) during SPW-R oscillations in horizontal mouse brain slices. Intracellular recordings in the mEC were combined with extracellular monitoring of propagating network activity. SPW-R in CA1 were regularly followed by negative field potential deflections in the mEC. Propagation of SPW-R activity from CA1 to the mEC was mostly monosynaptic and excitatory, such that synaptic input to mEC LV neurons directly reflected unit activity in CA1. Comparison with propagating network activity from CA3 to CA1 revealed a similar role of excitatory long-range connections for both regions. However, SPW-R-induced activity in CA1 involved strong recruitment of rhythmic synaptic inhibition and corresponding fast field oscillations, in contrast to the mEC. These differences between features of propagating SPW-R emphasize the differential processing of network activity by each local network of the hippocampal output loop. © 2016 Wiley Periodicals, Inc.

  9. Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement.

    PubMed

    Yue, Shigang; Rind, F Claire

    2006-05-01

    The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the images of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms. In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation for collision detection with complex backgrounds. The isolated excitation caused by background detail will be filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds.
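
    A toy version of the "grouped excitation" idea can be written in a few lines: frame differencing produces excitation, a neighbourhood average detects whether that excitation is clustered, and isolated excitation is filtered out before summation. The frames, threshold and filter size below are invented stand-ins, not the published LGMD model or its robot input.

        import numpy as np
        from scipy.ndimage import uniform_filter

        rng = np.random.default_rng(0)
        prev, curr = rng.random((64, 64)), rng.random((64, 64))
        curr[20:40, 20:40] += 0.8                     # a looming bright patch in the new frame

        excitation = np.abs(curr - prev)              # photoreceptor-like frame difference
        local = uniform_filter(excitation, size=5)    # grouped excitation within a 5x5 patch
        passed = np.where(local > 0.4, excitation, 0) # isolated excitation is filtered out

        print("LGMD-like summed excitation:", round(float(passed.sum()), 1))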

  10. Network feedback regulates motor output across a range of modulatory neuron activity

    PubMed Central

    Spencer, Robert M.

    2016-01-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5–35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. PMID:27030739

  11. Neuronal networks and self-organizing maps: new computer techniques in the acoustic evaluation of the infant cry.

    PubMed

    Schönweiler, R; Kaese, S; Möller, S; Rinscheid, A; Ptok, M

    1996-12-05

    Neuronal networks are computer-based techniques for the evaluation and control of complex information systems and processes. So far, they have been used in engineering, telecommunications, artificial speech and speech recognition. A newer approach among neuronal networks is the self-organizing map (Kohonen map). In the 'learning' phase, the map adapts to the patterns of the primary signals. In the phase of 'using the map', an input signal activates the field of primary signals it most resembles, which is called the 'winner'. In our study, we recorded the cries of newborns and young infants using digital audio tape (DAT) and a high-quality microphone. The cries were elicited by tactile stimuli while the infants wore headphones. In 27 cases, delayed auditory feedback was presented to the children using the headphones and an additional three-head tape-recorder. Spectrographic characteristics of the cries were classified by 20-step Bark spectra and then applied to the neuronal networks. It was possible to recognize similarities between different cries of the same children as well as interindividual differences, which are also audible to experienced listeners. Differences were obvious in profound hearing loss. We know much about the cries of both healthy and sick infants, but a reliable investigation regimen that can be used for routine clinical purposes has not yet been developed. If, in the future, it becomes possible to classify spectrographic characteristics automatically, even if they are not audible, neuronal networks may be helpful in the early diagnosis of infant diseases.
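
    A minimal Kohonen map illustrating the learning and 'winner' steps described above is sketched here; the random 20-dimensional vectors stand in for the 20-step Bark spectra of recorded cries, and the map size, learning-rate schedule and neighbourhood function are illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)
        n_units, dim = 10, 20
        weights = rng.random((n_units, dim))           # 1-D map of 10 units
        data = rng.random((200, dim))                  # placeholder cry spectra

        def train(weights, data, epochs=50, lr0=0.5, sigma0=3.0):
            for e in range(epochs):
                lr = lr0 * (1 - e / epochs)            # decaying learning rate
                sigma = max(sigma0 * (1 - e / epochs), 0.5)
                for x in rng.permutation(data):
                    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                    dist = np.abs(np.arange(n_units) - winner)
                    h = np.exp(-dist**2 / (2 * sigma**2))     # neighbourhood function
                    weights += lr * h[:, None] * (x - weights)
            return weights

        weights = train(weights, data)
        print("winner unit for the first spectrum:",
              int(np.argmin(np.linalg.norm(weights - data[0], axis=1))))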

  12. Vulnerability-Based Critical Neurons, Synapses, and Pathways in the Caenorhabditis elegans Connectome

    PubMed Central

    Kim, Seongkyun; Kim, Hyoungkyu; Kralik, Jerald D.; Jeong, Jaeseung

    2016-01-01

    Determining the fundamental architectural design of complex nervous systems will lead to significant medical and technological advances. Yet it remains unclear how nervous systems evolved highly efficient networks with near-optimal sharing of pathways that nonetheless produce multiple distinct behaviors to reach the organism’s goals. To determine this, the nematode roundworm Caenorhabditis elegans is an attractive model system. Progress has been made in delineating the behavioral circuits of C. elegans; however, many details are unclear, including the specific functions of every neuron and synapse, as well as the extent to which the behavioral circuits are separate and parallel versus integrative and serial. Network analysis provides a normative approach to help specify the network design. We investigated the vulnerability of the Caenorhabditis elegans connectome by performing computational experiments that (a) “attacked” 279 individual neurons and 2,990 weighted synaptic connections (composed of 6,393 chemical synapses and 890 electrical junctions) and (b) quantified the effects of each removal on global network properties that influence information processing. The analysis identified 12 critical neurons and 29 critical synapses for establishing fundamental network properties. These critical constituents were found to be control elements—i.e., those with the most influence over multiple underlying pathways. Additionally, the critical synapses formed into circuit-level pathways. These emergent pathways provide evidence for (a) the importance of backward locomotion, avoidance behavior, and social feeding behavior to the organism; (b) the potential roles of specific neurons whose functions have been unclear; and (c) both parallel and serial design elements in the connectome—i.e., specific evidence for a mixed architectural design. PMID:27540747
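
    The attack logic itself is simple to express: remove one element, recompute a global property, and rank elements by the resulting change. The sketch below does this for node removals on a random stand-in graph using global efficiency as the property; the real study used the weighted C. elegans connectome and several network measures.

        import networkx as nx

        # Toy "attack" analysis on a random stand-in graph.
        G = nx.gnp_random_graph(60, 0.08, seed=1)
        base = nx.global_efficiency(G)

        impact = {}
        for node in list(G.nodes):
            H = G.copy()
            H.remove_node(node)
            impact[node] = base - nx.global_efficiency(H)   # drop in efficiency after removal

        critical = sorted(impact, key=impact.get, reverse=True)[:5]
        print("baseline global efficiency:", round(base, 3))
        print("most critical nodes (largest efficiency drop):", critical)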

  13. Complex Network Analysis of CA3 Transcriptome Reveals Pathogenic and Compensatory Pathways in Refractory Temporal Lobe Epilepsy

    PubMed Central

    Bando, Silvia Yumi; Silva, Filipi Nascimento; Costa, Luciano da Fontoura; Silva, Alexandre V.; Pimentel-Silva, Luciana R.; Castro, Luiz HM.; Wen, Hung-Tzu; Amaro, Edson; Moreira-Filho, Carlos Alberto

    2013-01-01

    We previously described – studying transcriptional signatures of hippocampal CA3 explants – that febrile (FS) and afebrile (NFS) forms of refractory mesial temporal lobe epilepsy constitute two distinct genomic phenotypes. That network analysis was based on a limited number (hundreds) of differentially expressed genes (DE networks) among a large set of valid transcripts (close to twenty thousand). Here we developed a methodology for complex network visualization (3D) and analysis that allows the categorization of network nodes according to distinct hierarchical levels of gene-gene connections (node degree) and of interconnection between node neighbors (concentric node degree). Hubs are highly connected nodes, VIPs have low node degree but connect only with hubs, and high-hubs have VIP status and a high overall number of connections. Studying the whole set of CA3 valid transcripts we: i) obtained complete transcriptional networks (CO) for FS and NFS phenotypic groups; ii) examined how CO and DE networks are related; iii) characterized genomic and molecular mechanisms underlying FS and NFS phenotypes, identifying potential novel targets for therapeutic interventions. We found that: i) DE hubs and VIPs are evenly distributed inside the CO networks; ii) most DE hubs and VIPs are related to synaptic transmission and neuronal excitability, whereas most CO hubs, VIPs and high-hubs are related to neuronal differentiation, homeostasis and neuroprotection, indicating compensatory mechanisms. Complex network visualization and analysis is a useful tool for systems biology approaches to multifactorial diseases. The network centrality observed for hubs, VIPs and high-hubs of CO networks is consistent with the network disease model, where a group of nodes whose perturbation leads to a disease phenotype occupies a central position in the network. Conceivably, the chance of exerting therapeutic effects through the modulation of particular genes will be higher if these genes are highly interconnected in transcriptional networks. PMID:24278214
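
    The hub/VIP/high-hub categorisation can be illustrated on any graph once a degree threshold is fixed. The sketch below uses a scale-free test graph and a top-10% degree cut-off, both of which are arbitrary stand-ins for the transcriptional networks and thresholds of the study.

        import networkx as nx

        G = nx.barabasi_albert_graph(200, 3, seed=2)
        deg = dict(G.degree())
        hub_cut = sorted(deg.values())[int(0.9 * len(deg))]    # top 10% of degrees

        hubs = {n for n, d in deg.items() if d >= hub_cut}
        # VIPs: low-degree nodes whose neighbours are all hubs
        vips = {n for n in G if G[n] and n not in hubs and all(nb in hubs for nb in G[n])}
        # High-hubs: hubs that themselves connect only with other hubs
        high_hubs = {n for n in hubs if all(nb in hubs for nb in G[n])}

        print(f"{len(hubs)} hubs, {len(vips)} VIPs, {len(high_hubs)} high-hubs")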

  14. Label-free optical detection of action potential in mammalian neurons (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Batabyal, Subrata; Satpathy, Sarmishtha; Bui, Loan; Kim, Young-Tae; Mohanty, Samarendra K.; Davé, Digant P.

    2017-02-01

    Electrophysiology techniques are the gold standard in neuroscience for studying functionality from a single neuron to a complex neuronal network. However, electrophysiology techniques are not flawless: they are invasive in nature, the procedures are cumbersome to implement, and their capability of being used as a high-throughput recording system is limited. Long-term studies of neuronal functionality with the aid of electrophysiology are also not feasible. Non-invasive stimulation and detection of neuronal electrical activity has been a long-standing goal in neuroscience. The introduction of optogenetics has ushered in the era of non-invasive optical stimulation of neurons, which is revolutionizing neuroscience research. Optical detection of neuronal activity that is comparable to electrophysiology, however, remains elusive. A number of optical techniques have been reported for recording neuronal electrical activity, but none is capable of reliably measuring action potential spikes in a way comparable to electrophysiology. Optical detection of action potentials with voltage-sensitive fluorescent reporters is a potential alternative to electrophysiology techniques, but these approaches rely heavily on secondary reporters, which are often toxic, suffer from background fluorescence, and have slow responses and low SNR, making them far from ideal. The detection of single action potentials in one shot (without averaging) and in a truly label-free way has been elusive so far. In this report, we demonstrate the optical detection of single neuronal spikes in a cultured mammalian neuronal network without using any exogenous labels. To the best of our knowledge, this is the first demonstration of label-free optical detection of single action potentials in a mammalian neuronal network, which was achieved using a high-speed phase-sensitive interferometer. We carried out stimulation and inhibition of neuronal firing using glutamate and tetrodotoxin, respectively, to demonstrate the different outcomes (stimulation and inhibition) revealed in the optical signal. We hypothesize that the interrogating optical beam is modulated during neuronal firing by electro-motility-driven membrane fluctuation in conjunction with electrical wave propagation in the cellular system.

  15. Reconciling genetic evolution and the associative learning account of mirror neurons through data-acquisition mechanisms.

    PubMed

    Lotem, Arnon; Kolodny, Oren

    2014-04-01

    An associative learning account of mirror neurons should not preclude genetic evolution of its underlying mechanisms. On the contrary, an associative learning framework for cognitive development should seek heritable variation in the learning rules and in the data-acquisition mechanisms that construct associative networks, demonstrating how small genetic modifications of associative elements can give rise to the evolution of complex cognition.

  16. Arp2/3 complex–dependent actin networks constrain myosin II function in driving retrograde actin flow

    PubMed Central

    Yang, Qing; Zhang, Xiao-Feng; Pollard, Thomas D.

    2012-01-01

    The Arp2/3 complex nucleates actin filaments to generate networks at the leading edge of motile cells. Nonmuscle myosin II produces contractile forces involved in driving actin network translocation. We inhibited the Arp2/3 complex and/or myosin II with small molecules to investigate their respective functions in neuronal growth cone actin dynamics. Inhibition of the Arp2/3 complex with CK666 reduced barbed end actin assembly site density at the leading edge, disrupted actin veils, and resulted in veil retraction. Strikingly, retrograde actin flow rates increased with Arp2/3 complex inhibition; however, when myosin II activity was blocked, Arp2/3 complex inhibition now resulted in slowing of retrograde actin flow and veils no longer retracted. Retrograde flow rate increases induced by Arp2/3 complex inhibition were independent of Rho kinase activity. These results provide evidence that, although the Arp2/3 complex and myosin II are spatially segregated, actin networks assembled by the Arp2/3 complex can restrict myosin II–dependent contractility with consequent effects on growth cone motility. PMID:22711700

  17. Modeling spike-wave discharges by a complex network of neuronal oscillators.

    PubMed

    Medvedeva, Tatiana M; Sysoeva, Marina V; van Luijtelaar, Gilles; Sysoev, Ilya V

    2018-02-01

    The organization of the neural networks and the mechanisms that generate the spike-wave discharges (SWDs) highly stereotypical for absence epilepsy are heavily debated. Here we describe a model that can reproduce both the characteristics of SWDs and the dynamics of coupling between brain regions, relying mainly on the properties of hierarchically organized networks of a large number of neuronal oscillators. We used a two-level mesoscale model. The first level consists of three structures: the nervus trigeminus serving as an input, the thalamus and the somatosensory cortex; the second level consists of groups of nearby neurons belonging to one of the three modeled structures. The model reproduces the main features of the transition from normal to epileptiform activity and its spontaneous abortion: an increase in the oscillation amplitude, the emergence of the main frequency and its higher harmonics, and the ability to generate trains of seizures. The model was stable with respect to variations in the structure of couplings and to scaling. Analysis of the interactions between model structures from their time series using the Granger causality method showed that the model reproduced the preictal coupling increase detected previously from experimental data. SWDs can be generated by changes in network organization. It is proposed that a specific pathological architecture of couplings in the brain is necessary to allow the transition from normal to epileptiform activity, in addition to the complex intrinsic and synaptic mechanisms modeled and reported by others. Copyright © 2017 Elsevier Ltd. All rights reserved.
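
    The coupling analysis mentioned above can be reproduced in miniature with a standard pairwise Granger test. The sketch below generates two coupled autoregressive surrogates (standing in for thalamic and cortical signals, with the thalamic one driving the cortical one) and asks whether the driver Granger-causes the target; it relies on statsmodels and is not the authors' time-series pipeline.

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        n = 2000
        thal = np.zeros(n)
        ctx = np.zeros(n)
        for t in range(1, n):
            thal[t] = 0.6 * thal[t-1] + rng.normal()
            ctx[t] = 0.5 * ctx[t-1] + 0.4 * thal[t-1] + rng.normal()   # thalamus drives cortex

        # The test asks whether the second column Granger-causes the first.
        res = grangercausalitytests(np.column_stack([ctx, thal]), maxlag=2)
        print("p-value (lag 2, F-test):", res[2][0]["ssr_ftest"][1])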

  18. The C. elegans Connectome Consists of Homogenous Circuits with Defined Functional Roles

    PubMed Central

    Azulay, Aharon; Zaslaver, Alon

    2016-01-01

    A major goal of systems neuroscience is to decipher the structure-function relationship in neural networks. Here we study network functionality in light of the common-neighbor-rule (CNR) in which a pair of neurons is more likely to be connected the more common neighbors it shares. Focusing on the fully-mapped neural network of C. elegans worms, we establish that the CNR is an emerging property in this connectome. Moreover, sets of common neighbors form homogenous structures that appear in defined layers of the network. Simulations of signal propagation reveal their potential functional roles: signal amplification and short-term memory at the sensory/inter-neuron layer, and synchronized activity at the motoneuron layer supporting coordinated movement. A coarse-grained view of the neural network based on homogenous connected sets alone reveals a simple modular network architecture that is intuitive to understand. These findings provide a novel framework for analyzing larger, more complex, connectomes once these become available. PMID:27606684
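
    The common-neighbor-rule itself is easy to test on any graph: bin node pairs by their number of shared neighbours and estimate the connection probability in each bin. The sketch below does this on a small-world stand-in graph rather than the actual C. elegans connectome.

        import itertools
        from collections import defaultdict
        import networkx as nx

        G = nx.watts_strogatz_graph(150, 6, 0.2, seed=3)

        counts = defaultdict(lambda: [0, 0])       # n_common -> [pairs, connected pairs]
        for u, v in itertools.combinations(G.nodes, 2):
            n_common = len(list(nx.common_neighbors(G, u, v)))
            counts[n_common][0] += 1
            counts[n_common][1] += int(G.has_edge(u, v))

        for k in sorted(counts):
            pairs, connected = counts[k]
            print(f"{k} common neighbours: P(connected) = {connected / pairs:.3f} ({pairs} pairs)")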

  19. Modulation and detection of single neuron activity using spin transfer nano-oscillators

    NASA Astrophysics Data System (ADS)

    Algarin, Jose Miguel; Ramaswamy, Bharath; Venuti, Lucy; Swierzbinski, Matthew; Villar, Pablo; Chen, Yu-Jin; Krivorotov, Ilya; Weinberg, Irving N.; Herberholz, Jens; Araneda, Ricardo; Shapiro, Benjamin; Waks, Edo

    2017-09-01

    The brain is a complex network of interconnected circuits that exchange electrical signals with each other. These electrical signals provide insight into how neural circuits code information and give rise to sensations, thoughts, emotions and actions. Current methods to detect and modulate these electrical signals use implanted electrodes or optical fields with light-sensitive dyes in the brain. These techniques require complex surgeries or suffer from low resolution. In this talk we explore a new method to both image and stimulate single neurons using spintronics. We propose using Spin Transfer Nano-Oscillators (STNOs) as nanoscale sensors that convert neuronal action potentials to microwave field oscillations that can be detected wirelessly by magnetic induction. We will describe our recent proof-of-concept demonstration of both detection and wireless modulation of neuronal activity using STNOs. For detection, we use electrodes to connect an STNO to a lateral giant crayfish neuron. When we stimulate the neuron, the STNO responds to the neuronal activity with a corresponding microwave signal. For modulation, we stimulate the STNOs wirelessly using an inductively coupled solenoid. The STNO rectifies the induced microwave signal to produce a direct voltage. This direct voltage from the STNO, when applied in the vicinity of a mammalian neuron, changes the frequency of electrical signals produced by the neuron.

  20. Network feedback regulates motor output across a range of modulatory neuron activity.

    PubMed

    Spencer, Robert M; Blitz, Dawn M

    2016-06-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.

  1. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator

    PubMed Central

    Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702

  2. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator.

    PubMed

    Wang, Runchun M; Thakur, Chetan S; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
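
    The neuron model being emulated here is the standard leaky integrate-and-fire unit; a tiny software version, vectorised over a population, is sketched below. The population size, input statistics and membrane parameters are illustrative and bear no relation to the simulator's hardware implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1000
        v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # mV
        tau_m, dt, R = 20.0, 0.1, 10.0                    # ms, ms, input resistance (a.u.)

        v = np.full(N, v_rest)
        spike_count = np.zeros(N, dtype=int)

        for _ in range(int(1000 / dt)):                   # 1 s of simulated time
            I = rng.normal(1.8, 0.5, N)                   # noisy input current (a.u.)
            v += dt * (-(v - v_rest) + R * I) / tau_m     # leaky integration toward rest plus drive
            fired = v >= v_thresh
            spike_count += fired
            v[fired] = v_reset                            # reset after a spike

        print("mean firing rate:", round(spike_count.mean(), 1), "Hz")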

  3. Altered neuronal network and rescue in a human MECP2 duplication model

    PubMed Central

    Nageshappa, Savitha; Carromeu, Cassiano; Trujillo, Cleber A.; Mesci, Pinar; Espuny-Camacho, Ira; Pasciuto, Emanuela; Vanderhaeghen, Pierre; Verfaillie, Catherine; Raitano, Susanna; Kumar, Anujith; Carvalho, Claudia M.B.; Bagni, Claudia; Ramocki, Melissa B.; Araujo, Bruno H. S.; Torres, Laila B.; Lupski, James R.; Van Esch, Hilde; Muotri, Alysson R.

    2015-01-01

    Increased dosage of MeCP2 results in a dramatic neurodevelopmental phenotype with onset at birth. We generated induced pluripotent stem cells (iPSC) from patients with the MECP2 duplication syndrome (MECP2dup), carrying different duplication sizes, to study the impact of increased MeCP2 dosage in human neurons. We show that cortical neurons derived from these different MECP2dup iPSC lines have increased synaptogenesis and dendritic complexity. Additionally, using multi-electrode arrays, we show that neuronal network synchronization was altered in MECP2dup-derived neurons. Given MeCP2 function at the epigenetic level, we tested if these alterations were reversible using a library of compounds with defined activity on epigenetic pathways. One histone deacetylase inhibitor, NCH-51, was validated as a potential clinical candidate. Interestingly, this compound has never been considered before as a therapeutic alternative for neurological disorders. Our model recapitulates early stages of the human MECP2 duplication syndrome and represents a promising cellular tool to facilitate therapeutic drug screening for severe neurodevelopmental disorders. PMID:26347316

  4. Alterations of cortical GABA neurons and network oscillations in schizophrenia.

    PubMed

    Gonzalez-Burgos, Guillermo; Hashimoto, Takanori; Lewis, David A

    2010-08-01

    The hypothesis that alterations of cortical inhibitory gamma-aminobutyric acid (GABA) neurons are a central element in the pathology of schizophrenia has emerged from a series of postmortem studies. How such abnormalities may contribute to the clinical features of schizophrenia has been substantially informed by a convergence with basic neuroscience studies revealing complex details of GABA neuron function in the healthy brain. Importantly, activity of the parvalbumin-containing class of GABA neurons has been linked to the production of cortical network oscillations. Furthermore, growing knowledge supports the concept that gamma band oscillations (30-80 Hz) are an essential mechanism for cortical information transmission and processing. Herein we review recent studies further indicating that inhibition from parvalbumin-positive GABA neurons is necessary to produce gamma oscillations in cortical circuits; provide an update on postmortem studies documenting that deficits in the expression of glutamic acid decarboxylase67, which accounts for most GABA synthesis in the cortex, are widely observed in schizophrenia; and describe studies using novel, noninvasive approaches directly assessing potential relations between alterations in GABA, oscillations, and cognitive function in schizophrenia.

  5. MicroRNA-181 promotes synaptogenesis and attenuates axonal outgrowth in cortical neurons

    PubMed Central

    Kos, Aron; Olde Loohuis, Nikkie; Meinhardt, Julia; van Bokhoven, Hans; Kaplan, Barry B; Martens, Gerard; Aschrafi, Armaz

    2016-01-01

    MicroRNAs (miRs) are non-coding gene transcripts abundantly expressed in both the developing and adult mammalian brain. They act as important modulators of complex gene regulatory networks during neuronal development and plasticity. miR-181c is highly abundant in cerebellar cortex and its expression is increased in autism patients as well as in an animal model of autism. To systematically identify putative targets of miR-181c, we repressed this miR in growing cortical neurons and found over 70 differentially expressed target genes using transcriptome profiling. Pathway analysis showed that the miR-181c-modulated genes converge on signaling cascades relevant to neurite and synapse developmental processes. To experimentally examine the significance of these data, we inhibited miR-181c during rat cortical neuronal maturation in vitro; this loss of miR-181c function resulted in enhanced neurite sprouting and reduced synaptogenesis. Collectively, our findings suggest that miR-181c is a modulator of gene networks associated with cortical neuronal maturation. PMID:27017280

  6. Dynamical responses to external stimuli for both cases of excitatory and inhibitory synchronization in a complex neuronal network.

    PubMed

    Kim, Sang-Yoon; Lim, Woochang

    2017-10-01

    For studying how dynamical responses to external stimuli depend on the synaptic-coupling type, we consider two types of excitatory and inhibitory synchronization (i.e., synchronization via synaptic excitation and inhibition) in complex small-world networks of excitatory regular spiking (RS) pyramidal neurons and inhibitory fast spiking (FS) interneurons. For both cases of excitatory and inhibitory synchronization, the effects of synaptic couplings on dynamical responses to external time-periodic stimuli S(t) (applied to a fraction of neurons) are investigated by varying the driving amplitude A of S(t). Stimulated neurons are phase-locked to the external stimuli for both cases of excitatory and inhibitory couplings. On the other hand, the stimulation effect on non-stimulated neurons depends on the type of synaptic coupling. The external stimulus S(t) makes a constructive effect on excitatory non-stimulated RS neurons (i.e., it causes external phase lockings in the non-stimulated sub-population), while S(t) makes a destructive effect on inhibitory non-stimulated FS interneurons (i.e., it breaks up the original inhibitory synchronization in the non-stimulated sub-population). As a result of these different effects of S(t), the type and degree of dynamical response (e.g., synchronization enhancement or suppression), characterized by the dynamical response factor [Formula: see text] (given by the ratio of the synchronization degree in the presence and absence of stimulus), are found to vary in a distinctly different way depending on the synaptic-coupling type. Furthermore, we also measure the matching degree between the dynamics of the two sub-populations of stimulated and non-stimulated neurons in terms of a "cross-correlation" measure [Formula: see text]. With increasing A, based on [Formula: see text], we discuss the cross-correlations between the two sub-populations, affecting the dynamical responses to S(t).

  7. Effect of edge pruning on structural controllability and observability of complex networks

    PubMed Central

    Mengiste, Simachew Abebe; Aertsen, Ad; Kumar, Arvind

    2015-01-01

    Controllability and observability of complex systems are vital concepts in many fields of science. The network structure of the system plays a crucial role in determining its controllability and observability. Because most naturally occurring complex systems show dynamic changes in their network connectivity, it is important to understand how perturbations in the connectivity affect the controllability of the system. To this end, we studied the control structure of different types of artificial, social and biological neuronal networks (BNN) as their connections were progressively pruned using four different pruning strategies. We show that the BNNs are more similar to scale-free networks than to small-world networks, when comparing the robustness of their control structure to structural perturbations. We introduce a new graph descriptor, ‘the cardinality curve’, to quantify the robustness of the control structure of a network to progressive edge pruning. Knowing the susceptibility of control structures to different pruning methods could help design strategies to destroy the control structures of dangerous networks such as epidemic networks. On the other hand, it could help make useful networks more resistant to edge attacks. PMID:26674854
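
    Structural controllability is commonly quantified by the minimum number of driver nodes, obtained from a maximum matching on the bipartite representation of the directed graph (the Liu et al. formulation). The sketch below applies this to a random directed stand-in graph as edges are pruned; the graph, pruning order and sizes are illustrative, not the networks analysed in the paper.

        import networkx as nx

        def n_driver_nodes(G):
            """Minimum driver-node count via maximum matching on the bipartite
            out-copy/in-copy representation of a directed graph."""
            B = nx.Graph()
            left = {f"{u}+" for u in G}
            B.add_nodes_from(left, bipartite=0)
            B.add_nodes_from((f"{v}-" for v in G), bipartite=1)
            B.add_edges_from((f"{u}+", f"{v}-") for u, v in G.edges)
            matching = nx.bipartite.maximum_matching(B, top_nodes=left)
            matched = len(matching) // 2                 # dict stores both directions
            return max(len(G) - matched, 1)

        G = nx.gnp_random_graph(100, 0.05, directed=True, seed=4)
        for frac in (0.0, 0.25, 0.5, 0.75):
            H = G.copy()
            H.remove_edges_from(list(H.edges)[: int(frac * G.number_of_edges())])
            print(f"pruned {int(frac*100)}% of edges -> {n_driver_nodes(H)} driver nodes")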

  8. Synaptic transmission at functionally identified synapses in the enteric nervous system: roles for both ionotropic and metabotropic receptors.

    PubMed

    Gwynne, R M; Bornstein, J C

    2007-03-01

    Digestion and absorption of nutrients and the secretion and reabsorption of fluid in the gastrointestinal tract are regulated by neurons of the enteric nervous system (ENS), the extensive peripheral nerve network contained within the intestinal wall. The ENS is an important physiological model for the study of neural networks since it is both complex and accessible. At least 20 different neurochemically and functionally distinct classes of enteric neurons have been identified in the guinea pig ileum. These neurons express a wide range of ionotropic and metabotropic receptors. Synaptic potentials mediated by ionotropic receptors such as the nicotinic acetylcholine receptor, P2X purinoceptors and 5-HT(3) receptors are seen in many enteric neurons. However, prominent synaptic potentials mediated by metabotropic receptors, like the P2Y(1) receptor and the NK(1) receptor, are also seen in these neurons. Studies of synaptic transmission between the different neuron classes within the enteric neural pathways have shown that both ionotropic and metabotropic synaptic potentials play major roles at distinct synapses within simple reflex pathways. However, there are still functional synapses at which no known transmitter or receptor has been identified. This review describes the identified roles for both ionotropic and metabotropic neurotransmission at functionally defined synapses within the guinea pig ileum ENS. It is concluded that metabotropic synaptic potentials act as primary transmitters at some synapses. It is suggested that identification of the interactions between different synaptic potentials in the production of complex behaviours will require the use of well validated computer models of the enteric neural circuitry.

  9. Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle

    PubMed Central

    Isomura, Takuya; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-01-01

    Blind source separation is the computation underlying the cocktail party effect: a partygoer can distinguish a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception and numerous mathematical models have been proposed; however, it remains unclear how the neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeated training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free energy principle. PMID:26690814
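
    As a purely in-silico reference point for what blind source separation accomplishes, the sketch below unmixes two synthetic sources with a standard ICA algorithm (scikit-learn's FastICA). It illustrates only the computation itself; it implements neither the cultured-network experiment nor the free-energy learning rule, and the sources, mixing matrix, and noise level are arbitrary assumptions.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(2)
        t = np.linspace(0, 8, 4000)

        # two hidden sources (the "talkers")
        s1 = np.sign(np.sin(3 * t))              # square wave
        s2 = np.sin(7 * t)                       # sine wave
        S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

        # each "sensor" records a different mixture of the two sources
        A = np.array([[1.0, 0.6],
                      [0.4, 1.0]])
        X = S @ A.T

        ica = FastICA(n_components=2, random_state=0)
        S_hat = ica.fit_transform(X)             # recovered sources (up to sign/scale)

        # correlate each recovered component with each true source
        corr = np.corrcoef(np.c_[S, S_hat], rowvar=False)[:2, 2:]
        print(np.round(np.abs(corr), 2))         # near a permutation of the identity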

  10. A regulatory network to segregate the identity of neuronal subtypes.

    PubMed

    Lee, Seunghee; Lee, Bora; Joshi, Kaumudi; Pfaff, Samuel L; Lee, Jae W; Lee, Soo-Kyung

    2008-06-01

    Spinal motor neurons (MNs) and V2 interneurons (V2-INs) are specified by two related LIM-complexes, MN-hexamer and V2-tetramer, respectively. Here we show how multiple parallel and complementary feedback loops are integrated to assign these two cell fates accurately. While MN-hexamer response elements (REs) are specific to MN-hexamer, V2-tetramer-REs can bind both LIM-complexes. In embryonic MNs, however, two factors cooperatively suppress the aberrant activation of V2-tetramer-REs. First, LMO4 blocks V2-tetramer assembly. Second, MN-hexamer induces a repressor, Hb9, which binds V2-tetramer-REs and suppresses their activation. V2-INs use a similar approach; V2-tetramer induces a repressor, Chx10, which binds MN-hexamer-REs and blocks their activation. Thus, our study uncovers a regulatory network to segregate related cell fates, which involves reciprocal feedforward gene regulatory loops.

  11. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.

    PubMed

    Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S

    2014-08-08

    Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.

  12. Cell Autonomy and Synchrony of Suprachiasmatic Nucleus Circadian Oscillators

    PubMed Central

    Mohawk, Jennifer A.; Takahashi, Joseph S.

    2013-01-01

    The suprachiasmatic nucleus (SCN) of the hypothalamus is the site of the master circadian pacemaker in mammals. The individual cells of the SCN are capable of functioning independently from one another and therefore must form a cohesive circadian network through intercellular coupling. The network properties of the SCN lead to coordination of circadian rhythms among its neurons and neuronal subpopulations. There is increasing evidence for multiple interconnected oscillators within the SCN, and in this Review, we will highlight recent advances in our understanding of the complex organization and function of the cellular and network-level SCN clock. Understanding the way in which synchrony is achieved between cells in the SCN will provide insight into the means by which this important nucleus orchestrates circadian rhythms throughout the organism. PMID:21665298

  13. Exact solutions for rate and synchrony in recurrent networks of coincidence detectors.

    PubMed

    Mikula, Shawn; Niebur, Ernst

    2008-11-01

    We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity, with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations.
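
    A toy version of the discrete-time Markov-chain bookkeeping is sketched below for a tiny all-to-all network of coincidence detectors with refractoriness, driven by independent Bernoulli input channels; the stationary distribution then yields mean rates and pairwise covariances. The network size, threshold, refractory rule, and input statistics are illustrative assumptions, not the paper's exact model.

        import itertools
        import numpy as np

        N = 3          # coincidence-detector neurons, all-to-all excitatory
        theta = 2      # a neuron fires when >= theta coincident inputs arrive
        p_ext = 0.4    # each neuron has 2 external channels, each active w.p. p_ext
        p_ch = [(1 - p_ext) ** 2, 2 * p_ext * (1 - p_ext), p_ext ** 2]  # P(0,1,2 active)

        states = list(itertools.product([0, 1], repeat=N))
        index = {s: i for i, s in enumerate(states)}
        P = np.zeros((len(states), len(states)))

        for s in states:
            for ext in itertools.product([0, 1, 2], repeat=N):  # external coincidences
                p = np.prod([p_ch[e] for e in ext])
                # a neuron fires if it is not refractory (did not fire last step)
                # and its recurrent plus external coincidences reach threshold
                nxt = tuple(
                    1 if s[i] == 0 and (sum(s) - s[i]) + ext[i] >= theta else 0
                    for i in range(N)
                )
                P[index[s], index[nxt]] += p

        # stationary distribution: left eigenvector of P for eigenvalue 1
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        pi /= pi.sum()

        S = np.array(states, dtype=float)
        rates = pi @ S                                           # mean firing rates
        cov = (pi[:, None] * S).T @ S - np.outer(rates, rates)   # zero-lag covariances
        print("mean rates:", np.round(rates, 3))
        print("pairwise covariances:\n", np.round(cov, 3))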

  14. Polarity-specific high-level information propagation in neural networks.

    PubMed

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.
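
    The sketch below shows one crude way to make the two directions of propagation concrete on a Watts-Strogatz graph: shortest-path efficiencies from designated input nodes to designated output nodes ("vertical") and from one input node to all outputs ("horizontal"). These shortest-path measures are assumptions standing in for the paper's high-level, recurrent-path propagation measures and will not reproduce the reported trade-off; they are only meant to show how such quantities can be computed.

        import numpy as np
        import networkx as nx

        def propagation_efficiencies(G, inputs, outputs):
            """Simplified stand-in for the paper's measures:
            vertical   = mean inverse path length from each input to its
                         matched output (input i -> output i);
            horizontal = mean inverse path length from one input to all
                         outputs (how widely its signal is shared)."""
            lengths = dict(nx.all_pairs_shortest_path_length(G))
            vert = np.mean([1.0 / lengths[i][o] if o in lengths[i] else 0.0
                            for i, o in zip(inputs, outputs)])
            horiz = np.mean([1.0 / lengths[inputs[0]][o]
                             if o in lengths[inputs[0]] else 0.0
                             for o in outputs])
            return vert, horiz

        N, k = 200, 6
        inputs = list(range(0, 10))          # designate 10 "input" nodes
        outputs = list(range(N - 10, N))     # and 10 "output" nodes

        for p in [0.0, 0.05, 0.2, 1.0]:      # rewiring probability
            G = nx.watts_strogatz_graph(N, k, p, seed=3)
            v, h = propagation_efficiencies(G, inputs, outputs)
            print(f"p={p:4.2f}  vertical={v:.3f}  horizontal={h:.3f}")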

  15. Polarity-specific high-level information propagation in neural networks

    PubMed Central

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals. PMID:24672472

  16. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    PubMed

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
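
    A minimal sketch of the Jacobian-based viewpoint, assuming the common discrete-time rate map x_{t+1} = tanh(g W x_t) with a fixed random coupling matrix and no learning: the largest Lyapunov exponent is estimated by propagating a tangent vector through the Jacobians, and sweeping the gain g moves the exponent through zero, the regime highlighted above. The map, gain sweep, and parameters are illustrative assumptions, not the paper's learning model.

        import numpy as np

        rng = np.random.default_rng(4)

        def largest_lyapunov(W, g, T=5000, burn=500):
            """Largest Lyapunov exponent of x_{t+1} = tanh(g * W x_t), estimated
            by pushing a tangent vector u through the Jacobians
            J_t = diag(1 - x_{t+1}^2) * g W and renormalizing each step."""
            N = W.shape[0]
            x = 0.1 * rng.standard_normal(N)
            u = rng.standard_normal(N)
            u /= np.linalg.norm(u)
            acc = 0.0
            for t in range(T):
                x = np.tanh(g * W @ x)
                u = (1.0 - x ** 2) * (g * (W @ u))   # apply J_t without forming it
                norm = np.linalg.norm(u)
                u /= norm
                if t >= burn:                        # skip the transient
                    acc += np.log(norm)
            return acc / (T - burn)

        N = 200
        W = rng.standard_normal((N, N)) / np.sqrt(N)  # random synaptic matrix

        for g in [0.5, 1.0, 1.5, 2.0]:                # gain sweep across the transition
            print(f"g={g:.1f}  largest Lyapunov exponent ~ {largest_lyapunov(W, g):+.3f}")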

  17. Brain modularity controls the critical behavior of spontaneous activity.

    PubMed

    Russo, R; Herrmann, H J; de Arcangelis, L

    2014-03-13

    The human brain exhibits a complex structure made of scale-free, highly connected modules loosely interconnected by weaker links to form a small-world network. These features appear in healthy patients whereas neurological diseases often modify this structure. An important open question concerns the role of brain modularity in sustaining the critical behaviour of spontaneous activity. Here we analyse the neuronal activity of a model, successful in reproducing on non-modular networks the scaling behaviour observed in experimental data, on a modular network implementing the main statistical features measured in the human brain. We show that on a modular network, regardless of the strength of the synaptic connections or the modular size and number, activity is never fully scale-free. Neuronal avalanches can invade different modules, which results in activity depression, hindering further avalanche propagation. Critical behaviour is recovered only if inter-module connections are added, modifying the modular into a more random structure.
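
    A simplified illustration of how modularity confines avalanches is sketched below: a probabilistic cascade is run on a two-module stochastic block model, with and without additional inter-module links. The cascade rule, transmission probability, and block parameters are assumptions chosen for illustration and are much cruder than the activity model analysed in the paper.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(5)

        def avalanche_sizes(G, p_transmit=0.17, n_trials=2000):
            """Sample avalanche sizes from a simple probabilistic cascade: a
            random seed node activates, and every active node activates each
            not-yet-active neighbour with probability p_transmit."""
            nodes = list(G.nodes())
            sizes = []
            for _ in range(n_trials):
                active = {nodes[rng.integers(len(nodes))]}
                frontier = set(active)
                while frontier:
                    new = set()
                    for u in frontier:
                        for v in G.neighbors(u):
                            if v not in active and rng.random() < p_transmit:
                                new.add(v)
                    active |= new
                    frontier = new
                sizes.append(len(active))
            return np.array(sizes)

        block_sizes = [100, 100]
        probs_modular = [[0.06, 0.0002],      # dense within, very sparse between
                         [0.0002, 0.06]]
        probs_mixed = [[0.06, 0.01],          # extra inter-module links added
                       [0.01, 0.06]]

        for label, probs in [("modular", probs_modular),
                             ("inter-module links added", probs_mixed)]:
            G = nx.stochastic_block_model(block_sizes, probs, seed=5)
            s = avalanche_sizes(G)
            print(f"{label:26s} mean size {s.mean():6.1f}   "
                  f"fraction spanning both modules {(s > 120).mean():.3f}")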

  18. Memory replay in balanced recurrent networks

    PubMed Central

    Chenkov, Nikolay; Sprekeler, Henning; Kempter, Richard

    2017-01-01

    Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously by activity fluctuations. Global—potentially neuromodulatory—alterations of neuronal excitability can switch between network states that favor retrieval and consolidation. PMID:28135266

  19. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity.

    PubMed

    Zhang, Xiaoyu; Ju, Han; Penney, Trevor B; VanDongen, Antonius M J

    2017-01-01

    Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher's discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.

  20. Stochastic Computations in Cortical Microcircuit Models

    PubMed Central

    Maass, Wolfgang

    2013-01-01

    Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126

  1. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity

    PubMed Central

    2017-01-01

    Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits. PMID:28534043

  2. Mean-field equations for neuronal networks with arbitrary degree distributions.

    PubMed

    Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
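
    Generating a network whose in- and out-degrees are drawn from a correlated joint distribution is the starting point for such studies; a sketch using networkx's directed configuration model is given below. The particular joint distribution (a shared Poisson component inducing positive in/out correlation) is an illustrative assumption, and the mean-field equations themselves are not reproduced here.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(6)
        N = 1000

        # correlated (in, out) degree pairs: a shared Poisson component gives
        # each neuron positively correlated in- and out-degrees
        shared = rng.poisson(3, N)
        k_in = shared + rng.poisson(2, N)
        k_out = shared + rng.poisson(2, N)

        # both sequences must sum to the same number of edge stubs;
        # patch any difference onto a single node
        diff = int(k_in.sum() - k_out.sum())
        if diff > 0:
            k_out[0] += diff
        else:
            k_in[0] += -diff

        G = nx.directed_configuration_model(k_in.tolist(), k_out.tolist(), seed=6)
        G = nx.DiGraph(G)                               # collapse parallel edges
        G.remove_edges_from(list(nx.selfloop_edges(G)))

        deg = np.array([(G.in_degree(n), G.out_degree(n)) for n in G.nodes()])
        rho = np.corrcoef(deg[:, 0], deg[:, 1])[0, 1]
        print(f"realized in/out degree correlation: {rho:.2f}")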

  3. Mean-field equations for neuronal networks with arbitrary degree distributions

    NASA Astrophysics Data System (ADS)

    Nykamp, Duane Q.; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.

  4. A Neuronal Network Model for Pitch Selectivity and Representation

    PubMed Central

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions. PMID:27378900
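
    The idea of reading pitch from first-order inter-event intervals can be illustrated with a very crude stand-in for the slope-detector stage: pick out only the largest peaks of a missing-fundamental complex and take the median interval between them. The threshold-based peak picking, stimulus, and parameters below are assumptions for illustration, not the biophysical coincidence-detector model described above.

        import numpy as np
        from scipy.signal import find_peaks

        fs = 20000                         # sampling rate (Hz)
        t = np.arange(0, 0.2, 1 / fs)
        f0 = 200.0                         # the "missing" fundamental

        # missing-fundamental complex: harmonics 3, 4 and 5 only
        x = sum(np.cos(2 * np.pi * h * f0 * t) for h in (3, 4, 5))

        # crude stand-in for a slope/coincidence detector: keep only the
        # largest peaks of the waveform (one per fundamental period)
        peaks, _ = find_peaks(x, height=2.5)
        intervals = np.diff(t[peaks])      # first-order inter-event intervals

        pitch_estimate = 1.0 / np.median(intervals)
        print(f"estimated pitch: {pitch_estimate:.1f} Hz (true fundamental {f0} Hz)")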

  5. A Neuronal Network Model for Pitch Selectivity and Representation.

    PubMed

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions.

  6. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  7. Functional Interactions between Mammalian Respiratory Rhythmogenic and Premotor Circuitry

    PubMed Central

    Song, Hanbing; Hayes, John A.; Vann, Nikolas C.; Wang, Xueying; LaMar, M. Drew

    2016-01-01

    Breathing in mammals depends on rhythms that originate from the preBötzinger complex (preBötC) of the ventral medulla and a network of brainstem and spinal premotor neurons. The rhythm-generating core of the preBötC, as well as some premotor circuits, consist of interneurons derived from Dbx1-expressing precursors (Dbx1 neurons), but the structure and function of these networks remain incompletely understood. We previously developed a cell-specific detection and laser ablation system to interrogate respiratory network structure and function in a slice model of breathing that retains the preBötC, the respiratory-related hypoglossal (XII) motor nucleus and XII premotor circuits. In spontaneously rhythmic slices, cumulative ablation of Dbx1 preBötC neurons decreased XII motor output by ∼50% after ∼15 cell deletions, and then decelerated and terminated rhythmic function altogether as the tally increased to ∼85 neurons. In contrast, cumulatively deleting Dbx1 XII premotor neurons decreased motor output monotonically but did not affect frequency nor stop XII output regardless of the ablation tally. Here, we couple an existing preBötC model with a premotor population in several topological configurations to investigate which one may replicate the laser ablation experiments best. If the XII premotor population is a “small-world” network (rich in local connections with sparse long-range connections among constituent premotor neurons) and connected with the preBötC such that the total number of incoming synapses remains fixed, then the in silico system successfully replicates the in vitro laser ablation experiments. This study proposes a feasible configuration for circuits consisting of Dbx1-derived interneurons that generate inspiratory rhythm and motor pattern. SIGNIFICANCE STATEMENT To produce a breathing-related motor pattern, a brainstem core oscillator circuit projects to a population of premotor interneurons, but the assemblage of this network remains incompletely understood. Here we applied network modeling and numerical simulation to discover respiratory circuit configurations that successfully replicate photonic cell ablation experiments targeting either the core oscillator or premotor network, respectively. If premotor neurons are interconnected in a so-called “small-world” network with a fixed number of incoming synapses balanced between premotor and rhythmogenic neurons, then our simulations match their experimental benchmarks. These results provide a framework of experimentally testable predictions regarding the rudimentary structure and function of respiratory rhythm- and pattern-generating circuits in the brainstem of mammals. PMID:27383596

  8. Bifurcation of synchronous oscillations into torus in a system of two reciprocally inhibitory silicon neurons: experimental observation and modeling.

    PubMed

    Bondarenko, Vladimir E; Cymbalyuk, Gennady S; Patel, Girish; Deweerth, Stephen P; Calabrese, Ronald L

    2004-12-01

    Oscillatory activity in the central nervous system is associated with various functions, like motor control, memory formation, binding, and attention. Quasiperiodic oscillations are rarely discussed in the neurophysiological literature yet they may play a role in the nervous system both during normal function and disease. Here we use a physical system and a model to explore scenarios for how quasiperiodic oscillations might arise in neuronal networks. An oscillatory system of two mutually inhibitory neuronal units is a ubiquitous network module found in nervous systems and is called a half-center oscillator. Previously we created a half-center oscillator of two identical oscillatory silicon (analog Very Large Scale Integration) neurons and developed a mathematical model describing its dynamics. In the mathematical model, we have shown that an in-phase limit cycle becomes unstable through a subcritical torus bifurcation. However, the existence of this torus bifurcation in experimental silicon two-neuron system was not rigorously demonstrated or investigated. Here we demonstrate the torus predicted by the model for the silicon implementation of a half-center oscillator using complex time series analysis, including bifurcation diagrams, mapping techniques, correlation functions, amplitude spectra, and correlation dimensions, and we investigate how the properties of the quasiperiodic oscillations depend on the strengths of coupling between the silicon neurons. The potential advantages and disadvantages of quasiperiodic oscillations (torus) for biological neural systems and artificial neural networks are discussed.

  9. Remodeling Functional Connectivity in Multiple Sclerosis: A Challenging Therapeutic Approach.

    PubMed

    Stampanoni Bassi, Mario; Gilio, Luana; Buttari, Fabio; Maffei, Pierpaolo; Marfia, Girolama A; Restivo, Domenico A; Centonze, Diego; Iezzi, Ennio

    2017-01-01

    Neurons in the central nervous system are organized in functional units interconnected to form complex networks. Acute and chronic brain damage disrupts brain connectivity producing neurological signs and/or symptoms. In several neurological diseases, particularly in Multiple Sclerosis (MS), structural imaging studies cannot always demonstrate a clear association between lesion site and clinical disability, giving rise to the "clinico-radiological paradox." The discrepancy between structural damage and disability can be explained by a complex network perspective. Both brain network architecture and synaptic plasticity may play important roles in modulating brain network efficiency after brain damage. In particular, long-term potentiation (LTP) may occur in surviving neurons to compensate for network disconnection. In MS, inflammatory cytokines dramatically interfere with synaptic transmission and plasticity. Importantly, in addition to acute and chronic structural damage, inflammation could contribute to reducing brain network efficiency in MS, leading to worse clinical recovery after a relapse and worse disease progression. This evidence suggests that removing inflammation should be the main therapeutic target in MS; moreover, as synaptic plasticity is particularly altered by inflammation, specific strategies aimed at promoting LTP mechanisms could be effective for enhancing clinical recovery. Modulation of plasticity with different non-invasive brain stimulation (NIBS) techniques has been used to promote recovery of MS symptoms. Better knowledge of the features inducing brain disconnection in MS is crucial to design specific strategies to promote recovery and use NIBS with an increasingly tailored approach.

  10. Inter-synaptic learning of combination rules in a cortical network model

    PubMed Central

    Lavigne, Frédéric; Avnaïm, Francis; Dumercy, Laurent

    2014-01-01

    Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. The IS learning is effective with random connectivity and without either a priori wiring or additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the sole correlational structure of the stimuli and responses. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli trigger XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network. PMID:25221529
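
    The non-linear separability issue raised above is easy to make concrete: a single linear readout cannot learn XOR from the two stimuli alone, but adding one multiplicative "dendritic" feature (the product of the two inputs) makes the task linearly separable. The logistic-regression sketch below illustrates only this separability argument; it is not the inter-synaptic learning rule itself, and all names and parameters are illustrative.

        import numpy as np

        # XOR-like task: respond to (A and not B) or (B and not A)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0.0, 1.0, 1.0, 0.0])

        def fit_logistic(features, y, lr=0.5, epochs=20000):
            """Plain logistic regression (a single linear 'point neuron');
            returns the fitted response probabilities for each pattern."""
            F = np.c_[features, np.ones(len(features))]   # add a bias column
            w = np.zeros(F.shape[1])
            for _ in range(epochs):
                p = 1.0 / (1.0 + np.exp(-F @ w))
                w += lr * F.T @ (y - p) / len(y)          # gradient ascent on log-likelihood
            return 1.0 / (1.0 + np.exp(-F @ w))

        # 1) linear readout of the raw stimuli: the optimum is p = 0.5 for every
        #    pattern, because XOR is not linearly separable
        print("linear unit:     ", np.round(fit_logistic(X, y), 2))

        # 2) one extra 'dendritic' feature, the product x1*x2, standing in for a
        #    nonlinear interaction between co-active synapses on the same dendrite
        X_dendrite = np.c_[X, X[:, 0] * X[:, 1]]
        print("with x1*x2 term: ", np.round(fit_logistic(X_dendrite, y), 2))
        print("targets:         ", y)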

  11. Making sense out of spinal cord somatosensory development

    PubMed Central

    Seal, Rebecca P.

    2016-01-01

    The spinal cord integrates and relays somatosensory input, leading to complex motor responses. Research over the past couple of decades has identified transcription factor networks that function during development to define and instruct the generation of diverse neuronal populations within the spinal cord. A number of studies have now started to connect these developmentally defined populations with their roles in somatosensory circuits. Here, we review our current understanding of how neuronal diversity in the dorsal spinal cord is generated and we discuss the logic underlying how these neurons form the basis of somatosensory circuits. PMID:27702783

  12. Global cluster synchronization in nonlinearly coupled community networks with heterogeneous coupling delays.

    PubMed

    Tseng, Jui-Pin

    2017-02-01

    This investigation establishes the global cluster synchronization of complex networks with a community structure based on an iterative approach. The units comprising the network are described by differential equations, and can be non-autonomous and involve time delays. In addition, units in the different communities can be governed by different equations. The coupling configuration of the network is rather general. The coupling terms can be non-diffusive, nonlinear, asymmetric, and with heterogeneous coupling delays. Based on this approach, both delay-dependent and delay-independent criteria for global cluster synchronization are derived. We implement the present approach for a nonlinearly coupled neural network with heterogeneous coupling delays. Two numerical examples are given to show that neural networks can behave in a variety of new collective ways under the synchronization criteria. These examples also demonstrate that neural networks remain synchronized in spite of coupling delays between neurons across different communities; however, they may lose synchrony if the coupling delays between the neurons within the same community are too large, such that the synchronization criteria are violated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A Functionally Conserved Gene Regulatory Network Module Governing Olfactory Neuron Diversity.

    PubMed

    Li, Qingyun; Barish, Scott; Okuwa, Sumie; Maciejewski, Abigail; Brandt, Alicia T; Reinhold, Dominik; Jones, Corbin D; Volkan, Pelin Cayirlioglu

    2016-01-01

    Sensory neuron diversity is required for organisms to decipher complex environmental cues. In Drosophila, the olfactory environment is detected by 50 different olfactory receptor neuron (ORN) classes that are clustered in combinations within distinct sensilla subtypes. Each sensilla subtype houses a stereotyped cluster of 1-4 ORN identities that arise through asymmetric divisions from a single multipotent sensory organ precursor (SOP). How each class of SOPs acquires a unique differentiation potential that accounts for ORN diversity is unknown. Previously, we reported that Rotund (Rn), a critical component of the SOP diversification program, increases ORN diversity by generating novel developmental trajectories from existing precursors within each independent sensilla-type lineage. Here, we show that Rn, along with BarH1/H2 (Bar), Bric-à-brac (Bab), Apterous (Ap) and Dachshund (Dac), constitutes a transcription factor (TF) network that patterns the developing olfactory tissue. This network was previously shown to pattern the segmentation of the leg, which suggests that this network is functionally conserved. In antennal imaginal discs, precursors with diverse ORN differentiation potentials are selected from concentric rings defined by unique combinations of these TFs along the proximodistal axis of the developing antennal disc. The combinatorial code that demarcates each precursor field is set up by cross-regulatory interactions among different factors within the network. Modifications of this network lead to predictable changes in the diversity of sensilla subtypes and ORN pools. In light of our data, we propose a molecular map that defines each unique SOP fate. Our results highlight the importance of the early prepatterning gene regulatory network as a modulator of SOP and terminally differentiated ORN diversity. Finally, our model illustrates how conserved developmental strategies are used to generate neuronal diversity.

  14. Decreased pyramidal neuron size in Brodmann areas 44 and 45 in patients with autism.

    PubMed

    Jacot-Descombes, Sarah; Uppal, Neha; Wicinski, Bridget; Santos, Micaela; Schmeidler, James; Giannakopoulos, Panteleimon; Heinsen, Helmut; Schmitz, Christoph; Hof, Patrick R

    2012-07-01

    Autism is a neurodevelopmental disorder characterized by deficits in social interaction and social communication, as well as by the presence of repetitive and stereotyped behaviors and interests. Brodmann areas 44 and 45 in the inferior frontal cortex, which are involved in language processing, imitation function, and sociality processing networks, have been implicated in this complex disorder. Using a stereologic approach, this study aims to explore the presence of neuropathological differences in areas 44 and 45 in patients with autism compared to age- and hemisphere-matched controls. Based on previous evidence in the fusiform gyrus, we expected to find a decrease in the number and size of pyramidal neurons as well as an increase in volume of layers III, V, and VI in patients with autism. We observed significantly smaller pyramidal neurons in patients with autism compared to controls, although there was no difference in pyramidal neuron numbers or layer volumes. The reduced pyramidal neuron size suggests that a certain degree of dysfunction of areas 44 and 45 plays a role in the pathology of autism. Our results also support previous studies that have shown specific cellular neuropathology in autism with regionally specific reduction in neuron size, and provide further evidence for the possible involvement of the mirror neuron system, as well as impairment of neuronal networks relevant to communication and social behaviors, in this disorder.

  15. Analyzing neuronal networks using discrete-time dynamics

    NASA Astrophysics Data System (ADS)

    Ahn, Sungwoo; Smith, Brian H.; Borisyuk, Alla; Terman, David

    2010-05-01

    We develop mathematical techniques for analyzing detailed Hodgkin-Huxley like models for excitatory-inhibitory neuronal networks. Our strategy for studying a given network is to first reduce it to a discrete-time dynamical system. The discrete model is considerably easier to analyze, both mathematically and computationally, and parameters in the discrete model correspond directly to parameters in the original system of differential equations. While these networks arise in many important applications, a primary focus of this paper is to better understand mechanisms that underlie temporally dynamic responses in early processing of olfactory sensory information. The models presented here exhibit several properties that have been described for olfactory codes in an insect’s Antennal Lobe. These include transient patterns of synchronization and decorrelation of sensory inputs. By reducing the model to a discrete system, we are able to systematically study how properties of the dynamics, including the complex structure of the transients and attractors, depend on factors related to connectivity and the intrinsic and synaptic properties of cells within the network.

  16. Dynamical estimation of neuron and network properties III: network analysis using neuron spike times.

    PubMed

    Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2014-06-01

    Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.

  17. Alterations of neurochemical expression of the coeliac-superior mesenteric ganglion complex (CSMG) neurons supplying the prepyloric region of the porcine stomach following partial stomach resection.

    PubMed

    Palus, Katarzyna; Całka, Jarosław

    2016-03-01

    The purpose of the present study was to determine the response of the porcine coeliac-superior mesenteric ganglion complex (CSMG) neurons projecting to the prepyloric area of the porcine stomach to peripheral neuronal damage following partial stomach resection. To identify the sympathetic neurons innervating the studied area of stomach, the neuronal retrograde tracer Fast Blue (FB) was applied to control and partial stomach resection (RES) groups. On the 22nd day after FB injection, following laparotomy, the partial resection of the previously FB-injected stomach prepyloric area was performed in animals of RES group. On the 28th day, all animals were re-anaesthetized and euthanized. The CSMG complex was then collected and processed for double-labeling immunofluorescence. In control animals, retrograde-labelled perikarya were immunoreactive to tyrosine hydroxylase (TH), dopamine β-hydroxylase (DβH), neuropeptide Y (NPY) and galanin (GAL). Partial stomach resection decreased the numbers of FB-positive neurons immunopositive for TH and DβH. However, the strong increase of NPY and GAL expression, as well as de novo-synthesis of neuronal nitric oxide synthase (nNOS) and leu5-Enkephalin (LENK) was noted in studied neurons. Furthermore, FB-positive neurons in all pigs were surrounded by a network of cocaine- and amphetamine-regulated transcript peptide (CART)-, calcitonin gene-related peptide (CGRP)-, and substance P (SP)-, vasoactive intestinal peptide (VIP)-, LENK- and nNOS- immunoreactive nerve fibers. This may suggest neuroprotective contribution of these neurotransmitters in traumatic responses of sympathetic neurons to peripheral axonal damage. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Mechanisms Underlying Adaptation of Respiratory Network Activity to Modulatory Stimuli in the Mouse Embryo.

    PubMed

    Chevalier, Marc; De Sa, Rafaël; Cardoit, Laura; Thoby-Brisson, Muriel

    2016-01-01

    Breathing is a rhythmic behavior that requires organized contractions of respiratory effector muscles. This behavior must adapt to constantly changing conditions in order to ensure homeostasis, proper body oxygenation, and CO2/pH regulation. Respiratory rhythmogenesis is controlled by neural networks located in the brainstem. One area considered to be essential for generating the inspiratory phase of the respiratory rhythm is the preBötzinger complex (preBötC). Rhythmogenesis emerges from this network through the interplay between the activation of intrinsic cellular properties (pacemaker properties) and intercellular synaptic connections. Respiratory activity continuously changes under the impact of numerous modulatory substances depending on organismal needs and environmental conditions. The preBötC network has been shown to become active during the last third of gestation. But only little is known regarding the modulation of inspiratory rhythmicity at embryonic stages and even less on a possible role of pacemaker neurons in this functional flexibility during the prenatal period. By combining electrophysiology and calcium imaging performed on embryonic brainstem slice preparations, we provide evidence showing that embryonic inspiratory pacemaker neurons are already intrinsically sensitive to neuromodulation and external conditions (i.e., temperature) affecting respiratory network activity, suggesting a potential role of pacemaker neurons in mediating rhythm adaptation to modulatory stimuli in the embryo.

  19. Mechanisms Underlying Adaptation of Respiratory Network Activity to Modulatory Stimuli in the Mouse Embryo

    PubMed Central

    Chevalier, Marc; De Sa, Rafaël; Cardoit, Laura; Thoby-Brisson, Muriel

    2016-01-01

    Breathing is a rhythmic behavior that requires organized contractions of respiratory effector muscles. This behavior must adapt to constantly changing conditions in order to ensure homeostasis, proper body oxygenation, and CO2/pH regulation. Respiratory rhythmogenesis is controlled by neural networks located in the brainstem. One area considered to be essential for generating the inspiratory phase of the respiratory rhythm is the preBötzinger complex (preBötC). Rhythmogenesis emerges from this network through the interplay between the activation of intrinsic cellular properties (pacemaker properties) and intercellular synaptic connections. Respiratory activity continuously changes under the impact of numerous modulatory substances depending on organismal needs and environmental conditions. The preBötC network has been shown to become active during the last third of gestation. But only little is known regarding the modulation of inspiratory rhythmicity at embryonic stages and even less on a possible role of pacemaker neurons in this functional flexibility during the prenatal period. By combining electrophysiology and calcium imaging performed on embryonic brainstem slice preparations, we provide evidence showing that embryonic inspiratory pacemaker neurons are already intrinsically sensitive to neuromodulation and external conditions (i.e., temperature) affecting respiratory network activity, suggesting a potential role of pacemaker neurons in mediating rhythm adaptation to modulatory stimuli in the embryo. PMID:27239348

  20. Neuronal and behavioural modulations by pathway-selective optogenetic stimulation of the primate oculomotor system

    PubMed Central

    Inoue, Ken-ichi; Takada, Masahiko; Matsumoto, Masayuki

    2015-01-01

    Optogenetics enables temporally and spatially precise control of neuronal activity in vivo. One of the key advantages of optogenetics is that it can be used to control the activity of targeted neural pathways that connect specific brain regions. While such pathway-selective optogenetic control is a popular tool in rodents, attempts at modulating behaviour using pathway-selective optogenetics have not yet been successful in primates. Here we develop a methodology for pathway-selective optogenetics in macaque monkeys, focusing on the pathway from the frontal eye field (FEF) to the superior colliculus (SC), part of the complex oculomotor network. We find that the optogenetic stimulation of FEF projections to the SC modulates SC neuron activity and is sufficient to evoke saccadic eye movements towards the response field corresponding to the stimulation site. Thus, our results demonstrate the feasibility of using pathway-selective optogenetics to elucidate neural network function in primates. PMID:26387804

  1. Cell diversity and network dynamics in photosensitive human brain organoids

    PubMed Central

    Quadrato, Giorgia; Nguyen, Tuan; Macosko, Evan Z.; Sherwood, John L.; Yang, Sung Min; Berger, Daniel; Maria, Natalie; Scholvin, Jorg; Goldman, Melissa; Kinney, Justin; Boyden, Edward S.; Lichtman, Jeff; Williams, Ziv M.; McCarroll, Steven A.; Arlotta, Paola

    2017-01-01

    In vitro models of the developing brain such as 3D brain organoids offer an unprecedented opportunity to study aspects of human brain development and disease. However, it remains undefined what cells are generated within organoids and to what extent they recapitulate the regional complexity, cellular diversity, and circuit functionality of the brain. Here, we analyzed gene expression in over 80,000 individual cells isolated from 31 human brain organoids. We find that organoids can generate a broad diversity of cells, which are related to endogenous classes, including cells from the cerebral cortex and the retina. Organoids could be developed over extended periods (over 9 months) enabling unprecedented levels of maturity including the formation of dendritic spines and of spontaneously-active neuronal networks. Finally, neuronal activity within organoids could be controlled using light stimulation of photoreceptor-like cells, which may offer ways to probe the functionality of human neuronal circuits using physiological sensory stimuli. PMID:28445462

  2. Cell diversity and network dynamics in photosensitive human brain organoids.

    PubMed

    Quadrato, Giorgia; Nguyen, Tuan; Macosko, Evan Z; Sherwood, John L; Min Yang, Sung; Berger, Daniel R; Maria, Natalie; Scholvin, Jorg; Goldman, Melissa; Kinney, Justin P; Boyden, Edward S; Lichtman, Jeff W; Williams, Ziv M; McCarroll, Steven A; Arlotta, Paola

    2017-05-04

    In vitro models of the developing brain such as three-dimensional brain organoids offer an unprecedented opportunity to study aspects of human brain development and disease. However, the cells generated within organoids and the extent to which they recapitulate the regional complexity, cellular diversity and circuit functionality of the brain remain undefined. Here we analyse gene expression in over 80,000 individual cells isolated from 31 human brain organoids. We find that organoids can generate a broad diversity of cells, which are related to endogenous classes, including cells from the cerebral cortex and the retina. Organoids could be developed over extended periods (more than 9 months), allowing for the establishment of relatively mature features, including the formation of dendritic spines and spontaneously active neuronal networks. Finally, neuronal activity within organoids could be controlled using light stimulation of photosensitive cells, which may offer a way to probe the functionality of human neuronal circuits using physiological sensory stimuli.

  3. Neuronal and behavioural modulations by pathway-selective optogenetic stimulation of the primate oculomotor system.

    PubMed

    Inoue, Ken-ichi; Takada, Masahiko; Matsumoto, Masayuki

    2015-09-21

    Optogenetics enables temporally and spatially precise control of neuronal activity in vivo. One of the key advantages of optogenetics is that it can be used to control the activity of targeted neural pathways that connect specific brain regions. While such pathway-selective optogenetic control is a popular tool in rodents, attempts at modulating behaviour using pathway-selective optogenetics have not yet been successful in primates. Here we develop a methodology for pathway-selective optogenetics in macaque monkeys, focusing on the pathway from the frontal eye field (FEF) to the superior colliculus (SC), part of the complex oculomotor network. We find that the optogenetic stimulation of FEF projections to the SC modulates SC neuron activity and is sufficient to evoke saccadic eye movements towards the response field corresponding to the stimulation site. Thus, our results demonstrate the feasibility of using pathway-selective optogenetics to elucidate neural network function in primates.

  4. Complex Dynamics of Delay-Coupled Neural Networks

    NASA Astrophysics Data System (ADS)

    Mao, Xiaochen

    2016-09-01

    This paper reveals the complicated dynamics of a delay-coupled system that consists of a pair of sub-networks and multiple bidirectional couplings. Time delays are introduced into the internal connections and network-couplings, respectively. The stability and instability of the coupled network are discussed. The sufficient conditions for the existence of oscillations are given. Case studies of numerical simulations are given to validate the analytical results. Interesting and complicated neuronal activities are observed numerically, such as rest states, periodic oscillations, multiple switches of rest states and oscillations, and the coexistence of different types of oscillations.
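
    The flavour of such delay-induced transitions can be reproduced in a few lines. The following toy sketch (not the sub-network model analyzed in the paper; all parameters are illustrative) integrates two firing-rate units coupled by delayed mutual inhibition with a forward Euler scheme, and shows that changing only the coupling delay switches the system between a rest state and sustained oscillations.

      # Toy sketch: two firing-rate units with delayed mutual inhibition, forward Euler.
      import numpy as np

      def simulate(delay=2.0, coupling=2.0, dt=0.01, t_max=200.0):
          n = int(t_max / dt)
          d = int(delay / dt)                      # delay expressed in integration steps
          x = np.full(n + 1, 0.1)                  # identical constant initial histories
          y = np.full(n + 1, 0.1)
          for t in range(n):
              xd, yd = x[max(t - d, 0)], y[max(t - d, 0)]   # delayed states
              x[t + 1] = x[t] + dt * (-x[t] - coupling * np.tanh(yd))
              y[t + 1] = y[t] + dt * (-y[t] - coupling * np.tanh(xd))
          return x

      for delay in (0.5, 2.0):
          x = simulate(delay=delay)
          print(f"delay={delay}: peak-to-peak amplitude over the last 50 s =",
                round(float(np.ptp(x[-5000:])), 3))   # ~0 means rest state, >0 means oscillation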

  5. Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.

    PubMed

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime

    2016-01-01

    It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected and flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve, which may be considered a "signature" of the associated network that depends on the characteristics of its neurons and its configuration, including the weight distribution. In the present study, we used an integrate-and-fire neuron model with intrinsic and instantaneous fluctuations of characteristics to simulate the code spectrum recorded from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate characteristics of the neurons, such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem whose solutions are not theoretically guaranteed to be unique, the estimated parameters appear consistent with those of the neurons, suggesting that the proposed neural network model adequately reflects the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for understanding communication within a neural network and, in turn, a basis of natural intelligence.
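
    As a concrete illustration of the kind of building block such simulations assume, the sketch below implements a few leaky integrate-and-fire neurons whose firing thresholds fluctuate instantaneously and whose refractory periods differ across neurons. All parameter values are illustrative assumptions, not those of the study.

      # Minimal sketch: leaky integrate-and-fire neurons with fluctuating characteristics.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, t_max = 0.1e-3, 1.0                     # 0.1 ms step, 1 s of simulated time
      n_neurons = 5
      tau_m, v_rest, v_reset = 20e-3, -70e-3, -70e-3
      v_thr_mean = -54e-3
      refrac = rng.uniform(2e-3, 6e-3, n_neurons)   # heterogeneous refractory periods (s)

      v = np.full(n_neurons, v_rest)
      last_spike = np.full(n_neurons, -np.inf)
      spike_counts = np.zeros(n_neurons, dtype=int)

      for step in range(int(t_max / dt)):
          t = step * dt
          i_input = 2.1e-9 + 0.3e-9 * rng.standard_normal(n_neurons)   # noisy drive (A)
          v_thr = v_thr_mean + 1e-3 * rng.standard_normal(n_neurons)   # fluctuating threshold
          refractory = (t - last_spike) < refrac
          dv = (-(v - v_rest) + i_input * 10e6) / tau_m * dt           # membrane R = 10 MOhm
          v = np.where(refractory, v_reset, v + dv)
          fired = (v >= v_thr) & ~refractory
          spike_counts += fired
          last_spike = np.where(fired, t, last_spike)
          v = np.where(fired, v_reset, v)

      print("spikes per neuron in 1 s:", spike_counts)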

  6. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures.

    PubMed

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

    Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. Third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
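
    The spectral signature described here can be estimated with standard tools. The sketch below is a hedged example on a synthetic fluorescence trace, not the authors' pipeline: it applies Welch's method to a population-averaged calcium signal and reports the dominant frequency. The imaging rate and burst statistics are assumptions.

      # Power-spectrum estimate of a (synthetic) calcium fluorescence trace via Welch's method.
      import numpy as np
      from scipy.signal import welch

      fs = 20.0                                    # imaging rate (frames per second), assumed
      t = np.arange(0, 600, 1 / fs)                # 10 minutes of recording
      rng = np.random.default_rng(1)

      # surrogate signal: slow network bursts (~0.1 Hz) convolved with a calcium-like decay
      bursts = (np.sin(2 * np.pi * 0.1 * t) > 0.95).astype(float)
      fluorescence = np.convolve(bursts, np.exp(-np.arange(0, 5, 1 / fs)), mode="same")
      fluorescence += 0.05 * rng.standard_normal(t.size)

      freqs, psd = welch(fluorescence, fs=fs, nperseg=1024)
      peak = freqs[np.argmax(psd[1:]) + 1]          # skip the DC bin
      print(f"dominant frequency: {peak:.3f} Hz")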

  7. Somatosensory neuron types identified by high-coverage single-cell RNA-sequencing and functional heterogeneity

    PubMed Central

    Li, Chang-Lin; Li, Kai-Cheng; Wu, Dan; Chen, Yan; Luo, Hao; Zhao, Jing-Rong; Wang, Sa-Shuang; Sun, Ming-Ming; Lu, Ying-Jin; Zhong, Yan-Qing; Hu, Xu-Ye; Hou, Rui; Zhou, Bei-Bei; Bao, Lan; Xiao, Hua-Sheng; Zhang, Xu

    2016-01-01

    Sensory neurons are distinguished by distinct signaling networks and receptive characteristics. Thus, sensory neuron types can be defined by linking transcriptome-based neuron typing with the sensory phenotypes. Here we classify somatosensory neurons of the mouse dorsal root ganglion (DRG) by high-coverage single-cell RNA-sequencing (10 950 ± 1 218 genes per neuron) and neuron size-based hierarchical clustering. Moreover, single DRG neurons responding to cutaneous stimuli are recorded using an in vivo whole-cell patch clamp technique and classified by neuron-type genetic markers. Small diameter DRG neurons are classified into one type of low-threshold mechanoreceptor and five types of mechanoheat nociceptors (MHNs). Each of the MHN types is further categorized into two subtypes. Large DRG neurons are categorized into four types, including neurexophilin 1-expressing MHNs and mechanical nociceptors (MNs) expressing BAI1-associated protein 2-like 1 (Baiap2l1). Mechanoreceptors expressing trafficking protein particle complex 3-like and Baiap2l1-marked MNs are subdivided into two subtypes each. These results provide a new system for cataloging somatosensory neurons and their transcriptome databases. PMID:26691752

  8. Exact Solutions for Rate and Synchrony in Recurrent Networks of Coincidence Detectors

    PubMed Central

    Mikula, Shawn; Niebur, Ernst

    2009-01-01

    We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations. PMID:18439133
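
    A minimal sketch of the general approach, not the paper's exact derivation, is given below: the joint state of three probabilistic coincidence detectors is treated as a discrete-time Markov chain, and mean firing rates and a zero-lag covariance are read off the stationary distribution. The firing probabilities p_lo and p_hi are illustrative assumptions.

      # Discrete-time Markov chain over the states of three probabilistic coincidence detectors.
      import itertools
      import numpy as np

      N = 3
      p_lo, p_hi = 0.05, 0.8            # firing prob. without / with a detected coincidence
      states = list(itertools.product([0, 1], repeat=N))

      def fire_prob(state, i):
          """A neuron fires if at least two of the other neurons were just active."""
          return p_hi if sum(state) - state[i] >= 2 else p_lo

      # transition matrix: neurons update conditionally independently given the current state
      T = np.zeros((len(states), len(states)))
      for a, s in enumerate(states):
          p = [fire_prob(s, i) for i in range(N)]
          for b, s_next in enumerate(states):
              T[a, b] = np.prod([p[i] if s_next[i] else 1 - p[i] for i in range(N)])

      # stationary distribution: left eigenvector of T for eigenvalue 1
      w, v = np.linalg.eig(T.T)
      pi = np.real(v[:, np.argmin(np.abs(w - 1))])
      pi /= pi.sum()

      S = np.array(states, dtype=float)
      rates = pi @ S                                   # mean firing probability per time step
      cov = pi @ (S[:, 0] * S[:, 1]) - rates[0] * rates[1]
      print("mean rates:", rates.round(4), " zero-lag covariance (neurons 0,1):", round(float(cov), 5))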

  9. Neuronal Ensemble Synchrony during Human Focal Seizures

    PubMed Central

    Ahmed, Omar J.; Harrison, Matthew T.; Eskandar, Emad N.; Cosgrove, G. Rees; Madsen, Joseph R.; Blum, Andrew S.; Potter, N. Stevenson; Hochberg, Leigh R.; Cash, Sydney S.

    2014-01-01

    Seizures are classically characterized as the expression of hypersynchronous neural activity, yet the true degree of synchrony in neuronal spiking (action potentials) during human seizures remains a fundamental question. We quantified the temporal precision of spike synchrony in ensembles of neocortical neurons during seizures in people with pharmacologically intractable epilepsy. Two seizure types were analyzed: those characterized by sustained gamma (∼40–60 Hz) local field potential (LFP) oscillations or by spike-wave complexes (SWCs; ∼3 Hz). Fine (<10 ms) temporal synchrony was rarely present during gamma-band seizures, where neuronal spiking remained highly irregular and asynchronous. In SWC seizures, phase locking of neuronal spiking to the SWC spike phase induced synchrony at a coarse 50–100 ms level. In addition, transient fine synchrony occurred primarily during the initial ∼20 ms period of the SWC spike phase and varied across subjects and seizures. Sporadic coherence events between neuronal population spike counts and LFPs were observed during SWC seizures in high (∼80 Hz) gamma-band and during high-frequency oscillations (∼130 Hz). Maximum entropy models of the joint neuronal spiking probability, constrained only on single neurons' nonstationary coarse spiking rates and local network activation, explained most of the fine synchrony in both seizure types. Our findings indicate that fine neuronal ensemble synchrony occurs mostly during SWC, not gamma-band, seizures, and primarily during the initial phase of SWC spikes. Furthermore, these fine synchrony events result mostly from transient increases in overall neuronal network spiking rates, rather than changes in precise spiking correlations between specific pairs of neurons. PMID:25057195

  10. Computational properties of networks of synchronous groups of spiking neurons.

    PubMed

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
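
    The correspondence described above can be made concrete in a few lines. In the hedged sketch below, the effective weight between two groups is taken as the product of interconnection density, presynaptic group size, and postsynaptic potential height, and a unit's activation is the fraction of the postsynaptic group recruited; all numerical values are illustrative assumptions, not the paper's parameters.

      # Effective artificial-network weights computed from neuronal-group parameters.
      import numpy as np

      def effective_weight(density, group_size, psp_height_mv):
          return density * group_size * psp_height_mv

      # two input groups projecting to one output group
      w = np.array([
          effective_weight(density=0.10, group_size=100, psp_height_mv=0.5),   # group A -> out
          effective_weight(density=0.05, group_size=100, psp_height_mv=0.5),   # group B -> out
      ])

      def output_activation(input_fractions, threshold_mv=4.0, gain=2.0):
          """Fraction of the output group recruited, as a sigmoid of the summed drive."""
          drive = w @ np.asarray(input_fractions)          # expected depolarization (mV)
          return 1.0 / (1.0 + np.exp(-gain * (drive - threshold_mv)))

      for fa, fb in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
          print(f"input fractions ({fa}, {fb}) -> output fraction {output_activation([fa, fb]):.2f}")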

  11. Network reconfiguration and neuronal plasticity in rhythm-generating networks.

    PubMed

    Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino

    2011-12-01

    Neuronal networks are highly plastic and reconfigure in a state-dependent manner. The plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only to maintain network stability but also to adapt networks, over both short and long timescales, to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.

  12. Network control principles predict neuron function in the Caenorhabditis elegans connectome

    PubMed Central

    Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László

    2017-01-01

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social and technological networks [1–3]. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode C. elegans [4–6], allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires twelve neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation [7–13], as well as one previously uncharacterised neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed, with single-cell ablations of DD04 or DD05, but not DD02 or DD03, specifically affecting posterior body movements. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterised connectomes. PMID:29045391

  13. Network control principles predict neuron function in the Caenorhabditis elegans connectome

    NASA Astrophysics Data System (ADS)

    Yan, Gang; Vértes, Petra E.; Towlson, Emma K.; Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László

    2017-10-01

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.

  14. Network control principles predict neuron function in the Caenorhabditis elegans connectome.

    PubMed

    Yan, Gang; Vértes, Petra E; Towlson, Emma K; Chew, Yee Lian; Walker, Denise S; Schafer, William R; Barabási, Albert-László

    2017-10-26

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.
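
    The structural-controllability calculation that this line of work builds on (the maximum-matching result popularized by Liu, Slotine and Barabási) can be sketched compactly. The example below applies it to a toy directed graph rather than the C. elegans connectome, and assumes the networkx library is available; it is an illustration of the principle, not the authors' full framework.

      # Driver nodes of a toy directed network via maximum matching on its bipartite representation.
      import networkx as nx

      # toy directed wiring diagram: edges point from pre- to post-synaptic node
      edges = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("D", "B")]
      nodes = sorted({n for e in edges for n in e})

      # bipartite representation: an edge u->v links the "out copy" of u to the "in copy" of v
      B = nx.Graph()
      B.add_nodes_from([f"{n}+" for n in nodes], bipartite=0)
      B.add_nodes_from([f"{n}-" for n in nodes], bipartite=1)
      B.add_edges_from([(f"{u}+", f"{v}-") for u, v in edges])

      matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=[f"{n}+" for n in nodes])
      matched_targets = {v for v in matching.values() if v.endswith("-")}

      # unmatched "in copies" are the driver nodes; at least one driver is always needed
      drivers = [n for n in nodes if f"{n}-" not in matched_targets] or [nodes[0]]
      print("minimum driver node set (one valid choice):", drivers)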

  15. Intrinsic Cellular Properties and Connectivity Density Determine Variable Clustering Patterns in Randomly Connected Inhibitory Neural Networks

    PubMed Central

    Rich, Scott; Booth, Victoria; Zochowski, Michal

    2016-01-01

    The plethora of inhibitory interneurons in the hippocampus and cortex plays a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations given at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances and IF curves that exhibit frequencies arbitrarily close to zero at firing threshold, while Type II PRCs display both phase advances and delays and IF curves that have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies. Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics compared to those in networks of Type I or Type II neurons. To understand these results, we compute neuronal PRCs calculated with a perturbation matching the profile of the synaptic current in our networks. Differences in profiles of these PRCs across the different neuron types reveal mechanisms underlying the divergent network dynamics. PMID:27812323
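
    The phase response curve used here to classify intrinsic dynamics can be measured numerically exactly as defined: perturb a tonically firing model neuron at different phases of its cycle and record the shift of the next spike. The sketch below does this for a leaky integrate-and-fire neuron with illustrative parameters; it is a generic recipe, not the specific models simulated in the study.

      # Numerical PRC measurement: spike-time shift caused by a brief pulse at different phases.
      import numpy as np

      dt, tau, v_thr, v_reset, drive = 0.01, 10.0, 1.0, 0.0, 1.2   # ms and dimensionless voltage

      def spike_times(perturb_time=None, kick=0.05, t_max=200.0):
          v, t, spikes = 0.0, 0.0, []
          while t < t_max:
              v += dt * (drive - v) / tau
              if perturb_time is not None and abs(t - perturb_time) < dt / 2:
                  v += kick                                  # brief depolarizing pulse
              if v >= v_thr:
                  spikes.append(t)
                  v = v_reset
              t += dt
          return spikes

      base = spike_times()
      period = base[2] - base[1]                              # unperturbed firing period

      for frac in np.linspace(0.1, 0.9, 5):
          t_pert = base[1] + frac * period                    # phase within the second cycle
          shifted = spike_times(perturb_time=t_pert)
          advance = base[2] - shifted[2]                      # >0 means the next spike is advanced
          print(f"phase {frac:.1f}: spike advance = {advance:.3f} ms")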

  16. Unveiling causal activity of complex networks

    NASA Astrophysics Data System (ADS)

    Williams-García, Rashid V.; Beggs, John M.; Ortiz, Gerardo

    2017-07-01

    We introduce a novel tool for analyzing complex network dynamics, allowing for cascades of causally-related events, which we call causal webs (c-webs), to be separated from other non-causally-related events. This tool shows that traditionally-conceived avalanches may contain mixtures of spatially-distinct but temporally-overlapping cascades of events, and dynamical disorder or noise. In contrast, c-webs separate these components, unveiling previously hidden features of the network and dynamics. We apply our method to mouse cortical data with resulting statistics which demonstrate for the first time that neuronal avalanches are not merely composed of causally-related events. The original version of this article was uploaded to the arXiv on March 17th, 2016 [1].

  17. Aberrant within- and between-network connectivity of the mirror neuron system network and the mentalizing network in first episode psychosis.

    PubMed

    Choe, Eugenie; Lee, Tae Young; Kim, Minah; Hur, Ji-Won; Yoon, Youngwoo Bryan; Cho, Kang-Ik K; Kwon, Jun Soo

    2018-03-26

    It has been suggested that the mentalizing network and the mirror neuron system network support important social cognitive processes that are impaired in schizophrenia. However, the integrity and interaction of these two networks have not been sufficiently studied, and their effects on social cognition in schizophrenia remain unclear. Our study included 26 first-episode psychosis (FEP) patients and 26 healthy controls. We utilized resting-state functional connectivity to examine the a priori-defined mirror neuron system network and the mentalizing network and to assess the within- and between-network connectivities of the networks in FEP patients. We also assessed the correlation between resting-state functional connectivity measures and theory of mind performance. FEP patients showed altered within-network connectivity of the mirror neuron system network, and aberrant between-network connectivity between the mirror neuron system network and the mentalizing network. The within-network connectivity of the mirror neuron system network was noticeably correlated with theory of mind task performance in FEP patients. The integrity and interaction of the mirror neuron system network and the mentalizing network may be altered during the early stages of psychosis. Additionally, this study suggests that alterations in the integrity of the mirror neuron system network are highly related to deficient theory of mind in schizophrenia, and that this deficit may be present from the early stages of psychosis. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Spontaneous neuronal activity as a self-organized critical phenomenon

    NASA Astrophysics Data System (ADS)

    de Arcangelis, L.; Herrmann, H. J.

    2013-01-01

    Neuronal avalanches are a novel mode of activity in neuronal networks, experimentally found in vitro and in vivo, and exhibit a robust critical behaviour. Avalanche activity can be modelled within the self-organized criticality framework, including threshold firing, refractory period and activity-dependent synaptic plasticity. The size and duration distributions confirm that the system acts in a critical state, whose scaling behaviour is very robust. Next, we discuss the temporal organization of neuronal avalanches. This is given by the alternation between states of high and low activity, named up and down states, leading to a balance between excitation and inhibition controlled by a single parameter. During these periods both the single neuron state and the network excitability level, keeping memory of past activity, are tuned by homeostatic mechanisms. Finally, we examine whether a system with no characteristic response can ever learn in a controlled and reproducible way. Learning in the model occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. Learning is a truly collective process and the learning dynamics exhibits universal features. Even complex rules can be learned provided that the plastic adaptation is sufficiently slow.
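
    A stripped-down relative of such models, omitting plasticity, homeostasis and up/down states, is sketched below: threshold-like activity spreads over a random network with a branching ratio near one, and the resulting avalanche sizes are collected. The network size and connection statistics are illustrative assumptions, not those of the review.

      # Toy avalanche model: near-critical activity spreading on a random network.
      import numpy as np

      rng = np.random.default_rng(2)
      n_neurons, k_out, p_transmit = 2000, 10, 0.1      # branching ratio = k_out * p_transmit = 1
      targets = rng.integers(0, n_neurons, size=(n_neurons, k_out))   # random out-connections

      def avalanche_size():
          active = {int(rng.integers(n_neurons))}       # a single seed spike
          fired = set(active)                           # refractory-like: each neuron fires once
          size = 1
          while active:
              nxt = set()
              for i in active:
                  for j in targets[i]:
                      if j not in fired and rng.random() < p_transmit:
                          nxt.add(int(j))
                          fired.add(int(j))
              size += len(nxt)
              active = nxt
          return size

      sizes = np.array([avalanche_size() for _ in range(5000)])
      hist, edges = np.histogram(sizes, bins=np.logspace(0, 3, 12))
      print("avalanche size distribution (log-spaced bins):")
      for lo, hi, c in zip(edges[:-1], edges[1:], hist):
          print(f"  {lo:7.1f}-{hi:7.1f}: {c}")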

  19. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes in terms of a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns in the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  20. Integrated workflows for spiking neuronal network simulations

    PubMed Central

    Antolík, Ján; Davison, Andrew P.

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages. PMID:24368902

  1. Integrated workflows for spiking neuronal network simulations.

    PubMed

    Antolík, Ján; Davison, Andrew P

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages.
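
    Mozaik layers workflow management on top of PyNN, so the simulator-facing code it drives resembles the minimal PyNN-only sketch below (assuming the NEST backend is installed). This is not a Mozaik configuration, only an indication of the level of abstraction the underlying tools provide.

      # Minimal PyNN sketch: a small excitatory/inhibitory network driven by Poisson noise.
      import pyNN.nest as sim   # assumes the NEST backend for PyNN is installed

      sim.setup(timestep=0.1)

      exc = sim.Population(80, sim.IF_cond_exp(), label="excitatory")
      inh = sim.Population(20, sim.IF_cond_exp(), label="inhibitory")
      noise = sim.Population(80, sim.SpikeSourcePoisson(rate=20.0), label="noise")

      # external drive plus recurrent excitation and inhibition (conductance weights in uS)
      sim.Projection(noise, exc, sim.OneToOneConnector(),
                     sim.StaticSynapse(weight=0.01, delay=1.0), receptor_type="excitatory")
      sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
                     sim.StaticSynapse(weight=0.005, delay=1.0), receptor_type="excitatory")
      sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
                     sim.StaticSynapse(weight=0.05, delay=1.0), receptor_type="inhibitory")

      exc.record("spikes")
      sim.run(1000.0)                                 # 1 s of simulated time

      spiketrains = exc.get_data().segments[0].spiketrains
      print("mean excitatory rate:",
            sum(len(st) for st in spiketrains) / len(spiketrains), "spikes/s")
      sim.end()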

  2. Relationship between inter-stimulus-intervals and intervals of autonomous activities in a neuronal network.

    PubMed

    Ito, Hidekatsu; Minoshima, Wataru; Kudoh, Suguru N

    2015-08-01

    To investigate relationships between neuronal network activity and electrical stimuli, we analyzed autonomous activity before and after stimulation. Recordings of autonomous activity were performed using dissociated cultures of rat hippocampal neurons on a multi-electrode array (MEA) dish. Single and paired stimuli were applied to a cultured neuronal network: a single stimulus was applied every 1 min, and paired stimulation consisted of two sequential stimuli delivered every 1 min. As a result, the patterns of synchronized activity of the neuronal network changed after stimulation. In particular, long-range synchronous activities were induced by paired stimuli: when paired stimuli with inter-stimulus intervals (ISIs) of 1 s and 1.5 s were applied to a neuronal network, relatively long-range synchronous activities appeared in the 1.5 s ISI case. The temporal pattern of synchronous activity thus changes according to the ISI of the electrical stimulus. In other words, a dissociated neuronal network can maintain given information in its temporal activity pattern, suggesting that a form of information-maintenance mechanism is implemented in such semi-artificial dissociated neuronal networks. This result is a step toward technology for manipulating neuronal activity in brain systems.

  3. LHX2 Interacts with the NuRD Complex and Regulates Cortical Neuron Subtype Determinants Fezf2 and Sox11.

    PubMed

    Muralidharan, Bhavana; Khatri, Zeba; Maheshwari, Upasana; Gupta, Ritika; Roy, Basabdatta; Pradhan, Saurabh J; Karmodiya, Krishanpal; Padmanabhan, Hari; Shetty, Ashwin S; Balaji, Chinthapalli; Kolthur-Seetharam, Ullas; Macklis, Jeffrey D; Galande, Sanjeev; Tole, Shubha

    2017-01-04

    In the developing cerebral cortex, sequential transcriptional programs take neuroepithelial cells from proliferating progenitors to differentiated neurons with unique molecular identities. The regulatory changes that occur in the chromatin of the progenitors are not well understood. During deep layer neurogenesis, we show that transcription factor LHX2 binds to distal regulatory elements of Fezf2 and Sox11, critical determinants of neuron subtype identity in the mouse neocortex. We demonstrate that LHX2 binds to the nucleosome remodeling and histone deacetylase (NuRD) complex subunits LSD1, HDAC2, and RBBP4, which are proximal regulators of the epigenetic state of chromatin. When LHX2 is absent, active histone marks at the Fezf2 and Sox11 loci are increased. Loss of LHX2 produces an increase, and overexpression of LHX2 causes a decrease, in layer 5 Fezf2 and CTIP2-expressing neurons. Our results provide mechanistic insight into how LHX2 acts as a necessary and sufficient regulator of genes that control cortical neuronal subtype identity. The functional complexity of the cerebral cortex arises from an array of distinct neuronal subtypes with unique connectivity patterns that are produced from common progenitors. This study reveals that transcription factor LHX2 regulates the numbers of specific cortical output neuron subtypes by controlling the genes that are required to produce them. Loss or increase in LHX2 during neurogenesis is sufficient to increase or decrease, respectively, a particular subcerebrally projecting population. Mechanistically, LHX2 interacts with chromatin modifying protein complexes to edit the chromatin landscape of its targets Fezf2 and Sox11, which regulates their expression and consequently the identities of the neurons produced. Thus, LHX2 is a key component of the control network for producing neurons that will participate in cortical circuitry. Copyright © 2017 Muralidharan et al.

  4. Predicting protein complex geometries with a neural network.

    PubMed

    Chae, Myong-Ho; Krull, Florian; Lorenzen, Stephan; Knapp, Ernst-Walter

    2010-03-01

    A major challenge of the protein docking problem is to define scoring functions that can distinguish near-native protein complex geometries from a large number of non-native geometries (decoys) generated with noncomplexed protein structures (unbound docking). In this study, we have constructed a neural network that employs the information from atom-pair distance distributions of a large number of decoys to predict protein complex geometries. We found that docking prediction can be significantly improved using two different types of polar hydrogen atoms. To train the neural network, 2000 near-native decoys of even distance distribution were used for each of the 185 considered protein complexes. The neural network normalizes the information from different protein complexes using an additional protein complex identity input neuron for each complex. The parameters of the neural network were determined such that they mimic a scoring funnel in the neighborhood of the native complex structure. The neural network approach avoids the reference state problem, which occurs in deriving knowledge-based energy functions for scoring. We show that a distance-dependent atom pair potential performs much better than a simple atom-pair contact potential. We have compared the performance of our scoring function with other empirical and knowledge-based scoring functions such as ZDOCK 3.0, ZRANK, ITScore-PP, EMPIRE, and RosettaDock. In spite of the simplicity of the method and its functional form, our neural network-based scoring function achieves a reasonable performance in rigid-body unbound docking of proteins. Proteins 2010. (c) 2009 Wiley-Liss, Inc.
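
    The idea of scoring decoys from atom-pair distance distributions can be caricatured in a short script. The sketch below trains a small feed-forward network on synthetic distance histograms in which near-native examples have slightly more short-range contacts; it illustrates the input representation and training loop only, not the published network, its features, or its data.

      # Toy scorer: a one-hidden-layer network trained on synthetic atom-pair distance histograms.
      import numpy as np

      rng = np.random.default_rng(3)
      n_bins = 20                                     # distance-histogram bins per decoy

      def synthetic_histogram(near_native):
          """Near-native decoys get slightly more short-range atom-pair contacts."""
          base = rng.poisson(5.0, n_bins).astype(float)
          if near_native:
              base[:5] += rng.poisson(3.0, 5)
          return base / base.sum()

      X = np.array([synthetic_histogram(i % 2 == 0) for i in range(400)])
      y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

      # one hidden layer, logistic output, plain gradient descent on cross-entropy
      W1 = 0.1 * rng.standard_normal((n_bins, 8))
      b1 = np.zeros(8)
      W2 = 0.1 * rng.standard_normal(8)
      b2 = 0.0
      lr = 0.5

      for epoch in range(200):
          h = np.tanh(X @ W1 + b1)
          p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
          err = p - y                                  # gradient of cross-entropy w.r.t. the logit
          gW2 = h.T @ err / len(y)
          gb2 = err.mean()
          gh = np.outer(err, W2) * (1 - h ** 2)        # back-propagate through tanh
          gW1 = X.T @ gh / len(y)
          gb1 = gh.mean(axis=0)
          W1 -= lr * gW1
          b1 -= lr * gb1
          W2 -= lr * gW2
          b2 -= lr * gb2

      print("training accuracy:", float(((p > 0.5) == (y == 1)).mean()))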

  5. Imaging Neural Activity Using Thy1-GCaMP Transgenic mice

    PubMed Central

    Chen, Qian; Cichon, Joseph; Wang, Wenting; Qiu, Li; Lee, Seok-Jin R.; Campbell, Nolan R.; DeStefino, Nicholas; Goard, Michael J.; Fu, Zhanyan; Yasuda, Ryohei; Looger, Loren L.; Arenkiel, Benjamin R.; Gan, Wen-Biao; Feng, Guoping

    2014-01-01

    The ability to chronically monitor neuronal activity in the living brain is essential for understanding the organization and function of the nervous system. The genetically encoded green fluorescent protein based calcium sensor GCaMP provides a powerful tool for detecting calcium transients in neuronal somata, processes, and synapses that are triggered by neuronal activities. Here we report the generation and characterization of transgenic mice that express improved GCaMPs in various neuronal subpopulations under the control of the Thy1 promoter. In vitro and in vivo studies show that calcium transients induced by spontaneous and stimulus-evoked neuronal activities can be readily detected at the level of individual cells and synapses in acute brain slices, as well as chronically in awake behaving animals. These GCaMP transgenic mice allow investigation of activity patterns in defined neuronal populations in the living brain, and will greatly facilitate dissecting complex structural and functional relationships of neural networks. PMID:23083733

  6. Central auditory neurons have composite receptive fields.

    PubMed

    Kozlov, Andrei S; Gentner, Timothy Q

    2016-02-02

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.

  7. Molecular dynamics in an optical trap of glutamate receptors labeled with quantum-dots on living neurons

    NASA Astrophysics Data System (ADS)

    Kishimoto, Tatsunori; Maezawa, Yasuyo; Kudoh, Suguru N.; Taguchi, Takahisa; Hosokawa, Chie

    2017-04-01

    The molecular dynamics of glutamate receptors, the major neurotransmitter receptors at excitatory synapses, are essential for synaptic plasticity in complex neuronal networks. Here we studied the dynamics of optically trapped AMPA-type glutamate receptors (AMPARs) labeled with quantum dots (QDs) on living neurons using fluorescence imaging and fluorescence correlation spectroscopy (FCS). When a 1064-nm trapping laser was focused on QD-labeled AMPARs on neuronal cells, the fluorescence intensity of QD-AMPARs gradually increased at the focal spot. Single-particle tracking of QD-AMPARs on neurons showed that the average diffusion coefficient decreased in the optical trap. Moreover, the decay time obtained from FCS analysis increased with laser power, and the initial assembly state of AMPARs depended on the day in culture, suggesting that the motion of QD-AMPARs was constrained by the optical trap.

  8. Contrast normalization contributes to a biologically-plausible model of receptive-field development in primary visual cortex (V1)

    PubMed Central

    Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.

    2012-01-01

    Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
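
    A hedged sketch of a BCM-style update combined with divisive contrast normalization of the input is given below. It runs on synthetic patches containing one recurring feature rather than natural images, and all constants are illustrative; typically the weight vector develops a much larger response along the repeated feature than along random directions.

      # BCM learning rule with divisive contrast normalization of the input (toy version).
      import numpy as np

      rng = np.random.default_rng(4)
      n_inputs, lr_w, tau_theta = 64, 1e-3, 100.0

      feature = rng.standard_normal(n_inputs)             # a recurring "feature" in the input
      w = 0.1 * rng.standard_normal(n_inputs)
      theta = 1.0                                         # sliding modification threshold

      for step in range(20000):
          x = 0.3 * rng.standard_normal(n_inputs)         # stand-in for an image patch
          if rng.random() < 0.5:
              x = x + feature                             # the feature appears on half the trials
          x = x / (np.linalg.norm(x) + 1e-9)              # divisive contrast normalization
          y = float(w @ x)
          w += lr_w * y * (y - theta) * x                 # BCM: sign of change set by y vs. theta
          theta += (y ** 2 - theta) / tau_theta           # threshold tracks a running mean of y^2

      f_unit = feature / np.linalg.norm(feature)
      print("|response| to the embedded feature:", round(abs(float(w @ f_unit)), 3))
      print("typical |response| to a random direction:",
            round(abs(float(w @ rng.standard_normal(n_inputs))) / np.sqrt(n_inputs), 3))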

  9. The Role of Astrocytes in the Generation, Migration, and Integration of New Neurons in the Adult Olfactory Bulb

    PubMed Central

    Gengatharan, Archana; Bammann, Rodrigo R.; Saghatelyan, Armen

    2016-01-01

    In mammals, new neurons in the adult olfactory bulb originate from a pool of neural stem cells in the subventricular zone of the lateral ventricles. Adult-born cells play an important role in odor information processing by adjusting the neuronal network to changing environmental conditions. Olfactory bulb neurogenesis is supported by several non-neuronal cells. In this review, we focus on the role of astroglial cells in the generation, migration, integration, and survival of new neurons in the adult forebrain. In the subventricular zone, neural stem cells with astrocytic properties display regional and temporal specificity when generating different neuronal subtypes. Non-neurogenic astrocytes contribute to the establishment and maintenance of the neurogenic niche. Neuroblast chains migrate through the rostral migratory stream ensheathed by astrocytic processes. Astrocytes play an important regulatory role in neuroblast migration and also assist in the development of a vasculature scaffold in the migratory stream that is essential for neuroblast migration in the postnatal brain. In the olfactory bulb, astrocytes help to modulate the network through a complex release of cytokines, regulate blood flow, and provide metabolic support, which may promote the integration and survival of new neurons. Astrocytes thus play a pivotal role in various processes of adult olfactory bulb neurogenesis, and it is likely that many other functions of these glial cells will emerge in the near future. PMID:27092050

  10. Human embryonic stem cell-derived neurons adopt and regulate the activity of an established neural network

    PubMed Central

    Weick, Jason P.; Liu, Yan; Zhang, Su-Chun

    2011-01-01

    Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298

  11. A neural network technique for remeshing of bone microstructure.

    PubMed

    Fischer, Anath; Holdstein, Yaron

    2012-01-01

    Today, there is major interest within the biomedical community in developing accurate noninvasive means for the evaluation of bone microstructure and bone quality. Recent improvements in 3D imaging technology, among them development of micro-CT and micro-MRI scanners, allow in-vivo 3D high-resolution scanning and reconstruction of large specimens or even whole bone models. Thus, the tendency today is to evaluate bone features using 3D assessment techniques rather than traditional 2D methods. For this purpose, high-quality meshing methods are required. However, the 3D meshes produced from current commercial systems usually are of low quality with respect to analysis and rapid prototyping. 3D model reconstruction of bone is difficult due to the complexity of bone microstructure. The small bone features lead to a great deal of neighborhood ambiguity near each vertex. The relatively new neural network method for mesh reconstruction has the potential to create or remesh 3D models accurately and quickly. A neural network (NN), which resembles an artificial intelligence (AI) algorithm, is a set of interconnected neurons, where each neuron is capable of making an autonomous arithmetic calculation. Moreover, each neuron is affected by its surrounding neurons through the structure of the network. This paper proposes an extension of the growing neural gas (GNG) neural network technique for remeshing a triangular manifold mesh that represents bone microstructure. This method has the advantage of reconstructing the surface of a genus-n freeform object without a priori knowledge regarding the original object, its topology, or its shape.
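
    The core growing-neural-gas update referred to above can be sketched compactly: for each sample, the nearest node and its topological neighbours are pulled toward the sample, and the edge between the two nearest nodes is created or refreshed while other edges incident to the winner age. The sketch below runs on points sampled from a unit sphere as a stand-in surface and deliberately omits node insertion, error accumulation, and the surface-specific extensions of the paper; parameter values are the usual illustrative defaults.

      # Heavily simplified GNG-style update: winner adaptation plus edge creation and aging.
      import numpy as np

      rng = np.random.default_rng(5)
      points = rng.standard_normal((5000, 3))
      points /= np.linalg.norm(points, axis=1, keepdims=True)   # stand-in "surface": unit sphere

      n_nodes, eps_b, eps_n, max_age = 50, 0.05, 0.006, 50
      nodes = points[rng.choice(len(points), n_nodes, replace=False)].copy()
      edges = {}                                                  # (i, j) with i < j -> age

      for x in points:
          d = np.linalg.norm(nodes - x, axis=1)
          s1, s2 = np.argsort(d)[:2]                              # nearest and second-nearest node
          nodes[s1] += eps_b * (x - nodes[s1])                    # adapt the winner
          for (i, j) in list(edges):
              if s1 in (i, j):
                  other = j if i == s1 else i
                  nodes[other] += eps_n * (x - nodes[other])      # adapt topological neighbours
                  edges[(i, j)] += 1                              # age edges incident to the winner
          key = (min(s1, s2), max(s1, s2))
          edges[key] = 0                                          # connect/refresh the winner pair
          edges = {e: a for e, a in edges.items() if a <= max_age}

      print("nodes:", len(nodes), " edges in learned mesh graph:", len(edges))
      print("mean node distance from origin (surface radius is 1):",
            round(float(np.linalg.norm(nodes, axis=1).mean()), 3))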

  12. The advantage of flexible neuronal tunings in neural network models for motor learning

    PubMed Central

    Marongelli, Ellisha N.; Thoroughman, Kurt A.

    2013-01-01

    Human motor adaptation to novel environments is often modeled by a basis function network that transforms desired movement properties into estimated forces. This network employs a layer of nodes that have fixed broad tunings that generalize across the input domain. Learning is achieved by updating the weights of these nodes in response to training experience. This conventional model is unable to account for rapid flexibility observed in human spatial generalization during motor adaptation. However, added plasticity in the widths of the basis function tunings can achieve this flexibility, and several neurophysiological experiments have revealed flexibility in tunings of sensorimotor neurons. We found a model, Locally Weighted Projection Regression (LWPR), which uniquely possesses the structure of a basis function network in which both the weights and tuning widths of the nodes are updated incrementally during adaptation. We presented this LWPR model with training functions of different spatial complexities and monitored incremental updates to receptive field widths. An inverse pattern of dependence of receptive field adaptation on experienced error became evident, underlying both a relationship between generalization and complexity, and a unique behavior in which generalization always narrows after a sudden switch in environmental complexity. These results implicate a model that is flexible in both basis function widths and weights, like LWPR, as a viable alternative model for human motor adaptation that can account for previously observed plasticity in spatial generalization. This theory can be tested by using the behaviors observed in our experiments as novel hypotheses in human studies. PMID:23888141
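
    The key ingredient highlighted here, incremental adaptation of both the weights and the widths of basis functions, can be illustrated with plain stochastic gradient descent on a one-dimensional regression problem. The sketch below is an illustration of that idea only; it is not the LWPR algorithm, which uses locally weighted partial least squares and a penalized width update.

      # Gaussian basis-function regression with incremental updates to weights AND widths.
      import numpy as np

      rng = np.random.default_rng(6)
      centers = np.linspace(-1, 1, 9)
      weights = np.zeros_like(centers)
      log_widths = np.full_like(centers, np.log(0.3))     # learn widths in log space (stay positive)
      lr_w, lr_s = 0.2, 0.02

      def predict(x):
          widths = np.exp(log_widths)
          phi = np.exp(-0.5 * ((x - centers) / widths) ** 2)
          return phi @ weights, phi, widths

      def target(x):
          return np.sin(3 * x)                            # training function; complexity could be varied

      for step in range(20000):
          x = rng.uniform(-1, 1)
          y_hat, phi, widths = predict(x)
          err = target(x) - y_hat
          weights += lr_w * err * phi                     # LMS-style weight update
          # width update: gradient of the squared error w.r.t. each basis log-width
          dphi_dlogw = phi * ((x - centers) ** 2) / widths ** 2
          log_widths += lr_s * err * weights * dphi_dlogw

      test = np.linspace(-1, 1, 5)
      print("final widths:", np.exp(log_widths).round(3))
      print("prediction error at test points:",
            np.round([target(x) - predict(x)[0] for x in test], 3))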

  13. Optogenetic stimulation of multiwell MEA plates for neural and cardiac applications

    NASA Astrophysics Data System (ADS)

    Clements, Isaac P.; Millard, Daniel C.; Nicolini, Anthony M.; Preyer, Amanda J.; Grier, Robert; Heckerling, Andrew; Blum, Richard A.; Tyler, Phillip; McSweeney, K. M.; Lu, Yi-Fan; Hall, Diana; Ross, James D.

    2016-03-01

    Microelectrode array (MEA) technology enables advanced drug screening and "disease-in-a-dish" modeling by measuring the electrical activity of cultured networks of neural or cardiac cells. Recent developments in human stem cell technologies, advancements in genetic models, and regulatory initiatives for drug screening have increased the demand for MEA-based assays. In response, Axion Biosystems previously developed a multiwell MEA platform, providing up to 96 MEA culture wells arrayed into a standard microplate format. Multiwell MEA-based assays would be further enhanced by optogenetic stimulation, which enables selective excitation and inhibition of targeted cell types. This capability for selective control over cell culture states would allow finer pacing and probing of cell networks for more reliable and complete characterization of complex network dynamics. Here we describe a system for independent optogenetic stimulation of each well of a 48-well MEA plate. The system enables finely graded control of light delivery during simultaneous recording of network activity in each well. Using human induced pluripotent stem cell (hiPSC) derived cardiomyocytes and rodent primary neuronal cultures, we demonstrate high channel-count light-based excitation and suppression in several proof-of-concept experimental models. Our findings demonstrate advantages of combining multiwell optical stimulation and MEA recording for applications including cardiac safety screening, neural toxicity assessment, and advanced characterization of complex neuronal diseases.

  14. Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex

    PubMed Central

    Procyk, Emmanuel; Dominey, Peter Ford

    2016-01-01

    Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a pertinent framework to model local cortical dynamics and their contribution to higher cognitive function. PMID:27286251
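
    A minimal echo state network captures the scheme described above: a fixed random recurrent reservoir mixes present and past inputs, and only a linear readout is trained. The toy task below (reporting the input from three steps earlier) merely illustrates the context-holding property; it is not the monkey task modelled in the paper, and all sizes are illustrative.

      # Minimal echo state network: fixed random reservoir, trained linear readout.
      import numpy as np

      rng = np.random.default_rng(7)
      n_res, n_steps, washout = 200, 3000, 100

      W = rng.standard_normal((n_res, n_res)) / np.sqrt(n_res)
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # scale spectral radius below 1
      W_in = rng.uniform(-1, 1, n_res)

      u = rng.uniform(-1, 1, n_steps)                       # scalar input stream
      states = np.zeros((n_steps, n_res))
      x = np.zeros(n_res)
      for t in range(n_steps):
          x = np.tanh(W @ x + W_in * u[t])                  # reservoir update
          states[t] = x

      target = np.roll(u, 3)                                # memory task: report u(t-3)
      X, y = states[washout:], target[washout:]
      W_out, *_ = np.linalg.lstsq(X, y, rcond=None)         # train the linear readout only

      pred = X @ W_out
      print("readout correlation with the delayed input:",
            round(float(np.corrcoef(pred, y)[0, 1]), 3))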

  15. Plasticity in single neuron and circuit computations

    NASA Astrophysics Data System (ADS)

    Destexhe, Alain; Marder, Eve

    2004-10-01

    Plasticity in neural circuits can result from alterations in synaptic strength or connectivity, as well as from changes in the excitability of the neurons themselves. To better understand the role of plasticity in the brain, we need to establish how brain circuits work and the kinds of computations that different circuit structures achieve. By linking theoretical and experimental studies, we are beginning to reveal the consequences of plasticity mechanisms for network dynamics, in both simple invertebrate circuits and the complex circuits of mammalian cerebral cortex.

  16. Polarized skylight navigation in insects: model and electrophysiology of e-vector coding by neurons in the central complex.

    PubMed

    Sakura, Midori; Lambrinos, Dimitrios; Labhart, Thomas

    2008-02-01

    Many insects exploit skylight polarization for visual compass orientation or course control. As found in crickets, the peripheral visual system (optic lobe) contains three types of polarization-sensitive neurons (POL neurons), which are tuned to e-vector orientations diverging by approximately 60 degrees. Thus each e-vector orientation elicits a specific combination of activities among the POL neurons, so that any e-vector orientation is coded by just three neural signals. In this study, we hypothesize that in the presumed orientation center of the brain (central complex) e-vector orientation is population-coded by a set of "compass neurons." Using computer modeling, we present a neural network model transforming the signal triplet provided by the POL neurons to compass neuron activities coding e-vector orientation by a population code. Using intracellular electrophysiology and cell marking, we present evidence that neurons with the response profile of the presumed compass neurons do indeed exist in the insect brain: each of these compass neuron-like (CNL) cells is activated by a specific e-vector orientation only and otherwise remains silent. Morphologically, CNL cells are tangential neurons extending from the lateral accessory lobe to the lower division of the central body. Surpassing the modeled compass neurons in performance, CNL cells are insensitive to the degree of polarization of the stimulus from 99% down to at least 18% and thus largely disregard variations of skylight polarization due to changing solar elevations or atmospheric conditions. This suggests that the polarization vision system includes a gain control circuit keeping the output activity at a constant level.
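
    The three-channel code described here can be written down directly: three POL neurons tuned roughly 60 degrees apart respond sinusoidally with 180-degree periodicity (e-vectors are axial), and a population-vector readout in doubled-angle space recovers the stimulus orientation. The sketch below assumes idealized cosine tuning rather than the recorded tuning curves, and the noise level is illustrative.

      # Encoding an e-vector orientation in three POL-neuron signals and decoding it again.
      import numpy as np

      prefs = np.deg2rad([0.0, 60.0, 120.0])                 # preferred e-vector orientations

      def pol_responses(e_vector_deg, noise=0.0, rng=None):
          phi = np.deg2rad(e_vector_deg)
          r = np.cos(2 * (phi - prefs))                      # 180-degree periodic tuning
          if noise and rng is not None:
              r = r + noise * rng.standard_normal(3)
          return r

      def decode(responses):
          # population vector in doubled-angle space handles the axial (180-degree) periodicity
          vec = np.sum(responses * np.exp(2j * prefs))
          return np.rad2deg(np.angle(vec) / 2) % 180

      rng = np.random.default_rng(8)
      for true_deg in (10, 45, 90, 135, 170):
          est = decode(pol_responses(true_deg, noise=0.1, rng=rng))
          print(f"true e-vector {true_deg:3d} deg -> decoded {est:6.1f} deg")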

  17. Non-linear blend coding in the moth antennal lobe emerges from random glomerular networks

    PubMed Central

    Capurro, Alberto; Baroni, Fabiano; Olsson, Shannon B.; Kuebler, Linda S.; Karout, Salah; Hansson, Bill S.; Pearce, Timothy C.

    2012-01-01

    Neural responses to odor blends often exhibit non-linear interactions to blend components. The first olfactory processing center in insects, the antennal lobe (AL), exhibits a complex network connectivity. We attempt to determine if non-linear blend interactions can arise purely as a function of the AL network connectivity itself, without necessitating additional factors such as competitive ligand binding at the periphery or intrinsic cellular properties. To assess this, we compared blend interactions among responses from single neurons recorded intracellularly in the AL of the moth Manduca sexta with those generated using a population-based computational model constructed from the morphologically based connectivity pattern of projection neurons (PNs) and local interneurons (LNs) with randomized connection probabilities from which we excluded detailed intrinsic neuronal properties. The model accurately predicted most of the proportions of blend interaction types observed in the physiological data. Our simulations also indicate that input from LNs is important in establishing both the type of blend interaction and the nature of the neuronal response (excitation or inhibition) exhibited by AL neurons. For LNs, the only input that significantly impacted the blend interaction type was received from other LNs, while for PNs the input from olfactory sensory neurons and other PNs contributed agonistically with the LN input to shape the AL output. Our results demonstrate that non-linear blend interactions can be a natural consequence of AL connectivity, and highlight the importance of lateral inhibition as a key feature of blend coding to be addressed in future experimental and computational studies. PMID:22529799

  18. The Role of Rab Proteins in Neuronal Cells and in the Trafficking of Neurotrophin Receptors

    PubMed Central

    Bucci, Cecilia; Alifano, Pietro; Cogli, Laura

    2014-01-01

    Neurotrophins are a family of proteins that are important for neuronal development, neuronal survival and neuronal functions. Neurotrophins exert their role by binding to their receptors, the Trk family of receptor tyrosine kinases (TrkA, TrkB, and TrkC) and p75NTR, a member of the tumor necrosis factor (TNF) receptor superfamily. Binding of neurotrophins to receptors triggers a complex series of signal transduction events, which are able to induce neuronal differentiation but are also responsible for neuronal maintenance and neuronal functions. Rab proteins are small GTPases localized to the cytosolic surface of specific intracellular compartments and are involved in controlling vesicular transport. Rab proteins, acting as master regulators of the membrane trafficking network, play a central role in both trafficking and signaling pathways of neurotrophin receptors. Axonal transport represents the Achilles' heel of neurons, due to the long-range distance that molecules, organelles and, in particular, neurotrophin-receptor complexes have to cover. Indeed, alterations of axonal transport and, specifically, of axonal trafficking of neurotrophin receptors are responsible for several human neurodegenerative diseases, such as Huntington’s disease, Alzheimer’s disease, amyotrophic lateral sclerosis and some forms of Charcot-Marie-Tooth disease. In this review, we will discuss the link between Rab proteins and neurotrophin receptor trafficking and their influence on downstream signaling pathways. PMID:25295627

  19. [Neuroscientific basics of addiction].

    PubMed

    Johann-Ridinger, Monika

    2014-10-01

    Growing neuroscientific evidence is leading to a better understanding of the cerebral processes involved in acute and chronic intake of psychotropic substances (ps). Structures of the "reward system" contribute predominantly to the development of addiction. Chronic consumption of ps changes the brain's equilibrium and leads to adaptations in brain architecture. This article presents the complex responses of neurons and neuronal networks to chronic intake of ps. The alterations affect cognitive, emotional and behavioral processing and influence learning and stress regulation. In summary, these cerebral adaptations are integrated into a complex model of biological, psychological and social factors, so that addiction arises from the combination of individual protective and risk factors.

  20. Three-dimensional neural cultures produce networks that mimic native brain activity.

    PubMed

    Bourke, Justin L; Quigley, Anita F; Duchi, Serena; O'Connell, Cathal D; Crook, Jeremy M; Wallace, Gordon G; Cook, Mark J; Kapsa, Robert M I

    2018-02-01

    Development of brain function is critically dependent on neuronal networks organized through three dimensions. Culture of central nervous system neurons has traditionally been limited to two dimensions, restricting growth patterns and network formation to a single plane. Here, with the use of multichannel extracellular microelectrode arrays, we demonstrate that neurons cultured in a true three-dimensional environment recapitulate native neuronal network formation and produce functional outcomes more akin to in vivo neuronal network activity. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Neural coding in graphs of bidirectional associative memories.

    PubMed

    Bouchain, A David; Palm, Günther

    2012-01-24

    In the last years we have developed large neural network models for the realization of complex cognitive tasks in a neural network architecture that resembles the network of the cerebral cortex. We have used networks of several cortical modules that contain two populations of neurons (one excitatory, one inhibitory). The excitatory populations in these so-called "cortical networks" are organized as a graph of Bidirectional Associative Memories (BAMs), where edges of the graph correspond to BAMs connecting two neural modules and nodes of the graph correspond to excitatory populations with associative feedback connections (and inhibitory interneurons). The neural code in each of these modules consists essentially of the firing pattern of the excitatory population, where mainly it is the subset of active neurons that codes the contents to be represented. The overall activity can be used to distinguish different properties of the patterns that are represented which we need to distinguish and control when performing complex tasks like language understanding with these cortical networks. The most important pattern properties or situations are: exactly fitting or matching input, incomplete information or partially matching pattern, superposition of several patterns, conflicting information, and new information that is to be learned. We show simple simulations of these situations in one area or module and discuss how to distinguish these situations based on the overall internal activation of the module. This article is part of a Special Issue entitled "Neural Coding". Copyright © 2011 Elsevier B.V. All rights reserved.
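
    For illustration only, the following minimal sketch (not the authors' cortical model) implements a single Bidirectional Associative Memory edge between two modules: bipolar patterns are stored with Hebbian outer-product weights and recalled bidirectionally from a partially matching cue. All sizes, patterns and the noise level are invented.

        import numpy as np

        # Minimal sketch of one Bidirectional Associative Memory (BAM) edge between two
        # modules, illustrating the graph-of-BAMs idea; this is not the authors' model.
        # Patterns are bipolar (+1/-1) vectors.
        rng = np.random.default_rng(0)
        n_a, n_b, n_pairs = 64, 48, 5
        A = rng.choice([-1, 1], size=(n_pairs, n_a))   # patterns in module A
        B = rng.choice([-1, 1], size=(n_pairs, n_b))   # associated patterns in module B

        # Hebbian outer-product storage of the pattern pairs.
        W = A.T @ B                                    # weights A -> B (B -> A via W.T)

        def recall(a, steps=5):
            """Iterate A<->B updates until the pair stabilizes (bidirectional recall)."""
            b = np.sign(a @ W)
            for _ in range(steps):
                a = np.sign(b @ W.T)
                b = np.sign(a @ W)
            return a, b

        # Cue with a noisy version of a stored A-pattern (partially matching input).
        cue = A[0].copy()
        flip = rng.choice(n_a, size=8, replace=False)
        cue[flip] *= -1
        a_rec, b_rec = recall(cue)
        print("overlap with stored pair:", (a_rec @ A[0]) / n_a, (b_rec @ B[0]) / n_b)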

  2. Chaotic, informational and synchronous behaviour of multiplex networks

    NASA Astrophysics Data System (ADS)

    Baptista, M. S.; Szmoski, R. M.; Pereira, R. F.; Pinto, S. E. De Souza

    2016-03-01

    The understanding of the relationship between topology and behaviour in interconnected networks would allow us to characterise and predict behaviour in many real complex networks, since both are usually not simultaneously known. Most previous studies have focused on the relationship between topology and synchronisation. In this work, we provide analytical formulas that show how topology drives complex behaviour: chaos, information, and weak or strong synchronisation, in multiplex networks with constant Jacobian. We also study this relationship numerically in multiplex networks of Hindmarsh-Rose neurons. Whereas behaviour in the analytically tractable network is a direct but not trivial consequence of the spectra of eigenvalues of the Laplacian matrix, where behaviour may strongly depend on the breaking of symmetry in the topology of interconnections, in Hindmarsh-Rose neural networks the nonlinear nature of the chemical synapses breaks the elegant mathematical connection between the spectra of eigenvalues of the Laplacian matrix and the behaviour of the network, creating networks whose behaviour strongly depends on the nature (chemical or electrical) of the inter-layer synapses.
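
    The link between topology and spectrum that the analytical part of the paper exploits can be illustrated with a small numerical sketch: build the supra-Laplacian of a hypothetical two-layer multiplex and inspect its eigenvalues. This is not the paper's derivation; the layer densities and the inter-layer coupling strength below are made up.

        import numpy as np

        # Sketch: spectrum of the supra-Laplacian of a two-layer multiplex network.
        # This only illustrates the "topology -> spectrum" link, not the paper's formulas.
        rng = np.random.default_rng(1)
        n = 20                                  # nodes per layer
        p1, p2, d = 0.2, 0.4, 1.0               # layer densities and inter-layer coupling

        def random_laplacian(n, p):
            a = (rng.random((n, n)) < p).astype(float)
            a = np.triu(a, 1); a = a + a.T      # undirected, no self-loops
            return np.diag(a.sum(1)) - a

        L1, L2 = random_laplacian(n, p1), random_laplacian(n, p2)
        I = np.eye(n)
        # Supra-Laplacian: intra-layer Laplacians on the diagonal, coupling off-diagonal.
        supra = np.block([[L1 + d * I, -d * I],
                          [-d * I,     L2 + d * I]])
        eig = np.sort(np.linalg.eigvalsh(supra))
        print("second-smallest eigenvalue (algebraic connectivity):", eig[1])
        print("largest eigenvalue:", eig[-1])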

  3. VEGF Triggers the Activation of Cofilin and the Arp2/3 Complex within the Growth Cone

    PubMed Central

    Schlau, Matthias; Terheyden-Keighley, Daniel; Theis, Verena; Mannherz, Hans Georg; Theiss, Carsten

    2018-01-01

    A crucial neuronal structure for the development and regeneration of neuronal networks is the axonal growth cone. Affected by different guidance cues, it grows in a predetermined direction to reach its final destination. One of those cues is the vascular endothelial growth factor (VEGF), which was identified as a positive effector for growth cone movement. These positive effects are mainly mediated by a reorganization of the actin network. This study shows that VEGF triggers a tight colocalization of cofilin and the Arp2/3 complex to the actin cytoskeleton within chicken dorsal root ganglia (DRG). Live cell imaging after microinjection of GFP (green fluorescent protein)-cofilin and RFP (red fluorescent protein)-LifeAct revealed that both labeled proteins rapidly redistributed within growth cones, and showed a congruent distribution pattern after VEGF supplementation. Disruption of signaling upstream of cofilin via blocking LIM-kinase (LIMK) activity resulted in growth cones displaying regressive growth behavior. Microinjection of GFP-p16b (a subunit of the Arp2/3 complex) and RFP-LifeAct revealed that both proteins redistributed into lamellipodia of the growth cone within minutes after VEGF stimulation. Disruption of the signaling to the Arp2/3 complex in the presence of VEGF by inhibition of N-WASP (neuronal Wiskott–Aldrich–Scott protein) caused retraction of growth cones. Hence, cofilin and the Arp2/3 complex appear to be downstream effector proteins of VEGF signaling to the actin cytoskeleton of DRG growth cones. Our data suggest that VEGF simultaneously affects different pathways for signaling to the actin cytoskeleton, since activation of cofilin occurs via inhibition of LIMK, whereas activation of Arp2/3 is achieved by stimulation of N-WASP. PMID:29382077

  4. Coexistence of intermittencies in the neuronal network of the epileptic brain

    NASA Astrophysics Data System (ADS)

    Koronovskii, Alexey A.; Hramov, Alexander E.; Grubov, Vadim V.; Moskalenko, Olga I.; Sitnikova, Evgenia; Pavlov, Alexey N.

    2016-03-01

    Intermittent behavior occurs widely in nature. Several types of intermittency are known and well studied, but consideration of intermittency has usually been limited to cases in which only one type of intermittency takes place. In this paper, we report on the temporal behavior of the complex neuronal network in the epileptic brain when two types of intermittent behavior coexist and alternate with each other. We demonstrate the presence of this phenomenon in physiological experiments with WAG/Rij rats, a model living system of absence epilepsy. The theoretical law we deduce for the distribution of laminar phase lengths, a power law with exponent -2, agrees well with the experimental neurophysiological data.
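
    A hedged sketch of the statistical check implied by the last sentence: given laminar-phase lengths (synthetic here, standing in for the EEG-derived data), estimate the power-law exponent with the standard maximum-likelihood estimator and compare it with -2.

        import numpy as np

        # Sketch: check whether laminar-phase lengths follow a power law with exponent
        # close to -2, using the continuous maximum-likelihood estimator
        # alpha_hat = 1 + n / sum(ln(x / x_min)). The "data" here are synthetic.
        rng = np.random.default_rng(2)
        x_min, alpha_true, n = 0.1, 2.0, 5000

        # Inverse-transform sampling from p(x) ~ x**(-alpha) for x >= x_min.
        u = rng.random(n)
        lengths = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

        alpha_hat = 1.0 + n / np.sum(np.log(lengths / x_min))
        print(f"estimated exponent: -{alpha_hat:.3f} (expected about -2)")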

  5. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  6. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out a more efficient stochastic search for good solutions to the Traveling Salesman Problem than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
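
    A rough sketch of the general principle (noise-driven stochastic search over an energy function), not the authors' spiking-network construction: each variable of a small graph-coloring problem is represented by a winner-take-all group, and states are resampled from a Boltzmann distribution over the local conflict energy. Problem size, temperature and edge density are invented.

        import numpy as np

        # Noise-driven search for a small constraint satisfaction problem (graph coloring).
        rng = np.random.default_rng(3)
        n_nodes, n_colors = 12, 3
        edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
                 if rng.random() < 0.3]
        colors = rng.integers(n_colors, size=n_nodes)

        def conflicts(c):
            return sum(c[i] == c[j] for i, j in edges)

        T = 0.5                                    # "noise" level of the stochastic units
        for step in range(3000):
            i = rng.integers(n_nodes)
            neigh = [j for a, b in edges
                     for j in ((b,) if a == i else (a,) if b == i else ())]
            # Local energy of each candidate color = number of conflicting neighbours.
            energy = np.array([sum(colors[j] == c for j in neigh) for c in range(n_colors)])
            p = np.exp(-energy / T); p /= p.sum()  # Boltzmann / soft winner-take-all choice
            colors[i] = rng.choice(n_colors, p=p)
            if conflicts(colors) == 0:
                print("valid coloring found at step", step)
                break
        print("remaining conflicts:", conflicts(colors))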

  7. Flexible timing by temporal scaling of cortical responses

    PubMed Central

    Wang, Jing; Narain, Devika; Hosseini, Eghbal A.; Jazayeri, Mehrdad

    2017-01-01

    Musicians can perform at different tempos, speakers can control the cadence of their speech, and children can flexibly vary their temporal expectations of events. To understand the neural basis of such flexibility, we recorded from the medial frontal cortex of nonhuman primates trained to produce different time intervals with different effectors. Neural responses were heterogeneous, nonlinear and complex, and exhibited a remarkable form of temporal invariance: firing rate profiles were temporally scaled to match the produced intervals. Recording from downstream neurons in the caudate and thalamic neurons projecting to the medial frontal cortex indicated that this phenomenon originates within cortical networks. Recurrent neural network models trained to perform the task revealed that temporal scaling emerges from nonlinearities in the network and degree of scaling is controlled by the strength of external input. These findings demonstrate a simple and general mechanism for conferring temporal flexibility upon sensorimotor and cognitive functions. PMID:29203897

  8. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    PubMed

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
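
    A crude sketch of the separation idea only: simulate (or load) point-neuron population spike trains first, then map them to an LFP-like signal in a separate forward-modelling step. The convolution with signed exponential kernels below is a stand-in for the multicompartment forward model implemented in hybridLFPy, not its actual API; populations, rates and kernels are made up.

        import numpy as np

        # Step 1: population spike counts (here, Poisson surrogates for network output).
        # Step 2: an LFP-like proxy built by convolving counts with synaptic kernels.
        rng = np.random.default_rng(4)
        dt, t_max = 0.1, 1000.0                        # ms
        t = np.arange(0.0, t_max, dt)
        rates = {"L4_exc": 8.0, "L4_inh": 15.0}        # spikes/s, hypothetical populations
        weights = {"L4_exc": +1.0, "L4_inh": -2.0}     # signed contribution to the proxy
        taus = {"L4_exc": 2.0, "L4_inh": 5.0}          # synaptic time constants, ms

        lfp_proxy = np.zeros_like(t)
        for pop in rates:
            # Poisson population spike count per bin (1000 neurons per population).
            counts = rng.poisson(rates[pop] * 1000 * dt * 1e-3, size=t.size)
            kernel = np.exp(-np.arange(0.0, 10 * taus[pop], dt) / taus[pop])
            lfp_proxy += weights[pop] * np.convolve(counts, kernel, mode="full")[: t.size]

        print("proxy LFP: mean %.2f, std %.2f (arbitrary units)"
              % (lfp_proxy.mean(), lfp_proxy.std()))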

  9. The new challenges of multiplex networks: Measures and models

    NASA Astrophysics Data System (ADS)

    Battiston, Federico; Nicosia, Vincenzo; Latora, Vito

    2017-02-01

    What do societies, the Internet, and the human brain have in common? They are all examples of complex relational systems, whose emerging behaviours are largely determined by the non-trivial networks of interactions among their constituents, namely individuals, computers, or neurons, rather than only by the properties of the units themselves. In the last two decades, network scientists have proposed models of increasing complexity to better understand real-world systems. Only recently we have realised that multiplexity, i.e. the coexistence of several types of interactions among the constituents of a complex system, is responsible for substantial qualitative and quantitative differences in the type and variety of behaviours that a complex system can exhibit. As a consequence, multilayer and multiplex networks have become a hot topic in complexity science. Here we provide an overview of some of the measures proposed so far to characterise the structure of multiplex networks, and a selection of models aiming at reproducing those structural properties and quantifying their statistical significance. Focusing on a subset of relevant topics, this brief review is a quite comprehensive introduction to the most basic tools for the analysis of multiplex networks observed in the real-world. The wide applicability of multiplex networks as a framework to model complex systems in different fields, from biology to social sciences, and the colloquial tone of the paper will make it an interesting read for researchers working on both theoretical and experimental analysis of networked systems.

  10. Computational model of electrically coupled, intrinsically distinct pacemaker neurons.

    PubMed

    Soto-Treviño, Cristina; Rabbah, Pascale; Marder, Eve; Nadim, Farzan

    2005-07-01

    Electrical coupling between neurons with similar properties is often studied. Nonetheless, electrical coupling between neurons with widely different intrinsic properties also occurs, but its role is less well understood. Inspired by the pacemaker group of the crustacean pyloric network, we developed a multicompartment, conductance-based model of a small network of intrinsically distinct, electrically coupled neurons. In the pyloric network, a small intrinsically bursting neuron, through gap junctions, drives 2 larger, tonically spiking neurons to reliably burst in-phase with it. Each model neuron has 2 compartments, one responsible for spike generation and the other for producing a slow, large-amplitude oscillation. We illustrate how these compartments interact and determine the dynamics of the model neurons. Our model captures the dynamic oscillation range measured from the isolated and coupled biological neurons. At the network level, we explore the range of coupling strengths for which synchronous bursting oscillations are possible. The spatial segregation of ionic currents significantly enhances the ability of the 2 neurons to burst synchronously, and the oscillation range of the model pacemaker network depends not only on the strength of the electrical synapse but also on the identity of the neuron receiving inputs. We also compare the activity of the electrically coupled, distinct neurons with that of a network of coupled identical bursting neurons. For small to moderate coupling strengths, the network of identical elements, when receiving asymmetrical inputs, can have a smaller dynamic range of oscillation than that of its constituent neurons in isolation.

  11. Mind Operational Semantics and Brain Operational Architectonics: A Putative Correspondence

    PubMed Central

    Benedetti, Giulio; Marchetti, Giorgio; Fingelkurts, Alexander A; Fingelkurts, Andrew A

    2010-01-01

    Despite allowing for the unprecedented visualization of brain functional activity, modern neurobiological techniques have not yet been able to provide satisfactory answers to important questions about the relationship between brain and mind. The aim of this paper is to show how two different but complementary approaches, Mind Operational Semantics (OS) and Brain Operational Architectonics (OA), can help bridge the gap between a specific kind of mental activity—the higher-order reflective thought or linguistic thought—and the brain. The fundamental notion that allows the two approaches to be jointly used under a common framework is that of operation. According to OS, which is based on introspection and linguistic data, the meanings of words can be analyzed in terms of elemental mental operations (EOMC), amongst which those of attention play a key role. Linguistic thought is made possible by special kinds of elements, which OS calls "correlators", whose function is to tie together the other elements of thought, which OS calls "correlata" (thus forming a "correlational network", that is, a sentence). Therefore, OS conceives of linguistic thought as a hierarchy of operations of increasing complexity. Likewise, according to OA, which is based on the joint analysis of cognitive and electromagnetic data (EEG and MEG), every conscious phenomenon is brought into existence by the joint operations of many functional and transient neuronal assemblies in the brain. According to OA, the functioning of the brain is always operational (made up of operations), and its structure is characterized by a hierarchy of operations of increasing complexity: single neurons, single assemblies of neurons, synchronized neuronal assemblies or Operational Modules (OM), and integrated or complex OMs. The authors put forward the hypothesis that the whole level of OS's description (EOMC, correlators, and correlational networks) corresponds to the level of OMs (or sets of them) of different complexity within OA's theory: EOMC could correspond to simple OMs, correlators to complex OMs, and the correlational network to a set of simple and complex OMs. Finally, a set of experiments is proposed to verify the putative correspondence between OS and OA and prove the existence of an integrated continuum between brain and mind. PMID:21113277

  12. Shaping Neuronal Network Activity by Presynaptic Mechanisms

    PubMed Central

    Ashery, Uri

    2015-01-01

    Neuronal microcircuits generate oscillatory activity, which has been linked to basic functions such as sleep, learning and sensorimotor gating. Although synaptic release processes are well known for their ability to shape the interaction between neurons in microcircuits, most computational models do not simulate the synaptic transmission process directly and hence cannot explain how changes in synaptic parameters alter neuronal network activity. In this paper, we present a novel neuronal network model that incorporates presynaptic release mechanisms, such as vesicle pool dynamics and calcium-dependent release probability, to model the spontaneous activity of neuronal networks. The model, which is based on modified leaky integrate-and-fire neurons, generates spontaneous network activity patterns, which are similar to experimental data and robust under changes in the model's primary gain parameters such as excitatory postsynaptic potential and connectivity ratio. Furthermore, it reliably recreates experimental findings and provides mechanistic explanations for data obtained from microelectrode array recordings, such as network burst termination and the effects of pharmacological and genetic manipulations. The model demonstrates how elevated asynchronous release, but not spontaneous release, synchronizes neuronal network activity and reveals that asynchronous release enhances utilization of the recycling vesicle pool to induce the network effect. The model further predicts a positive correlation between vesicle priming at the single-neuron level and burst frequency at the network level; this prediction is supported by experimental findings. Thus, the model is utilized to reveal how synaptic release processes at the neuronal level govern activity patterns and synchronization at the network level. PMID:26372048
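
    A minimal sketch of the presynaptic mechanisms the model builds on, far simpler than the published network model: a leaky integrate-and-fire neuron driven by one synapse with a finite recycling vesicle pool and a facilitating, calcium-like release probability. All parameter values are invented.

        import numpy as np

        # LIF neuron driven by a single synapse with vesicle pool dynamics and
        # an activity-dependent release probability (a toy stand-in for the model's
        # presynaptic release mechanisms).
        rng = np.random.default_rng(5)
        dt, t_max = 0.1, 2000.0                       # ms
        tau_m, v_th, v_reset = 20.0, 1.0, 0.0         # membrane constant, threshold, reset
        pool, pool_max, tau_rec = 10.0, 10.0, 800.0   # vesicles and recovery time constant
        p_rel, p0, tau_fac = 0.1, 0.1, 200.0          # release probability with facilitation
        w = 0.15                                      # EPSP per released vesicle
        v, spikes = 0.0, []

        for step in range(int(t_max / dt)):
            # Presynaptic Poisson spike train at 20 Hz.
            if rng.random() < 20.0 * dt * 1e-3:
                released = rng.binomial(int(pool), p_rel)    # stochastic vesicle release
                pool -= released
                p_rel = min(1.0, p_rel + 0.1 * (1 - p_rel))  # calcium-like facilitation
                v += w * released
            # Vesicle pool refilling and facilitation decay.
            pool += dt * (pool_max - pool) / tau_rec
            p_rel += dt * (p0 - p_rel) / tau_fac
            # Leaky integrate-and-fire membrane dynamics.
            v += dt * (-v) / tau_m
            if v >= v_th:
                spikes.append(step * dt)
                v = v_reset

        print("postsynaptic spikes:", len(spikes))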

  13. Connexin-Dependent Neuroglial Networking as a New Therapeutic Target.

    PubMed

    Charvériat, Mathieu; Naus, Christian C; Leybaert, Luc; Sáez, Juan C; Giaume, Christian

    2017-01-01

    Astrocytes and neurons dynamically interact during physiological processes, and it is now widely accepted that they are both organized in plastic and tightly regulated networks. Astrocytes are connected through connexin-based gap junction channels, with brain region specificities, and those networks modulate neuronal activities, such as those involved in the sleep-wake cycle and in cognitive or sensory functions. Additionally, astrocyte domains have been involved in neurogenesis and neuronal differentiation during development; they participate in the "tripartite synapse" with both pre-synaptic and post-synaptic neurons by tuning neuronal activities up or down through the control of neuronal synaptic strength. Connexin-based hemichannels are also involved in these regulations of neuronal activities; however, this feature will not be considered in the present review. Furthermore, neuronal processes, transmitting electrical signals to chemical synapses, stringently control astroglial connexin expression and channel functions. Long-range energy trafficking toward neurons through connexin-coupled astrocytes and the plasticity of those networks are hence largely dependent on neuronal activity. Such reciprocal interactions between neurons and astrocyte networks involve neurotransmitters, cytokines, endogenous lipids, and peptides released by neurons but also by other brain cell types, including microglial and endothelial cells. Over the past 10 years, knowledge about neuroglial interactions has widened and now includes the effects of CNS-targeting drugs such as antidepressants, antipsychotics, psychostimulants, or sedative drugs as potential modulators of connexin function and thus of astrocyte networking activity. In physiological situations, neuroglial networking consequently results from a two-way interaction between astrocyte gap junction-mediated networks and those made by neurons. As both cell types are modulated by CNS drugs, we postulate that neuroglial networking may emerge as a new therapeutic target in neurological and psychiatric disorders.

  14. Inhibitory Network Interactions Shape the Auditory Processing of Natural Communication Signals in the Songbird Auditory Forebrain

    PubMed Central

    Pinaud, Raphael; Terleph, Thomas A.; Tremere, Liisa A.; Phan, Mimi L.; Dagostin, André A.; Leão, Ricardo M.; Mello, Claudio V.; Vicario, David S.

    2008-01-01

    The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABAA-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABAA-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABAA receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABAA-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks. PMID:18480371

  15. Metabotropic glutamate receptors activate dendritic calcium waves and TRPM channels which drive rhythmic respiratory patterns in mice

    PubMed Central

    Mironov, S L

    2008-01-01

    Respiration in vertebrates is generated by a compact network located in the lower brainstem, but the cellular mechanisms that underlie the persistent oscillatory activity of the respiratory network are as yet unknown. Using two-photon imaging and patch-clamp recordings in functional brainstem preparations of mice containing the pre-Bötzinger complex (preBötC), we examined the actions of metabotropic glutamate receptors (mGluR1/5) on the respiratory patterns. The agonist DHPG potentiated and the antagonist LY367385 depressed respiration-related activities. In the inspiratory neurons, we observed rhythmic activation of non-selective channels which had a conductance of 24 pS. Their activity was enhanced with membrane depolarization and after elevation of calcium from the cytoplasmic side of the membrane. They were activated by a non-hydrolysable PIP2 analogue and blocked by flufenamate, ATP4− and Gd3+. All these properties correspond well to those of TRPM4 channels. Calcium imaging of functional slices revealed rhythmic transients in small clusters of neurons present in a network. Calcium transients in the soma were preceded by waves in the dendrites that were dependent on mGluR activation. Initiation and propagation of the waves required calcium influx and calcium release from internal stores. Calcium waves activated TRPM4-like channels in the soma and promoted the generation of inspiratory bursts. Simulations of the activity of neurons communicating via dendritic calcium waves showed emerging activity within neuronal clusters and its synchronization between the clusters. The experimental and theoretical data provide a subcellular basis for a recently proposed group-pacemaker hypothesis and describe a novel mechanism of rhythm generation in neuronal networks. PMID:18308826

  16. Exercise-induced neuronal plasticity in central autonomic networks: role in cardiovascular control.

    PubMed

    Michelini, Lisete C; Stern, Javier E

    2009-09-01

    It is now well established that brain plasticity is an inherent property not only of the developing but also of the adult brain. Numerous beneficial effects of exercise, including improved memory, cognitive function and neuroprotection, have been shown to involve an important neuroplastic component. However, whether major adaptive cardiovascular adjustments during exercise, needed to ensure proper blood perfusion of peripheral tissues, also require brain neuroplasticity, is presently unknown. This review will critically evaluate current knowledge on proposed mechanisms that are likely to underlie the continuous resetting of baroreflex control of heart rate during/after exercise and following exercise training. Accumulating evidence indicates that not only somatosensory afferents (conveyed by skeletal muscle receptors, baroreceptors and/or cardiopulmonary receptors) but also projections arising from central command neurons (in particular, peptidergic hypothalamic pre-autonomic neurons) converge into the nucleus tractus solitarii (NTS) in the dorsal brainstem, to co-ordinate complex cardiovascular adaptations during dynamic exercise. This review focuses in particular on a reciprocally interconnected network between the NTS and the hypothalamic paraventricular nucleus (PVN), which is proposed to act as a pivotal anatomical and functional substrate underlying integrative feedforward and feedback cardiovascular adjustments during exercise. Recent findings supporting neuroplastic adaptive changes within the NTS-PVN reciprocal network (e.g. remodelling of afferent inputs, structural and functional neuronal plasticity and changes in neurotransmitter content) will be discussed within the context of their role as important underlying cellular mechanisms supporting the tonic activation and improved efficacy of these central pathways in response to circulatory demand at rest and during exercise, both in sedentary and in trained individuals. We hope this review will stimulate more comprehensive studies aimed at understanding cellular and molecular mechanisms within CNS neuronal networks that contribute to exercise-induced neuroplasticity and cardiovascular adjustments.

  17. Constrained synaptic connectivity in functional mammalian neuronal networks grown on patterned surfaces.

    PubMed

    Wyart, Claire; Ybert, Christophe; Bourdieu, Laurent; Herr, Catherine; Prinz, Christelle; Chatenay, Didier

    2002-06-30

    The use of ordered neuronal networks in vitro is a promising approach to study the development and the activity of small neuronal assemblies. However, in previous attempts, sufficient growth control and physiological maturation of neurons could not be achieved. Here we describe an original protocol in which polylysine patterns confine the adhesion of cellular bodies to prescribed spots and the neuritic growth to thin lines. Hippocampal neurons in these networks are maintained healthy in serum free medium up to 5 weeks in vitro. Electrophysiology and immunochemistry show that neurons exhibit mature excitatory and inhibitory synapses and calcium imaging reveals spontaneous activity of neurons in isolated networks. We demonstrate that neurons in these geometrical networks form functional synapses preferentially to their first neighbors. We have, therefore, established a simple and robust protocol to constrain both the location of neuronal cell bodies and their pattern of connectivity. Moreover, the long term maintenance of the geometry and the physiology of the networks raises the possibility of new applications for systematic screening of pharmacological agents and for electronic to neuron devices.

  18. A PDF/NPF neuropeptide signaling circuitry of male Drosophila melanogaster controls rival-induced prolonged mating.

    PubMed

    Kim, Woo Jae; Jan, Lily Yeh; Jan, Yuh Nung

    2013-12-04

    A primary function of males for many species involves mating with females for reproduction. Drosophila melanogaster males respond to the presence of other males by prolonging mating duration to increase the chance of passing on their genes. To understand the basis of such complex behaviors, we examine the genetic network and neural circuits that regulate rival-induced Longer-Mating-Duration (LMD). Here, we identify a small subset of clock neurons in the male brain that regulate LMD via neuropeptide signaling. LMD requires the function of pigment-dispersing factor (PDF) in four s-LNv neurons and its receptor PDFR in two LNd neurons per hemisphere, as well as the function of neuropeptide F (NPF) in two neurons within the sexually dimorphic LNd region and its receptor NPFR1 in four s-LNv neurons per hemisphere. Moreover, rival exposure modifies the neuronal activities of a subset of clock neurons involved in neuropeptide signaling for LMD. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. A PDF/NPF neuropeptide signaling circuitry of male Drosophila melanogaster controls rival-induced prolonged mating

    PubMed Central

    Kim, Woo Jae; Jan, Lily Yeh; Jan, Yuh Nung

    2013-01-01

    SUMMARY A primary function of males for many species involves mating with females for reproduction. Drosophila melanogaster males respond to the presence of other males by prolonging mating duration to increase the chance of passing on their genes. To understand the basis of such complex behaviors, we examine the genetic network and neural circuits that regulate rival-induced longer mating duration (LMD). Here we identify a small subset of clock neurons in the male brain that regulate LMD via neuropeptide signaling. LMD requires the function of pigment-dispersing factor (PDF) in four s-LNv neurons and its receptor PDFR in two LNd neurons per hemisphere, as well as the function of neuropeptide F (NPF) in two neurons within the sexually dimorphic LNd region and its receptor NPFR1 in four s-LNv neurons per hemisphere. Moreover, rival exposure modifies the neuronal activities of a subset of clock neurons involved in neuropeptide signaling for LMD. PMID:24314729

  20. The neural dynamics of song syntax in songbirds

    NASA Astrophysics Data System (ADS)

    Jin, Dezhe

    2010-03-01

    The songbird is "the hydrogen atom" of the neuroscience of complex, learned vocalizations such as human speech. Songs of the Bengalese finch consist of sequences of syllables. While syllables are temporally stereotypical, syllable sequences can vary and follow complex, probabilistic syntactic rules, which are rudimentarily similar to grammars in human language. The songbird brain is accessible to experimental probes, and is understood well enough to construct biologically constrained, predictive computational models. In this talk, I will discuss the structure and dynamics of the neural networks underlying the stereotypy of birdsong syllables and the flexibility of syllable sequences. Recent experiments and computational models suggest that a syllable is encoded in a chain network of projection neurons in the premotor nucleus HVC (proper name). Precisely timed spikes propagate along the chain, driving vocalization of the syllable through downstream nuclei. Through a computational model, I show that variable syllable sequences can be generated through spike propagation in a network in HVC in which the syllable-encoding chain networks are connected into a branching chain pattern. The neurons inhibit each other through the inhibitory HVC interneurons, and are driven by external inputs from nuclei upstream of HVC. At a branching point that connects the final group of one chain to the first groups of several chains, the spike activity selects one branch to continue the propagation. The selection is probabilistic, and is due to a winner-take-all mechanism mediated by inhibition and noise. The model predicts that the syllable sequences statistically follow partially observable Markov models. Experimental results supporting this and other predictions of the model will be presented. We suggest that the syntax of birdsong syllable sequences is embedded in the connection patterns of HVC projection neurons.
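
    A toy sketch of the behavioural prediction (not the HVC network model itself): syllable sequences produced by probabilistic winner-take-all branching between syllable-encoding chains behave like a Markov chain over syllables. The syllable set and transition probabilities below are invented.

        import numpy as np

        # Markov-chain generator of syllable sequences, standing in for probabilistic
        # branching between syllable-encoding chains.
        rng = np.random.default_rng(6)
        syllables = ["a", "b", "c", "end"]
        P = np.array([[0.0, 0.7, 0.3, 0.0],    # after "a": branch to "b" or "c"
                      [0.5, 0.0, 0.3, 0.2],    # after "b"
                      [0.0, 0.6, 0.0, 0.4],    # after "c"
                      [0.0, 0.0, 0.0, 1.0]])   # "end" is absorbing

        def sing(start=0, max_len=30):
            seq, state = [syllables[start]], start
            while syllables[state] != "end" and len(seq) < max_len:
                # Winner-take-all branch selection, reduced here to a categorical draw.
                state = rng.choice(len(syllables), p=P[state])
                seq.append(syllables[state])
            return seq

        for _ in range(3):
            print(" ".join(sing()))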

  1. Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum.

    PubMed

    Ponzi, Adam; Wickens, Jeff

    2010-04-28

    The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.

  2. Simplicity and efficiency of integrate-and-fire neuron models.

    PubMed

    Plesser, Hans E; Diesmann, Markus

    2009-02-01

    Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
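
    A minimal sketch of the "exact integration" technique referred to here: for a leaky integrate-and-fire neuron with exponential synaptic currents, the linear subthreshold dynamics can be advanced by a fixed propagator matrix each time step, with no summation over the history of input spikes. Parameter values are illustrative only.

        import numpy as np

        # Exact subthreshold update for state x = (I_syn, V) with
        # dI/dt = -I/tau_s,  dV/dt = -V/tau_m + I/C, advanced by exp(A*dt) each step.
        tau_m, tau_s, C, dt = 20.0, 2.0, 250.0, 0.1   # ms, ms, pF, ms (made-up values)
        v_th, v_reset = 15.0, 0.0                     # mV

        e_s, e_m = np.exp(-dt / tau_s), np.exp(-dt / tau_m)
        p21 = (tau_s * tau_m) / (C * (tau_s - tau_m)) * (e_s - e_m)
        P = np.array([[e_s, 0.0],
                      [p21, e_m]])                    # propagator matrix exp(A*dt)

        rng = np.random.default_rng(7)
        x = np.zeros(2)                               # (I_syn in pA, V in mV)
        n_spikes = 0
        for step in range(int(1000.0 / dt)):          # 1 s of activity
            x = P @ x                                 # exact subthreshold update
            if rng.random() < 0.2:                    # incoming spike in this time step?
                x[0] += 60.0                          # synaptic weight in pA
            if x[1] >= v_th:                          # threshold crossing
                x[1] = v_reset
                n_spikes += 1
        print("output spikes in 1 s:", n_spikes)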

  3. Eye evolution at high resolution: the neuron as a unit of homology.

    PubMed

    Erclik, Ted; Hartenstein, Volker; McInnes, Roderick R; Lipshitz, Howard D

    2009-08-01

    Based on differences in morphology, photoreceptor-type usage and lens composition it has been proposed that complex eyes have evolved independently many times. The remarkable observation that different eye types rely on a conserved network of genes (including Pax6/eyeless) for their formation has led to the revised proposal that disparate complex eye types have evolved from a shared and simpler prototype. Did this ancestral eye already contain the neural circuitry required for image processing? And what were the evolutionary events that led to the formation of complex visual systems, such as those found in vertebrates and insects? The recent identification of unexpected cell-type homologies between neurons in the vertebrate and Drosophila visual systems has led to two proposed models for the evolution of complex visual systems from a simple prototype. The first, as an extension of the finding that the neurons of the vertebrate retina share homologies with both insect (rhabdomeric) and vertebrate (ciliary) photoreceptor cell types, suggests that the vertebrate retina is a composite structure, made up of neurons that have evolved from two spatially separate ancestral photoreceptor populations. The second model, based largely on the conserved role for the Vsx homeobox genes in photoreceptor-target neuron development, suggests that the last common ancestor of vertebrates and flies already possessed a relatively sophisticated visual system that contained a mixture of rhabdomeric and ciliary photoreceptors as well as their first- and second-order target neurons. The vertebrate retina and fly visual system would have subsequently evolved by elaborating on this ancestral neural circuit. Here we present evidence for these two cell-type homology-based models and discuss their implications.

  4. An evaluation of Bayesian techniques for controlling model complexity and selecting inputs in a neural network for short-term load forecasting.

    PubMed

    Hippert, Henrique S; Taylor, James W

    2010-04-01

    Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation. Copyright 2009 Elsevier Ltd. All rights reserved.
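
    A sketch of the cross-validation baseline the Bayesian techniques are compared against (not the automatic relevance determination or evidence machinery itself): choose the number of hidden neurones by k-fold cross-validation on synthetic data standing in for load and weather inputs.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        # Select the hidden-layer size of a small feedforward network by 5-fold
        # cross-validation; the regression data below are invented surrogates.
        rng = np.random.default_rng(11)
        X = rng.standard_normal((500, 6))             # e.g. lagged load + weather inputs
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(500)

        for n_hidden in (2, 5, 10, 20):
            model = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                                 random_state=0)
            score = cross_val_score(model, X, y, cv=5,
                                    scoring="neg_mean_squared_error").mean()
            print(f"{n_hidden:2d} hidden neurones: CV MSE = {-score:.3f}")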

  5. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    PubMed

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.

  6. Population coding in sparsely connected networks of noisy neurons.

    PubMed

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
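
    A sketch of the kind of connectivity explored in the study: neurons on a two-dimensional sheet connected at random with a probability that falls off with distance. The Gaussian profile and all parameter values are illustrative; as the abstract notes, such purely probabilistic instantiations were not sufficient for high-fidelity coding.

        import numpy as np

        # Instantiate sparse, primarily local connectivity on a 2D sheet of neurons
        # using a distance-dependent (Gaussian) connection probability.
        rng = np.random.default_rng(8)
        n_side = 30
        coords = np.array([(x, y) for x in range(n_side) for y in range(n_side)], float)
        n = len(coords)
        sigma, p_max = 3.0, 0.6                    # connection footprint and peak probability

        d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
        p_conn = p_max * np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(p_conn, 0.0)              # no self-connections
        A = (rng.random((n, n)) < p_conn).astype(int)

        print("neurons:", n, " mean out-degree:", A.sum(1).mean())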

  7. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    PubMed

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation. Copyright © 2015 the authors 0270-6474/15/3510112-23$15.00/0.
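
    A rough, hedged sketch of the error-driven firing principle in its simplest instantaneous-synapse form (not the biophysical extension this paper develops): a readout tracks a one-dimensional signal, and a neuron spikes only when its spike would reduce the readout error. Weights, signal and time constants are invented.

        import numpy as np

        # Readout x_hat decays and jumps by w_i when neuron i spikes; neuron i fires
        # only when w_i * (x - x_hat) exceeds w_i**2 / 2, i.e. when its spike would
        # reduce the squared readout error.
        rng = np.random.default_rng(12)
        dt, tau = 0.1, 20.0                        # ms
        t = np.arange(0, 500.0, dt)
        x = np.sin(2 * np.pi * t / 250.0)          # signal to encode
        w = np.concatenate([np.full(10, 0.1), np.full(10, -0.1)])   # decoding weights
        x_hat, err_hist = 0.0, []
        spike_count = np.zeros_like(w)

        for xt in x:
            x_hat += dt * (-x_hat) / tau           # leaky readout decay
            err = xt - x_hat
            v = w * err                            # "membrane potentials"
            i = np.argmax(v - w ** 2 / 2)          # neuron whose spike would help most
            if v[i] > w[i] ** 2 / 2:               # fire only if it reduces the error
                x_hat += w[i]
                spike_count[i] += 1
            err_hist.append(err)

        print("mean |error|: %.3f, total spikes: %d"
              % (np.mean(np.abs(err_hist)), spike_count.sum()))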

  8. Choline-mediated modulation of hippocampal sharp wave-ripple complexes in vitro.

    PubMed

    Fischer, Viktoria; Both, Martin; Draguhn, Andreas; Egorov, Alexei V

    2014-06-01

    The cholinergic system is critically involved in the modulation of cognitive functions, including learning and memory. Acetylcholine acts through muscarinic (mAChRs) and nicotinic receptors (nAChRs), which are both abundantly expressed in the hippocampus. Previous evidence indicates that choline, the precursor and degradation product of Acetylcholine, can itself activate nAChRs and thereby affects intrinsic and synaptic neuronal functions. Here, we asked whether the cellular actions of choline directly affect hippocampal network activity. Using mouse hippocampal slices we found that choline efficiently suppresses spontaneously occurring sharp wave-ripple complexes (SPW-R) and can induce gamma oscillations. In addition, choline reduces synaptic transmission between hippocampal subfields CA3 and CA1. Surprisingly, these effects are mediated by activation of both mAChRs and α7-containing nAChRs. Most nicotinic effects became only apparent after local, fast application of choline, indicating rapid desensitization kinetics of nAChRs. Effects were still present following block of choline uptake and are, therefore, likely because of direct actions of choline at the respective receptors. Together, choline turns out to be a potent regulator of patterned network activity within the hippocampus. These actions may be of importance for understanding state transitions in normal and pathologically altered neuronal networks. In this study we asked whether choline, the precursor and degradation product of acetylcholine, directly affects hippocampal network activity. Using mouse hippocampal slices we found that choline efficiently suppresses spontaneously occurring sharp wave-ripple complexes (SPW-R). In addition, choline reduces synaptic transmission between hippocampal subfields. These effects are mediated by direct activation of muscarinic as well as nicotinic cholinergic pathways. Together, choline turns out to be a potent regulator of patterned activity within hippocampal networks. © 2014 International Society for Neurochemistry.

  9. Neural networks with local receptive fields and superlinear VC dimension.

    PubMed

    Schmitt, Michael

    2002-04-01

    Local receptive field neurons comprise such well-known and widely used unit types as radial basis function (RBF) neurons and neurons with center-surround receptive field. We study the Vapnik-Chervonenkis (VC) dimension of feedforward neural networks with one hidden layer of these units. For several variants of local receptive field neurons, we show that the VC dimension of these networks is superlinear. In particular, we establish the bound Omega(W log k) for any reasonably sized network with W parameters and k hidden nodes. This bound is shown to hold for discrete center-surround receptive field neurons, which are physiologically relevant models of cells in the mammalian visual system, for neurons computing a difference of gaussians, which are popular in computational vision, and for standard RBF neurons, a major alternative to sigmoidal neurons in artificial neural networks. The result for RBF neural networks is of particular interest since it answers a question that has been open for several years. The results also give rise to lower bounds for networks with fixed input dimension. Regarding constants, all bounds are larger than those known thus far for similar architectures with sigmoidal neurons. The superlinear lower bounds contrast with linear upper bounds for single local receptive field neurons also derived here.

  10. Results on a binding neuron model and their implications for modified hourglass model for neuronal network.

    PubMed

    Arunachalam, Viswanathan; Akhavan-Tabatabaei, Raha; Lopez, Cristina

    2013-01-01

    Classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. In a refreshing departure, Vidybida (2008) proposed models of binding neurons in which the trace of an input is remembered only for a finite, fixed period of time, after which it is forgotten. Binding neurons conform to the behaviour of real neurons and are applicable to constructing fast recurrent networks for computer modeling. This paper explicitly develops several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of these results to constructing a modified hourglass network model of interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results for the hourglass network are presented.
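
    A Monte-Carlo sketch of a binding neuron under the stated assumptions: each input impulse is remembered for a fixed time tau and then forgotten, and the neuron fires (clearing its memory) when the number of remembered impulses reaches a threshold. Input rate, memory time and threshold are arbitrary, and the statistics are estimated by simulation rather than derived analytically as in the paper.

        import numpy as np

        # Binding neuron driven by a Poisson input stream: impulses are stored for a
        # fixed time tau; the neuron fires when N0 impulses are simultaneously stored.
        rng = np.random.default_rng(9)
        tau, N0, rate = 10.0, 3, 0.15              # memory time (ms), threshold, rate (1/ms)

        def first_firing_time(max_t=10000.0):
            stored, t = [], 0.0
            while t < max_t:
                t += rng.exponential(1.0 / rate)                    # next input impulse
                stored = [s for s in stored if t - s < tau] + [t]   # forget old impulses
                if len(stored) >= N0:
                    return t                                        # binding neuron fires
            return np.nan

        samples = np.array([first_firing_time() for _ in range(5000)])
        print("mean firing time: %.1f ms, CV: %.2f"
              % (samples.mean(), samples.std() / samples.mean()))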

  11. Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity

    PubMed Central

    Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2016-01-01

    Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points. PMID:27212008

  12. Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity.

    PubMed

    Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2016-05-23

    Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points.

  13. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    PubMed

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of a multi-clustered structure in a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, is investigated. An initially all-to-all-connected spiking or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumed by this clustering procedure is much shorter for the burst-based self-organized neural network (BSON) than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more pronounced small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length, than the SSON network. The larger structure entropy and activity entropy of the BSON network also demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by the network's improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing.

  14. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, is investigated. The initially all-to-all-connected spiking neural network or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the clustering procedure takes much less time in the burst-based self-organized neural network (BSON) than in the spike-based self-organized neural network (SSON). Our results show that the BSON network has more pronounced small-world properties, i.e., a higher clustering coefficient and a shorter average shortest path length, than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by the network's improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network that self-organizes from bursting dynamics has high efficiency in information processing.

  15. Joining the dots - protein-RNA interactions mediating local mRNA translation in neurons.

    PubMed

    Gallagher, Christopher; Ramos, Andres

    2018-06-01

    Establishing and maintaining the complex network of connections required for neuronal communication requires the transport and in situ translation of large groups of mRNAs to create local proteomes. In this Review, we discuss the regulation of local mRNA translation in neurons and the RNA-binding proteins that recognise RNA zipcode elements, connect the mRNAs to the cellular transport networks, and regulate their translational control. mRNA recognition by these regulatory proteins is mediated by the combinatorial action of multiple RNA-binding domains. This increases the specificity and affinity of the interaction, while allowing the protein to recognise a diverse set of targets and mediate a range of mechanisms for translational regulation. The structural and molecular understanding of these interactions can be used together with novel microscopy and transcriptome-wide data to build a mechanistic framework for the regulation of local mRNA translation. © 2018 Federation of European Biochemical Societies.

  16. Fitting neuron models to spike trains.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
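
    As a schematic of the fitting task described here, the sketch below simulates a leaky integrate-and-fire neuron driven by an injected current and scores candidate parameters with a crude spike-coincidence count; it is a stand-in written in plain NumPy, not the Brian-based toolbox of the article, and all parameter values and the "reference" recording are placeholders.

      import numpy as np

      dt, T = 1e-4, 1.0                                       # 0.1 ms step, 1 s of "data"
      t = np.arange(0.0, T, dt)
      I = 1.5e-9 * (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t))    # toy injected current (A)

      def lif_spikes(tau, R, v_th=-50e-3, v_reset=-70e-3, v_rest=-70e-3):
          """Spike times of a leaky integrate-and-fire neuron (Euler integration)."""
          v, spikes = v_rest, []
          for k, tk in enumerate(t):
              v += dt * (-(v - v_rest) + R * I[k]) / tau
              if v >= v_th:
                  spikes.append(tk)
                  v = v_reset
          return np.array(spikes)

      def coincidences(model, reference, window=2e-3):
          """Crude score: model spikes landing within +/- window of a reference spike."""
          return sum(np.any(np.abs(reference - s) <= window) for s in model)

      reference = lif_spikes(tau=20e-3, R=5e7)                # placeholder "recording"

      best = None
      for tau in (10e-3, 20e-3, 30e-3):                       # toy grid search
          for R in (3e7, 5e7, 7e7):
              score = coincidences(lif_spikes(tau, R), reference)
              if best is None or score > best[0]:
                  best = (score, tau, R)
      print("best (coincidences, tau, R):", best)

    A real fitting workflow would normalize this score for firing rate (for example with a coincidence factor) and search the parameter space with an optimizer rather than a small grid.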

  17. Planar cell polarity genes control the connectivity of enteric neurons

    PubMed Central

    Sasselli, Valentina; Boesmans, Werend; Vanden Berghe, Pieter; Tissir, Fadel; Goffinet, André M.; Pachnis, Vassilis

    2013-01-01

    A highly complex network of intrinsic enteric neurons is required for the digestive and homeostatic functions of the gut. Nevertheless, the genetic and molecular mechanisms that regulate their assembly into functional neuronal circuits are currently unknown. Here we report that the planar cell polarity (PCP) genes Celsr3 and Fzd3 are required during murine embryogenesis to specifically control the guidance and growth of enteric neuronal projections relative to the longitudinal and radial gut axes. Ablation of these genes disrupts the normal organization of nascent neuronal projections, leading to subtle changes of axonal tract configuration in the mature enteric nervous system (ENS), but profound abnormalities in gastrointestinal motility. Our data argue that PCP-dependent modules of connectivity established at early stages of enteric neurogenesis control gastrointestinal function in adult animals and provide the first evidence that developmental deficits in ENS wiring may contribute to the pathogenesis of idiopathic bowel disorders. PMID:23478408

  18. Trafficking Mechanisms Underlying Neuronal Voltage-gated Ion Channel Localization at the Axon Initial Segment

    PubMed Central

    Vacher, Helene; Trimmer, James S.

    2012-01-01

    Voltage-gated ion channels are diverse and fundamental determinants of neuronal intrinsic excitability. Voltage-gated K+ (Kv) and Na+ (Nav) channels play complex yet fundamentally important roles in determining intrinsic excitability. The Kv and Nav channels located at the axon initial segment (AIS) play a unique and especially important role in generating neuronal output in the form of anterograde axonal and backpropagating action potentials. Aberrant intrinsic excitability in individual neurons within networks contributes to synchronous neuronal activity leading to seizures. Mutations in ion channel genes give rise to a variety of seizure-related “Channelopathies”, and many of the ion channel subunits associated with epilepsy mutations are localized at the AIS, making this a hotspot for epileptogenesis. Here we review the cellular mechanisms that underlie the trafficking of Kv and Nav channels found at the AIS, and how Kv and Nav channel mutations associated with epilepsy can alter these processes. PMID:23216576

  19. Neural Action Fields for Optic Flow Based Navigation: A Simulation Study of the Fly Lobula Plate Network

    PubMed Central

    Borst, Alexander; Weber, Franz

    2011-01-01

    Optic flow based navigation is a fundamental way of visual course control described in many different species including man. In the fly, an essential part of optic flow analysis is performed in the lobula plate, a retinotopic map of motion in the environment. There, the so-called lobula plate tangential cells possess large receptive fields with different preferred directions in different parts of the visual field. Previous studies demonstrated an extensive connectivity between different tangential cells, providing, in principle, the structural basis for their large and complex receptive fields. We present a network simulation of the tangential cells, comprising most of the neurons studied so far (22 on each hemisphere) with all the known connectivity between them. On their dendrite, model neurons receive input from a retinotopic array of Reichardt-type motion detectors. Model neurons exhibit receptive fields much like their natural counterparts, demonstrating that the connectivity between the lobula plate tangential cells indeed can account for their complex receptive field structure. We describe the tuning of a model neuron to particular types of ego-motion (rotation as well as translation around/along a given body axis) by its ‘action field’. As we show for model neurons of the vertical system (VS-cells), each of them displays a different type of action field, i.e., responds maximally when the fly is rotating around a particular body axis. However, the tuning width of the rotational action fields is relatively broad, comparable to the one with dendritic input only. The additional intra-lobula-plate connectivity mainly reduces their translational action field amplitude, i.e., their sensitivity to translational movements along any body axis of the fly. PMID:21305019

  20. Increasing CREB Function in the CA1 Region of Dorsal Hippocampus Rescues the Spatial Memory Deficits in a Mouse Model of Alzheimer's Disease

    PubMed Central

    Yiu, Adelaide P; Rashid, Asim J; Josselyn, Sheena A

    2011-01-01

    The principal defining feature of Alzheimer's disease (AD) is memory impairment. As the transcription factor CREB (cAMP/Ca2+ responsive element-binding protein) is critical for memory formation across species, we investigated the role of CREB in a mouse model of AD. We found that TgCRND8 mice exhibit a profound impairment in the ability to form a spatial memory, a process that critically relies on the dorsal hippocampus. Perhaps contributing to this memory deficit, we observed additional deficits in the dorsal hippocampus of TgCRND8 mice in terms of (1) biochemistry (decreased CREB activation in the CA1 region), (2) neuronal structure (decreased spine density and dendritic complexity of CA1 pyramidal neurons), and (3) neuronal network activity (decreased arc mRNA levels following behavioral training). Locally and acutely increasing CREB function in the CA1 region of dorsal hippocampus of TgCRND8 mice was sufficient to restore function in each of these key domains (biochemistry, neuronal structure, network activity, and most importantly, memory formation). The rescue produced by increasing CREB was specific both anatomically and behaviorally and independent of plaque load or Aβ levels. Interestingly, humans with AD show poor spatial memory/navigation and AD brains have disrupted (1) CREB activation, and (2) spine density and dendritic complexity in hippocampal CA1 pyramidal neurons. These parallel findings not only confirm that TgCRND8 mice accurately model key aspects of human AD, but furthermore, suggest the intriguing possibility that targeting CREB may be a useful therapeutic strategy in treating humans with AD. PMID:21734652

  1. Solving the quantum many-body problem with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Carleo, Giuseppe; Troyer, Matthias

    2017-02-01

    The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. We demonstrate a reinforcement-learning scheme that is capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. Our approach achieves high accuracy in describing prototypical interacting spin models in one and two dimensions.
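
    The variational ansatz referred to here is a restricted Boltzmann machine whose hidden units can be traced out analytically. A minimal sketch of evaluating the (unnormalized) amplitude, with random placeholder parameters and without the variational optimization, is:

      import numpy as np

      rng = np.random.default_rng(0)
      N, M = 8, 16                                # visible spins, hidden units
      a = 0.01 * rng.standard_normal(N)           # visible biases
      b = 0.01 * rng.standard_normal(M)           # hidden biases
      W = 0.01 * rng.standard_normal((M, N))      # visible-hidden couplings

      def log_psi(sigma):
          """log of the unnormalized RBM amplitude for one configuration in {-1,+1}^N."""
          theta = b + W @ sigma
          return a @ sigma + np.sum(np.log(2.0 * np.cosh(theta)))

      sigma = rng.choice([-1.0, 1.0], size=N)     # an arbitrary spin configuration
      print("log Psi(sigma) =", log_psi(sigma))

    In the full method, expectation values are sampled by Monte Carlo over spin configurations and the parameters (a, b, W) are then optimized variationally.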

  2. Evolution of the VEGF-regulated vascular network from a neural guidance system.

    PubMed

    Ponnambalam, Sreenivasan; Alberghina, Mario

    2011-06-01

    The vascular network is closely linked to the neural system, and an interdependence is displayed in healthy and in pathophysiological responses. How has close apposition of two such functionally different systems occurred? Here, we present a hypothesis for the evolution of the vascular network from an ancestral neural guidance system. Biological cornerstones of this hypothesis are the vascular endothelial growth factor (VEGF) protein family and cognate receptors. The primary sequences of such proteins are conserved from invertebrates, such as worms and flies that lack discernible vascular systems compared to mammals, but all these systems have sophisticated neuronal wiring involving such molecules. Ancestral VEGFs and receptors (VEGFRs) could have been used to develop and maintain the nervous system in primitive eukaryotes. During evolution, the demands of increased morphological complexity required systems for transporting molecules and cells, i.e., biological conductive tubes. We propose that the VEGF-VEGFR axis was subverted by evolution to mediate the formation of biological tubes necessary for transport of fluids, e.g., blood. Increasingly, there is evidence that aberrant VEGF-mediated responses are also linked to neuronal dysfunctions ranging from motor neuron disease, stroke, Parkinson's disease, Alzheimer's disease, ischemic brain disease, epilepsy, multiple sclerosis, and neuronal repair after injury, as well as common vascular diseases (e.g., retinal disease). Manipulation and correction of the VEGF response in different neural tissues could be an effective strategy to treat different neurological diseases.

  3. Empirical modeling for intelligent, real-time manufacture control

    NASA Technical Reports Server (NTRS)

    Xu, Xiaoshu

    1994-01-01

    Artificial neural systems (ANS), also known as neural networks, are an attempt to develop computer systems that emulate the neural reasoning behavior of biological neural systems (e.g. the human brain). As such, they are loosely based on biological neural networks. The ANS consists of a series of nodes (neurons) and weighted connections (axons) that, when presented with a specific input pattern, can associate specific output patterns. It is essentially a highly complex, nonlinear, mathematical relationship or transform. These constructs have two significant properties that have proven useful to the authors in signal processing and process modeling: noise tolerance and complex pattern recognition. Specifically, the authors have developed a new network learning algorithm that has resulted in the successful application of ANS's to high speed signal processing and to developing models of highly complex processes. Two of the applications, the Weld Bead Geometry Control System and the Welding Penetration Monitoring System, are discussed in the body of this paper.

  4. The Gully in the "Brain Glitch" Theory

    ERIC Educational Resources Information Center

    Willis, Judy

    2007-01-01

    Learning to read is a complex process that requires multiple areas of the brain to operate together through intricate networks of neurons. The author of this article, a neurologist and middle school teacher, takes exception to interpretations of neuroimaging research that treat reading as an isolated, independent cognitive process. She…

  5. Command and Compensation in a Neuromodulatory Decision Network

    PubMed Central

    Luan, Haojiang; Diao, Fengqiu; Peabody, Nathan C.; White, Benjamin H.

    2012-01-01

    The neural circuits that mediate behavioral choices must not only weigh internal demands and environmental circumstances, but also select and implement specific actions, including associated visceral or neuroendocrine functions. Coordinating these multiple processes suggests considerable complexity. As a consequence, even circuits that support simple behavioral decisions remain poorly understood. Here we show that the environmentally-sensitive wing expansion decision of adult fruit flies is coordinated by a single pair of neuromodulatory neurons with command-like function. Targeted suppression of these neurons using the Split Gal4 system abrogates the fly's ability to expand its wings in the face of environmental challenges, while stimulating them forces expansion by coordinately activating both motor and neuroendocrine outputs. The arbitration and implementation of the wing expansion decision by this neuronal pair may illustrate a general strategy by which neuromodulatory neurons orchestrate behavior. Interestingly, the decision network shows a behavioral plasticity that is unmasked under conducive environmental conditions in flies lacking the function of the command-like neuromodulatory neurons. Such flies can often expand their wings using a motor program distinct from that of wildtype animals and controls. This compensatory program may be the vestige of an ancestral, environmentally-insensitive program used for wing expansion that existed prior to the evolution of the environmentally-adaptive program currently used by Drosophila and other cyclorrhaphan flies. PMID:22262886

  6. Bounds on the number of hidden neurons in three-layer binary neural networks.

    PubMed

    Zhang, Zhaozhi; Ma, Xiaomin; Yang, Yixian

    2003-09-01

    This paper investigates an important problem concerning the complexity of three-layer binary neural networks (BNNs) with one hidden layer. The neuron in the studied BNNs employs a hard limiter activation function with only integer weights and an integer threshold. The studies are focused on implementations of arbitrary Boolean functions which map from {0, 1}^n into {0, 1}. A deterministic algorithm called set covering algorithm (SCA) is proposed for the construction of a three-layer BNN to implement an arbitrary Boolean function. The SCA is based on a unit sphere covering (USC) of the Hamming space (HS) which is chosen in advance. It is proved that for the implementation of an arbitrary Boolean function of n variables (n ≥ 3) by using SCA, [3L/2] hidden neurons are necessary and sufficient, where L is the number of unit spheres contained in the chosen USC of the n-dimensional HS. It is shown that by using SCA, the number of hidden neurons required is much less than that by using a two-parallel hyperplane method. In order to indicate the potential ability of three-layer BNNs, a lower bound on the required number of hidden neurons, derived by estimating the Vapnik-Chervonenkis (VC) dimension, is also given.
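
    For illustration of the neuron model assumed above (a hard limiter with integer weights and an integer threshold), the sketch below builds a tiny three-layer binary network that realizes XOR; it is only an example of such a network, not an implementation of the set covering algorithm:

      def bnn_neuron(x, w, theta):
          # Hard limiter: fire (1) iff the integer-weighted sum reaches the threshold.
          return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

      def xor_bnn(x1, x2):
          h1 = bnn_neuron((x1, x2), w=(1, 1), theta=1)      # OR of the inputs
          h2 = bnn_neuron((x1, x2), w=(-1, -1), theta=-1)   # NAND of the inputs
          return bnn_neuron((h1, h2), w=(1, 1), theta=2)    # AND of the hidden outputs

      for x1 in (0, 1):
          for x2 in (0, 1):
              print(x1, x2, "->", xor_bnn(x1, x2))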

  7. Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons.

    PubMed

    Rubinov, Mikail; Sporns, Olaf; Thivierge, Jean-Philippe; Breakspear, Michael

    2011-06-01

    Self-organized criticality refers to the spontaneous emergence of self-similar dynamics in complex systems poised between order and randomness. The presence of self-organized critical dynamics in the brain is theoretically appealing and is supported by recent neurophysiological studies. Despite this, the neurobiological determinants of these dynamics have not been previously sought. Here, we systematically examined the influence of such determinants in hierarchically modular networks of leaky integrate-and-fire neurons with spike-timing-dependent synaptic plasticity and axonal conduction delays. We characterized emergent dynamics in our networks by distributions of active neuronal ensemble modules (neuronal avalanches) and rigorously assessed these distributions for power-law scaling. We found that spike-timing-dependent synaptic plasticity enabled a rapid phase transition from random subcritical dynamics to ordered supercritical dynamics. Importantly, modular connectivity and low wiring cost broadened this transition, and enabled a regime indicative of self-organized criticality. The regime only occurred when modular connectivity, low wiring cost and synaptic plasticity were simultaneously present, and the regime was most evident when between-module connection density scaled as a power-law. The regime was robust to variations in other neurobiologically relevant parameters and favored systems with low external drive and strong internal interactions. Increases in system size and connectivity facilitated internal interactions, permitting reductions in external drive and facilitating convergence of postsynaptic-response magnitude and synaptic-plasticity learning rate parameter values towards neurobiologically realistic levels. We hence infer a novel association between self-organized critical neuronal dynamics and several neurobiologically realistic features of structural connectivity. The central role of these features in our model may reflect their importance for neuronal information processing.
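
    The avalanche analysis mentioned here can be illustrated with a short sketch: bin population spikes, define an avalanche as a run of consecutive non-empty bins, and estimate a power-law exponent by maximum likelihood. The Poisson raster and the continuous-MLE formula below are simplifying placeholders, not the authors' pipeline:

      import numpy as np

      rng = np.random.default_rng(2)
      counts = rng.poisson(0.7, size=20000)       # spikes per time bin (toy raster)

      # An avalanche is a run of consecutive non-empty bins bounded by empty bins;
      # its size is the total number of spikes in the run.
      sizes, current = [], 0
      for c in counts:
          if c > 0:
              current += c
          elif current > 0:
              sizes.append(current)
              current = 0
      sizes = np.asarray(sizes, dtype=float)

      # Continuous maximum-likelihood exponent estimate (an approximation for
      # discrete sizes); power-law scaling of these sizes is the property assessed
      # in the record above.
      s_min = 1.0
      s = sizes[sizes >= s_min]
      alpha = 1.0 + len(s) / np.sum(np.log(s / s_min))
      print(f"{len(s)} avalanches, estimated exponent alpha = {alpha:.2f}")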

  8. Submillisecond Optogenetic Control of Neuronal Firing with Two-Photon Holographic Photoactivation of Chronos

    PubMed Central

    Ronzitti, Emiliano; Conti, Rossella; Zampini, Valeria; Tanese, Dimitrii; Klapoetke, Nathan; Boyden, Edward S.; Papagiakoumou, Eirini

    2017-01-01

    Optogenetic neuronal network manipulation promises to unravel a long-standing mystery in neuroscience: how does microcircuit activity relate causally to behavioral and pathological states? The challenge to evoke spikes with high spatial and temporal complexity necessitates further joint development of light-delivery approaches and custom opsins. Two-photon (2P) light-targeting strategies demonstrated in-depth generation of action potentials in photosensitive neurons both in vitro and in vivo, but thus far lack the temporal precision necessary to induce precisely timed spiking events. Here, we show that efficient current integration enabled by 2P holographic amplified laser illumination of Chronos, a highly light-sensitive and fast opsin, can evoke spikes with submillisecond precision and repeated firing up to 100 Hz in brain slices from Swiss male mice. These results pave the way for optogenetic manipulation with the spatial and temporal sophistication necessary to mimic natural microcircuit activity. SIGNIFICANCE STATEMENT To reveal causal links between neuronal activity and behavior, it is necessary to develop experimental strategies to induce spatially and temporally sophisticated perturbation of network microcircuits. Two-photon computer generated holography (2P-CGH) recently demonstrated 3D optogenetic control of selected pools of neurons with single-cell accuracy in depth in the brain. Here, we show that exciting the fast opsin Chronos with amplified laser 2P-CGH enables cellular-resolution targeting with unprecedented temporal control, driving spiking up to 100 Hz with submillisecond onset precision using low laser power densities. This system achieves a unique combination of spatial flexibility and temporal precision needed to pattern optogenetic inputs that mimic natural neuronal network activity patterns. PMID:28972125

  9. Synchronization in a chaotic neural network with time delay depending on the spatial distance between neurons

    NASA Astrophysics Data System (ADS)

    Tang, Guoning; Xu, Kesheng; Jiang, Luoluo

    2011-10-01

    The synchronization is investigated in a two-dimensional Hindmarsh-Rose neuronal network by introducing a global coupling scheme with time delay, where the length of time delay is proportional to the spatial distance between neurons. We find that the time delay always disturbs synchronization of the neuronal network. When both the coupling strength and length of time delay per unit distance (i.e., enlargement factor) are large enough, the time delay induces the abnormal membrane potential oscillations in neurons. Specifically, the abnormal membrane potential oscillations for the symmetrically placed neurons form an antiphase, so that the large coupling strength and enlargement factor lead to the desynchronization of the neuronal network. The complete and intermittently complete synchronization of the neuronal network are observed for the right choice of parameters. The physical mechanism underlying these phenomena is analyzed.
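
    For reference, the Hindmarsh-Rose equations named in this record can be integrated for a single neuron with a simple Euler scheme, as in the sketch below; the two-dimensional network and the distance-dependent delays studied in the paper are not reproduced, and the parameters are generic textbook values:

      import numpy as np

      # Standard Hindmarsh-Rose parameters (bursting regime).
      a, b, c, d = 1.0, 3.0, 1.0, 5.0
      r, s, x_rest, I_ext = 0.006, 4.0, -1.6, 3.0
      dt, steps = 0.01, 200000

      x, y, z = -1.6, -10.0, 2.0
      trace = np.empty(steps)
      for k in range(steps):
          dx = y - a * x**3 + b * x**2 - z + I_ext     # membrane potential
          dy = c - d * x**2 - y                        # fast recovery variable
          dz = r * (s * (x - x_rest) - z)              # slow adaptation variable
          x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
          trace[k] = x

      print("membrane variable range:", trace.min(), trace.max())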

  10. Organization of excitable dynamics in hierarchical biological networks.

    PubMed

    Müller-Linow, Mark; Hilgetag, Claus C; Hütt, Marc-Thorsten

    2008-09-26

    This study investigates the contributions of network topology features to the dynamic behavior of hierarchically organized excitable networks. Representatives of different types of hierarchical networks as well as two biological neural networks are explored with a three-state model of node activation for systematically varying levels of random background network stimulation. The results demonstrate that two principal topological aspects of hierarchical networks, node centrality and network modularity, correlate with the network activity patterns at different levels of spontaneous network activation. The approach also shows that the dynamic behavior of the cerebral cortical systems network in the cat is dominated by the network's modular organization, while the activation behavior of the cellular neuronal network of Caenorhabditis elegans is strongly influenced by hub nodes. These findings indicate the interaction of multiple topological features and dynamic states in the function of complex biological networks.

  11. Transition to subthreshold activity with the use of phase shifting in a model thalamic network

    NASA Astrophysics Data System (ADS)

    Thomas, Elizabeth; Grisar, Thierry

    1997-05-01

    Absence epilepsy involves a state of low-frequency synchronous oscillations in the involved neuronal networks. These oscillations may be either above or below threshold. In this investigation, we studied methods that could be used to transform the suprathreshold activity of neurons in the network into a subthreshold state. A model thalamic network was constructed using the Hodgkin-Huxley framework. Subthreshold activity was achieved by applying stimuli that caused phase shifts in the oscillatory activity of selected neurons in the network. In some instances the stimulus was a low-frequency periodic pulse train applied to the reticular thalamic neurons of the network, while in others it was a constant hyperpolarizing current applied to the thalamocortical neurons.

  12. Towards an Analogue Neuromorphic VLSI Instrument for the Sensing of Complex Odours

    NASA Astrophysics Data System (ADS)

    Ab Aziz, Muhammad Fazli; Harun, Fauzan Khairi Che; Covington, James A.; Gardner, Julian W.

    2011-09-01

    Almost all electronic nose instruments reported today employ pattern recognition algorithms written in software and run on digital processors, e.g. micro-processors, microcontrollers or FPGAs. Conversely, in this paper we describe the analogue VLSI implementation of an electronic nose through the design of a neuromorphic olfactory chip. The modelling, design and fabrication of the chip have already been reported. Here a smart interface has been designed and characterised for this neuromorphic chip. Thus we can demonstrate the functionality of the analogue VLSI neuromorphic chip, which produces differing principal neuron firing patterns in response to real sensor data. Further work is directed towards integrating 9 separate neuromorphic chips to create a large neuronal network to solve more complex olfactory problems.

  13. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.

    PubMed

    Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
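
    The central quantity in this scheme is the spike-train power spectrum. A minimal sketch of estimating it from a binned spike train with Welch's method (assuming NumPy and SciPy; the Poisson train is only a placeholder for the output of the iterative single-neuron simulation) is:

      import numpy as np
      from scipy.signal import welch

      rate, dt, T = 10.0, 1e-3, 200.0             # 10 Hz, 1 ms bins, 200 s
      rng = np.random.default_rng(3)
      spikes = (rng.random(int(T / dt)) < rate * dt).astype(float)

      # Treat the binned train as a train of 1/dt-high pulses and estimate its
      # power spectral density with Welch's method.
      f, S = welch(spikes / dt, fs=1.0 / dt, nperseg=2 ** 14)

      # A Poisson train gives an essentially flat spectrum; neurons embedded in a
      # recurrent network show the structured spectra discussed in the record.
      print("spectrum near 1 Hz vs near 400 Hz:",
            S[np.argmin(np.abs(f - 1.0))], S[np.argmin(np.abs(f - 400.0))])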

  14. An efficient approach to suppress the negative role of contrarian oscillators in synchronization

    NASA Astrophysics Data System (ADS)

    Zhang, Xiyun; Ruan, Zhongyuan; Liu, Zonghua

    2013-09-01

    It has been found that contrarian oscillators usually take a negative role in the collective behaviors formed by conformist oscillators. However, experiments revealed that it is also possible to achieve a strong coherence even when there are contrarians in the system such as neuron networks with both excitable and inhibitory neurons. To understand the underlying mechanism of this abnormal phenomenon, we here consider a complex network of coupled Kuramoto oscillators with mixed positive and negative couplings and present an efficient approach, i.e., tit-for-tat strategy, to suppress the negative role of contrarian oscillators in synchronization and thus increase the order parameter of synchronization. Two classes of contrarian oscillators are numerically studied and a brief theoretical analysis is provided to explain the numerical results.
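
    A minimal sketch of the mixed-coupling Kuramoto setting and its order parameter, with conformists and contrarians assigned positive and negative couplings (arbitrary parameters, and without the tit-for-tat strategy proposed in the paper), is:

      import numpy as np

      rng = np.random.default_rng(4)
      N, frac_contrarian = 200, 0.2
      omega = rng.normal(0.0, 0.5, N)             # natural frequencies
      # Per-oscillator coupling: +2 for conformists, -2 for contrarians.
      K = np.where(rng.random(N) < frac_contrarian, -1.0, 1.0) * 2.0
      theta = rng.uniform(0.0, 2.0 * np.pi, N)

      dt = 0.01
      for _ in range(20000):
          z = np.exp(1j * theta).mean()           # complex mean field
          r, psi = np.abs(z), np.angle(z)
          theta += dt * (omega + K * r * np.sin(psi - theta))

      print("final order parameter r =", np.abs(np.exp(1j * theta).mean()))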

  15. Neuronal integration of dynamic sources: Bayesian learning and Bayesian inference.

    PubMed

    Siegelmann, Hava T; Holzman, Lars E

    2010-09-01

    One of the brain's most basic functions is integrating sensory data from diverse sources. This ability causes us to question whether the neural system is computationally capable of intelligently integrating data, not only when sources have known, fixed relative dependencies but also when it must determine such relative weightings based on dynamic conditions, and then use these learned weightings to accurately infer information about the world. We suggest that the brain is, in fact, fully capable of computing this parallel task in a single network and describe a neural inspired circuit with this property. Our implementation suggests the possibility that evidence learning requires a more complex organization of the network than was previously assumed, where neurons have different specialties, whose emergence brings the desired adaptivity seen in human online inference.

  16. Sleeping of a Complex Brain Networks with Hierarchical Organization

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Yue; Yang, Qiu-Ying; Chen, Tian-Lun

    2009-01-01

    The dynamical behavior in the cortical brain network of the macaque is studied by modeling each cortical area with a subnetwork of interacting excitable neurons. We characterize the system by studying how it performs the transition, which is now topology-dependent, from the active state to one with no activity. This could serve as a naive model for the waking and sleeping of a brain-like system, i.e., a multi-component system with two different dynamical behaviors.

  17. Multiplex Networks of Cortical and Hippocampal Neurons Revealed at Different Timescales

    PubMed Central

    Timme, Nicholas; Ito, Shinya; Myroshnychenko, Maxym; Yeh, Fang-Chin; Hiolski, Emma; Hottowy, Pawel; Beggs, John M.

    2014-01-01

    Recent studies have emphasized the importance of multiplex networks – interdependent networks with shared nodes and different types of connections – in systems primarily outside of neuroscience. Though the multiplex properties of networks are frequently not considered, most networks are actually multiplex networks and the multiplex specific features of networks can greatly affect network behavior (e.g. fault tolerance). Thus, the study of networks of neurons could potentially be greatly enhanced using a multiplex perspective. Given the wide range of temporally dependent rhythms and phenomena present in neural systems, we chose to examine multiplex networks of individual neurons with time scale dependent connections. To study these networks, we used transfer entropy – an information theoretic quantity that can be used to measure linear and nonlinear interactions – to systematically measure the connectivity between individual neurons at different time scales in cortical and hippocampal slice cultures. We recorded the spiking activity of almost 12,000 neurons across 60 tissue samples using a 512-electrode array with 60 micrometer inter-electrode spacing and 50 microsecond temporal resolution. To the best of our knowledge, this preparation and recording method represents a superior combination of number of recorded neurons and temporal and spatial recording resolutions to any currently available in vivo system. We found that highly connected neurons (“hubs”) were localized to certain time scales, which, we hypothesize, increases the fault tolerance of the network. Conversely, a large proportion of non-hub neurons were not localized to certain time scales. In addition, we found that long and short time scale connectivity was uncorrelated. Finally, we found that long time scale networks were significantly less modular and more disassortative than short time scale networks in both tissue types. As far as we are aware, this analysis represents the first systematic study of temporally dependent multiplex networks among individual neurons. PMID:25536059
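
    Transfer entropy between a pair of binary spike trains at a single lag can be estimated with a simple plug-in histogram, as sketched below; the delay-resolved, multi-timescale analysis of the article and its 512-electrode data are not reproduced, and the random trains are placeholders:

      import numpy as np

      def transfer_entropy(x, y, lag=1):
          """Plug-in TE(x -> y) in bits, conditioning on y's own past (order 1)."""
          y_future, y_past, x_past = y[lag:], y[:-lag], x[:-lag]
          te = 0.0
          for yf in (0, 1):
              for yp in (0, 1):
                  for xp in (0, 1):
                      p_joint = np.mean((y_future == yf) & (y_past == yp) & (x_past == xp))
                      p_yp_xp = np.mean((y_past == yp) & (x_past == xp))
                      p_yf_yp = np.mean((y_future == yf) & (y_past == yp))
                      p_yp = np.mean(y_past == yp)
                      if min(p_joint, p_yp_xp, p_yf_yp, p_yp) > 0:
                          te += p_joint * np.log2(p_joint * p_yp / (p_yp_xp * p_yf_yp))
          return te

      rng = np.random.default_rng(5)
      x = (rng.random(50000) < 0.05).astype(int)
      y = np.roll(x, 1) | (rng.random(50000) < 0.02).astype(int)   # y partly driven by x
      print("TE(x -> y) =", transfer_entropy(x, y), "bits")
      print("TE(y -> x) =", transfer_entropy(y, x), "bits")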

  18. A unifying view of synchronization for data assimilation in complex nonlinear networks

    NASA Astrophysics Data System (ADS)

    Abarbanel, Henry D. I.; Shirman, Sasha; Breen, Daniel; Kadakia, Nirag; Rey, Daniel; Armstrong, Eve; Margoliash, Daniel

    2017-12-01

    Networks of nonlinear systems contain unknown parameters and dynamical degrees of freedom that may not be observable with existing instruments. From observable state variables, we want to estimate the connectivity of a model of such a network and determine the full state of the model at the termination of a temporal observation window during which measurements transfer information to a model of the network. The model state at the termination of a measurement window acts as an initial condition for predicting the future behavior of the network. This allows the validation (or invalidation) of the model as a representation of the dynamical processes producing the observations. Once the model has been tested against new data, it may be utilized as a predictor of responses to innovative stimuli or forcing. We describe a general framework for the tasks involved in the "inverse" problem of determining properties of a model built to represent measured output from physical, biological, or other processes when the measurements are noisy, the model has errors, and the state of the model is unknown when measurements begin. This framework is called statistical data assimilation and is the best one can do in estimating model properties through the use of the conditional probability distributions of the model state variables, conditioned on observations. There is a very broad arena of applications of the methods described. These include numerical weather prediction, properties of nonlinear electrical circuitry, and determining the biophysical properties of functional networks of neurons. Illustrative examples will be given of (1) estimating the connectivity among neurons with known dynamics in a network of unknown connectivity, and (2) estimating the biophysical properties of individual neurons in vitro taken from a functional network underlying vocalization in songbirds.

  19. A modeling comparison of projection neuron- and neuromodulator-elicited oscillations in a central pattern generating network.

    PubMed

    Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan

    2008-06-01

    Many central pattern generating networks are influenced by synaptic input from modulatory projection neurons. The network response to a projection neuron is sometimes mimicked by bath applying the neuronally-released modulator, despite the absence of network interactions with the projection neuron. One interesting example occurs in the crab stomatogastric ganglion (STG), where bath applying the neuropeptide pyrokinin (PK) elicits a gastric mill rhythm which is similar to that elicited by the projection neuron modulatory commissural neuron 1 (MCN1), despite the absence of PK in MCN1 and the fact that MCN1 is not active during the PK-elicited rhythm. MCN1 terminals have fast and slow synaptic actions on the gastric mill network and are presynaptically inhibited by this network in the STG. These local connections are inactive in the PK-elicited rhythm, and the mechanism underlying this rhythm is unknown. We use mathematical and biophysically-realistic modeling to propose potential mechanisms by which PK can elicit a gastric mill rhythm that is similar to the MCN1-elicited rhythm. We analyze slow-wave network oscillations using simplified mathematical models and, in parallel, develop biophysically-realistic models that account for fast, action potential-driven oscillations and some spatial structure of the network neurons. Our results illustrate how the actions of bath-applied neuromodulators can mimic those of descending projection neurons through mathematically similar but physiologically distinct mechanisms.

  20. Voltage-Dependent Rhythmogenic Property of Respiratory Pre-Bötzinger Complex Glutamatergic, Dbx1-Derived, and Somatostatin-Expressing Neuron Populations Revealed by Graded Optogenetic Inhibition

    PubMed Central

    Koizumi, Hidehiko; Mosher, Bryan; Tariq, Mohammad F.; Zhang, Ruli

    2016-01-01

    The rhythm of breathing in mammals, originating within the brainstem pre-Bötzinger complex (pre-BötC), is presumed to be generated by glutamatergic neurons, but this has not been directly demonstrated. Additionally, developmental expression of the transcription factor Dbx1 or expression of the neuropeptide somatostatin (Sst), has been proposed as a marker for the rhythmogenic pre-BötC glutamatergic neurons, but it is unknown whether these other two phenotypically defined neuronal populations are functionally equivalent to glutamatergic neurons with regard to rhythm generation. To address these problems, we comparatively investigated, by optogenetic approaches, the roles of pre-BötC glutamatergic, Dbx1-derived, and Sst-expressing neurons in respiratory rhythm generation in neonatal transgenic mouse medullary slices in vitro and also more intact adult perfused brainstem-spinal cord preparations in situ. We established three different triple-transgenic mouse lines with Cre-driven Archaerhodopsin-3 (Arch) expression selectively in glutamatergic, Dbx1-derived, or Sst-expressing neurons for targeted photoinhibition. In each line, we identified subpopulations of rhythmically active, Arch-expressing pre-BötC inspiratory neurons by whole-cell recordings in medullary slice preparations in vitro, and established that Arch-mediated hyperpolarization of these inspiratory neurons was laser power dependent with equal efficacy. By site- and population-specific graded photoinhibition, we then demonstrated that inspiratory frequency was reduced by each population with the same neuronal voltage-dependent frequency control mechanism in each state of the respiratory network examined. We infer that enough of the rhythmogenic pre-BötC glutamatergic neurons also have the Dbx1 and Sst expression phenotypes, and thus all three phenotypes share the same voltage-dependent frequency control property. PMID:27275007

  1. Simulating synchronization in neuronal networks

    NASA Astrophysics Data System (ADS)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
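
    In the spirit of the simulations described here, the sketch below couples leaky integrate-and-fire neurons on a Watts-Strogatz graph and reports a crude voltage-based synchrony index; it assumes NumPy and networkx, and its parameters are arbitrary choices rather than those of the article:

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(6)
      N = 100
      A = nx.to_numpy_array(nx.connected_watts_strogatz_graph(N, 6, 0.1, seed=6))

      dt, steps = 0.1, 20000                      # ms, roughly 2 s of activity
      tau, v_th, v_reset = 20.0, 1.0, 0.0
      I = 1.05 + 0.02 * rng.standard_normal(N)    # just-suprathreshold drive
      g = 0.03                                    # pulse-coupling strength

      v = rng.random(N)
      v_trace = np.empty((steps, N))
      for k in range(steps):
          fired = v >= v_th
          v[fired] = v_reset
          v += dt * (-v + I) / tau + g * (A @ fired.astype(float))
          v_trace[k] = v

      # Golomb-style synchrony index: variance of the population-mean voltage
      # relative to the mean single-neuron variance (1 = perfect synchrony).
      chi_sq = v_trace.mean(axis=1).var() / v_trace.var(axis=0).mean()
      print("synchrony index chi^2 =", round(float(chi_sq), 3))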

  2. Bidirectional Coupling between Astrocytes and Neurons Mediates Learning and Dynamic Coordination in the Brain: A Multiple Modeling Approach

    PubMed Central

    Wade, John J.; McDaid, Liam J.; Harkin, Jim; Crunelli, Vincenzo; Kelso, J. A. Scott

    2011-01-01

    In recent years research suggests that astrocyte networks, in addition to nutrient and waste processing functions, regulate both structural and synaptic plasticity. To understand the biological mechanisms that underpin such plasticity requires the development of cell level models that capture the mutual interaction between astrocytes and neurons. This paper presents a detailed model of bidirectional signaling between astrocytes and neurons (the astrocyte-neuron model or AN model) which yields new insights into the computational role of astrocyte-neuronal coupling. From a set of modeling studies we demonstrate two significant findings. Firstly, that spatial signaling via astrocytes can relay a “learning signal” to remote synaptic sites. Results show that slow inward currents cause synchronized postsynaptic activity in remote neurons and subsequently allow Spike-Timing-Dependent Plasticity based learning to occur at the associated synapses. Secondly, that bidirectional communication between neurons and astrocytes underpins dynamic coordination between neuron clusters. Although our composite AN model is presently applied to simplified neural structures and limited to coordination between localized neurons, the principle (which embodies structural, functional and dynamic complexity), and the modeling strategy may be extended to coordination among remote neuron clusters. PMID:22242121

  3. Impact of Partial Time Delay on Temporal Dynamics of Watts-Strogatz Small-World Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Sun, Xiaojuan

    2017-06-01

    In this paper, we mainly discuss the effects of partial time delay on the temporal dynamics of Watts-Strogatz (WS) small-world neuronal networks by controlling two parameters. One is the time delay τ and the other is the probability of partial time delay pdelay. Temporal dynamics of WS small-world neuronal networks are discussed with the aid of temporal coherence and mean firing rate. The simulation results reveal that for small time delay τ, the probability pdelay could weaken temporal coherence and increase the mean firing rate of the neuronal networks, which indicates that it could promote neuronal firing while degrading firing regularity. For large time delay τ, temporal coherence and mean firing rate do not change greatly with respect to pdelay. The time delay τ always has a strong influence on both temporal coherence and mean firing rate, regardless of the value of pdelay. Moreover, from the analysis of spike trains and histograms of interspike intervals of neurons inside the neuronal networks, it is found that the effects of partial time delay on temporal coherence and mean firing rate could be the result of locking between the period of neuronal firing activities and the value of the time delay τ. In brief, partial time delay can have a strong influence on the temporal dynamics of these neuronal networks.

  4. Emergent Oscillations in Networks of Stochastic Spiking Neurons

    PubMed Central

    van Drongelen, Wim; Cowan, Jack D.

    2011-01-01

    Networks of neurons produce diverse patterns of oscillations, arising from the network's global properties, the propensity of individual neurons to oscillate, or a mixture of the two. Here we describe noisy limit cycles and quasi-cycles, two related mechanisms underlying emergent oscillations in neuronal networks whose individual components, stochastic spiking neurons, do not themselves oscillate. Both mechanisms are shown to produce gamma band oscillations at the population level while individual neurons fire at a rate much lower than the population frequency. Spike trains in a network undergoing noisy limit cycles display a preferred period which is not found in the case of quasi-cycles, due to the even faster decay of phase information in quasi-cycles. These oscillations persist in sparsely connected networks, and variation of the network's connectivity results in variation of the oscillation frequency. A network of such neurons behaves as a stochastic perturbation of the deterministic Wilson-Cowan equations, and the network undergoes noisy limit cycles or quasi-cycles depending on whether these have limit cycles or a weakly stable focus. These mechanisms provide a new perspective on the emergence of rhythmic firing in neural networks, showing the coexistence of population-level oscillations with very irregular individual spike trains in a simple and general framework. PMID:21573105
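
    The deterministic Wilson-Cowan equations, which the stochastic network described here perturbs, can be integrated with a few lines of Euler stepping, as in the sketch below; the parameters are generic textbook-style values, not those of the paper, and whether the trajectory settles onto a limit cycle or a stable focus depends on that choice:

      import numpy as np

      def f(x):
          return 1.0 / (1.0 + np.exp(-x))         # sigmoidal rate function

      # Excitatory/inhibitory coupling weights, external drives, time constants (ms).
      w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0
      P_e, P_i = 1.25, 0.0
      tau_e, tau_i = 10.0, 10.0

      E, I = 0.1, 0.1
      dt = 0.05
      traj = []
      for _ in range(40000):
          dE = (-E + f(w_ee * E - w_ei * I + P_e)) / tau_e
          dI = (-I + f(w_ie * E - w_ii * I + P_i)) / tau_i
          E, I = E + dt * dE, I + dt * dI
          traj.append(E)

      print("late-time E range:", min(traj[-4000:]), max(traj[-4000:]))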

  5. Individual neurons in the rat lateral habenular complex project mostly to the dopaminergic ventral tegmental area or to the serotonergic raphe nuclei.

    PubMed

    Bernard, René; Veh, Rüdiger W

    2012-08-01

    The lateral habenular complex (LHb) is a bilateral epithalamic brain structure involved in the modulation of ascending monoamine systems in response to afferents from limbic regions and basal ganglia. The LHb is implicated in various biological functions, such as reward, sleep-wake cycle, feeding, pain processing, and memory formation. The modulatory role of the LHb is partially assumed by putative spontaneously active LHb neurons projecting to the dopaminergic ventral tegmental area (VTA) and to the serotonergic median (MnR) and dorsal raphe nuclei (DR). All four nuclei form a complex and coordinated network to evoke appropriate responses to reward-related stimuli. At present it is not known whether individual LHb neurons project to only one or to more than one monoaminergic nucleus. To answer this question, we made dual injections of two different retrograde tracers into the rat VTA and either DR or MnR. Tracers were visualized by immunohistochemistry. In coronal sections, the different retrogradely labeled habenular neurons were quantified and assigned to the corresponding habenular subnuclei. Our results show that 1) the distribution of neurons in the LHb projecting to the three monoamine nuclei is similar and exhibits a great overlap, 2) the vast majority of LHb projection neurons target one monoaminergic nucleus only, and 3) very few, heterogeneously distributed LHb neurons project to both dopaminergic and serotonergic nuclei. These results imply that the LHb forms both separate and interconnected circuits with each monoaminergic nucleus, permitting the LHb to modulate its output to different monoamine systems either independently or jointly. Copyright © 2012 Wiley Periodicals, Inc.

  6. Vehicle dynamic analysis using neuronal network algorithms

    NASA Astrophysics Data System (ADS)

    Oloeriu, Florin; Mocian, Oana

    2014-06-01

    Theoretical developments in certain engineering areas, the emergence of new and more precise investigation tools, and their implementation on-board everyday vehicles are the main factors influencing the theoretical and experimental study of vehicle dynamic behavior. The implementation of these new technologies in vehicle construction has led to increasingly complex systems. Some of the most important, such as the electronic control of engine, transmission, suspension, steering, braking and traction, have had a positive impact on the vehicle's dynamic behavior. The existence of CPUs on-board vehicles allows data acquisition and storage and leads to a more accurate experimental and theoretical study of vehicle dynamics, using the information offered directly by the built-in elements of the electronic control systems. The technical literature on vehicle dynamics is focused almost entirely on parametric analysis. This kind of approach adopts two simplifying assumptions: that the functional parameters obey distribution laws known from classical statistics, and that the mathematical models are known in advance, with coefficients that are not time-dependent. Neither assumption is confirmed in real situations: the functional parameters do not follow known statistical distribution laws, and the mathematical models are not known in advance, contain families of parameters, and are mostly time-dependent. The purpose of the paper is to present a more accurate analysis methodology for studying vehicle dynamic behavior. The proposed method builds non-parametric mathematical models of vehicle dynamic behavior, with time-dependent coefficients, relying on neuronal networks. Neuronal networks are widely used in various types of system control as a non-linear process identification algorithm; their common use for non-linear processes is justified by their ability to organize by themselves, which is why they are often described as intelligent systems, the word `neuronal' referring to the biological neuron cell. The paper presents how to better interpret data fed from the on-board computer and a new way of processing that data to better model the real-life dynamic behavior of the vehicle.

  7. A real-time hybrid neuron network for highly parallel cognitive systems.

    PubMed

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under `real-time' constraints. In this paper, we propose an easily customisable, highly pipelined neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4x compared to the state of the art.

  8. Contribution of supraspinal systems to generation of automatic postural responses

    PubMed Central

    Deliagina, Tatiana G.; Beloozerova, Irina N.; Orlovsky, Grigori N.; Zelenin, Pavel V.

    2014-01-01

    Different species maintain a particular body orientation in space due to activity of the closed-loop postural control system. In this review we discuss the role of neurons of descending pathways in operation of this system as revealed in animal models of differing complexity: lower vertebrate (lamprey) and higher vertebrates (rabbit and cat). In the lamprey and quadruped mammals, the role of spinal and supraspinal mechanisms in the control of posture is different. In the lamprey, the system contains one closed-loop mechanism consisting of supraspino-spinal networks. Reticulospinal (RS) neurons play a key role in generation of postural corrections. Due to vestibular input, any deviation from the stabilized body orientation leads to activation of a specific population of RS neurons. Each of the neurons activates a specific motor synergy. Collectively, these neurons evoke the motor output necessary for the postural correction. In contrast to lampreys, postural corrections in quadrupeds are primarily based not on the vestibular input but on the somatosensory input from limb mechanoreceptors. The system contains two closed-loop mechanisms – spinal and spino-supraspinal networks, which supplement each other. Spinal networks receive somatosensory input from the limb signaling postural perturbations, and generate spinal postural limb reflexes. These reflexes are relatively weak, but in intact animals they are enhanced due to both tonic supraspinal drive and phasic supraspinal commands. Recent studies of these supraspinal influences are considered in this review. A hypothesis suggesting common principles of operation of the postural systems stabilizing body orientation in a particular plane in the lamprey and quadrupeds, that is interaction of antagonistic postural reflexes, is discussed. PMID:25324741

  9. Autonomous self-configuration of artificial neural networks for data classification or system control

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang

    2009-05-01

    Artificial neural networks (ANNs) are powerful methods for the classification of multi-dimensional data as well as for the control of dynamic systems. In general terms, ANNs consist of neurons that are, e.g., arranged in layers and interconnected by real-valued or binary neural couplings or weights. ANNs attempt to mimic the processing taking place in biological brains. The classification and generalization capabilities of ANNs are given by the interconnection architecture and the coupling strengths. To perform a certain classification or control task with a particular ANN architecture (i.e., number of neurons, number of layers, etc.), the inter-neuron couplings and their corresponding coupling strengths must be determined either (1) by a priori design (i.e., manually) or (2) using training algorithms such as error back-propagation. The more complex the classification or control task, the less obvious it is how to determine an a priori design of an ANN, and, as a consequence, the architecture choice becomes somewhat arbitrary. Furthermore, rather than being able to determine directly, for a given architecture, the coupling strengths necessary to perform the classification or control task, these have to be obtained/learned through training of the ANN on test data. We report on the use of a Stochastic Optimization Framework (SOF; Fink, SPIE 2008) for the autonomous self-configuration of artificial neural networks (i.e., the determination of the number of hidden layers, number of neurons per hidden layer, interconnections between neurons, and respective coupling strengths) for performing classification or control tasks. This may provide an approach towards cognizant and self-adapting computing architectures and systems.

  10. GABAa excitation and synaptogenesis after Status Epilepticus - A computational study.

    PubMed

    França, Keite Lira de Almeida; de Almeida, Antônio-Carlos Guimarães; Saddow, Stephen E; Santos, Luiz Eduardo Canton; Scorza, Carla Alessandra; Scorza, Fulvio Alexandre; Rodrigues, Antônio Márcio

    2018-03-08

    The role of GABAergic neurotransmission in epileptogenesis has been the subject of speculation from different approaches. However, it is a very complex task to specifically consider the action of the neurotransmitter GABA at GABAa receptors, which, depending on the intracellular level of Cl-, can change from inhibitory to excitatory. We have developed a computational model that represents the dentate gyrus and is composed of three different populations of neurons (granule cells, interneurons and mossy cells) that are mutually interconnected. The interconnections of the neurons were based on compensation theory with Hebbian and anti-Hebbian rules. The model also incorporates non-synaptic mechanisms to control ionic homeostasis and was able to reproduce ictal discharges. The goal of the work was to investigate the hypothesis that the observed aberrant sprouting is promoted by GABAa excitatory action. Conjointly with the abnormal sprouting of the mossy fibres, the simulations show a reduction of the mossy cell connections in the network and an increased inhibition of the interneurons as a response of the neuronal network to control its activity. This contributes to further changes in the connectivity of the neuronal circuitry and to an increased occurrence of epileptiform activity.

  11. Cumulative lesioning of respiratory interneurons disrupts and precludes motor rhythms in vitro

    PubMed Central

    Hayes, John A.; Wang, Xueying; Del Negro, Christopher A.

    2012-01-01

    How brain functions degenerate in the face of progressive cell loss is an important issue that pertains to neurodegenerative diseases and basic properties of neural networks. We developed an automated system that uses two-photon microscopy to detect rhythmic neurons from calcium activity, and then individually laser ablates the targets while monitoring network function in real time. We applied this system to the mammalian respiratory oscillator located in the pre-Bötzinger Complex (preBötC) of the ventral medulla, which spontaneously generates breathing-related motor activity in vitro. Here, we show that cumulatively deleting preBötC neurons progressively decreases respiratory frequency and the amplitude of motor output. On average, the deletion of 120 ± 45 neurons stopped spontaneous respiratory rhythm, and our data suggest ≈82% of the rhythm-generating neurons remain unlesioned. Cumulative ablations in other medullary respiratory regions did not affect frequency but diminished the amplitude of motor output to a lesser degree. These results suggest that the preBötC can sustain insults that destroy no more than ≈18% of its constituent interneurons, which may have implications for the onset of respiratory pathologies in disease states. PMID:22566628

  12. pH during non-synaptic epileptiform activity-computational simulations.

    PubMed

    Rodrigues, Antônio Márcio; Santos, Luiz Eduardo Canton; Covolan, Luciene; Hamani, Clement; de Almeida, Antônio-Carlos Guimarães

    2015-09-02

    The excitability of neuronal networks is strongly modulated by changes in pH. The origin of these changes, however, is still under debate. The high complexity of neural systems justifies the use of computational simulation to investigate mechanisms that are possibly involved. The simulated neuronal activity corresponds to non-synaptic epileptiform events (NEA) of the type induced in hippocampal slices perfused with high-K(+) and zero-Ca(2+) solution, i.e., in the absence of the synaptic circuitry. The NEA model is composed of a network of functional units, each representing one interface of neuronal/extracellular space/glial segments. Each interface contains transmembrane ionic transports, such as ionic channels, cotransporters, exchangers and pumps. Neuronal interconnections are mediated by gap junctions, electric field effects and extracellular ionic fluctuations modulated by extracellular electrodiffusion. The mechanisms investigated are those that change intracellular and extracellular ionic concentrations and are able to affect [H(+)]. Our simulations suggest that the intense fluctuations in intra- and extracellular concentrations of Na(+), K(+) and Cl(-) that accompany NEA are able to affect the combined action of the Na(+)/H(+) exchanger (NHE), the HCO3(-)/Cl(-) exchanger (HCE), the H(+) pump and the catalytic activity of intra- and extracellular carbonic anhydrase. Cellular volume changes and extracellular electrodiffusion are responsible for modulating pH.

  13. Raphé neurons stimulate respiratory circuit activity by multiple mechanisms via endogenously released serotonin and substance P

    PubMed Central

    Ptak, Krzysztof; Yamanishi, Tadashi; Aungst, Jason; Milescu, Lorin S.; Zhang, Ruli; Richerson, George B.; Smith, Jeffrey C.

    2010-01-01

    Brainstem serotonin (5-HT) neurons modulate activity of many neural circuits in the mammalian brain, but in many cases endogenous mechanisms have not been resolved. Here, we analyzed actions of raphé 5-HT neurons on respiratory network activity including at the level of the pre–Bötzinger complex (pre-BötC) in neonatal rat medullary slices in vitro, and in the more intact nervous system of juvenile rats in arterially perfused brainstem-spinal cord preparations in situ. At basal levels of activity, excitation of the respiratory network via simultaneous release of 5-HT and substance P (SP), acting at 5-HT2A/2C, 5-HT4 and/or neurokinin-1 receptors, was required to maintain inspiratory motor output in both the neonatal and juvenile systems. The midline raphé obscurus contained spontaneously active 5-HT neurons, some of which projected to the pre-BötC and hypoglossal motoneurons, co-localized 5-HT and SP, and received reciprocal excitatory connections from the pre-BötC. Experimentally augmenting raphé obscurus activity increased motor output by simultaneously exciting pre-BötC and motor neurons. Biophysical analyses in vitro demonstrated that 5-HT and SP modulated background cation conductances in pre-BötC and motor neurons, including a non–selective cation leak current that contributed to the resting potential, which explains the neuronal depolarization that augmented motor output. Furthermore, we found that 5-HT, but not SP, can transform the electrophysiological phenotype of some pre-BötC neurons to intrinsic bursters, providing 5-HT with an additional role in promoting rhythm generation. We conclude that raphé 5-HT neurons excite key circuit components required for generation of respiratory motor output. PMID:19321769

  14. Impact of self-healing capability on network robustness

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2015-04-01

    A wide spectrum of real-life systems ranging from neurons to botnets display spontaneous recovery ability. Using the generating function formalism applied to static uncorrelated random networks with arbitrary degree distributions, the microscopic mechanism underlying the depreciation-recovery process is characterized and the effect of varying self-healing capability on network robustness is revealed. It is found that the self-healing capability of nodes has a profound impact on the phase transition in the emergence of percolating clusters, and that salient difference exists in upholding network integrity under random failures and intentional attacks. The results provide a theoretical framework for quantitatively understanding the self-healing phenomenon in varied complex systems.
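
    The record describes an analytical generating-function treatment; as an assumption-laden simulation counterpart only, the sketch below subjects an Erdős-Rényi random graph to random node failures, lets each failed node recover independently with a "healing" probability, and tracks the relative size of the largest surviving cluster.

```python
# Simulation sketch of failure + self-healing on a random graph; the graph
# size, mean degree and healing values are assumptions, and this is not the
# generating-function calculation described in the record.
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
N = 5000
G = nx.erdos_renyi_graph(N, 4.0 / N, seed=1)    # mean degree ~4

def giant_fraction(removal_prob, healing):
    failed = rng.random(N) < removal_prob        # random failures
    recovered = failed & (rng.random(N) < healing)
    alive = [n for n in G.nodes if not failed[n] or recovered[n]]
    H = G.subgraph(alive)
    if H.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(H)) / N

for healing in (0.0, 0.3, 0.6):
    sizes = [giant_fraction(q, healing) for q in np.linspace(0, 1, 11)]
    print(f"healing={healing}:", np.round(sizes, 2))
```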

  15. Impact of self-healing capability on network robustness.

    PubMed

    Shang, Yilun

    2015-04-01

    A wide spectrum of real-life systems ranging from neurons to botnets display spontaneous recovery ability. Using the generating function formalism applied to static uncorrelated random networks with arbitrary degree distributions, the microscopic mechanism underlying the depreciation-recovery process is characterized and the effect of varying self-healing capability on network robustness is revealed. It is found that the self-healing capability of nodes has a profound impact on the phase transition in the emergence of percolating clusters, and that salient difference exists in upholding network integrity under random failures and intentional attacks. The results provide a theoretical framework for quantitatively understanding the self-healing phenomenon in varied complex systems.

  16. Causal influence in neural systems: Reconciling mechanistic-reductionist and statistical perspectives. Comment on "Foundational perspectives on causality in large-scale brain networks" by M. Mannino & S.L. Bressler

    NASA Astrophysics Data System (ADS)

    Griffiths, John D.

    2015-12-01

    The modern understanding of the brain as a large, complex network of interacting elements is a natural consequence of the Neuron Doctrine [1,2] that has been bolstered in recent years by the tools and concepts of connectomics. In this abstracted, network-centric view, the essence of neural and cognitive function derives from the flows of activity and information - or, more generally, causal influence - between network elements. The appropriate characterization of causality in neural systems, therefore, is a question at the very heart of systems neuroscience.

  17. Intrinsically active and pacemaker neurons in pluripotent stem cell-derived neuronal populations.

    PubMed

    Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig

    2014-03-11

    Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks.

  18. Intrinsically Active and Pacemaker Neurons in Pluripotent Stem Cell-Derived Neuronal Populations

    PubMed Central

    Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig

    2014-01-01

    Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks. PMID:24672755

  19. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    PubMed

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under the random homogeneous distribution typical of in vitro experiments do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
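
    The contrast drawn in the abstract between distance-dependent and distance-independent connectivity can be made concrete with a small sketch; the decay length, neuron count and unit-square geometry below are assumptions for illustration, not the parameters of the study.

```python
# Sketch of the two connectivity rules: connection probability decaying
# exponentially with distance vs. a distance-independent control network
# with a matched mean degree.
import numpy as np

rng = np.random.default_rng(2)
N, lam = 500, 0.1                       # neurons on a unit square, decay length
pos = rng.random((N, 2))
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

p_exp = np.exp(-d / lam)                # exponential rule
np.fill_diagonal(p_exp, 0.0)
p_uniform = np.full_like(p_exp, p_exp.mean())   # same mean connection probability
np.fill_diagonal(p_uniform, 0.0)

A_exp = rng.random((N, N)) < p_exp          # distance-dependent network
A_uni = rng.random((N, N)) < p_uniform      # distance-independent control

print("mean out-degree (exp):    ", A_exp.sum(1).mean())
print("mean out-degree (uniform):", A_uni.sum(1).mean())
print("median connection length (exp):    ", np.median(d[A_exp]))
print("median connection length (uniform):", np.median(d[A_uni]))
```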

  20. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves

    NASA Astrophysics Data System (ADS)

    Paraskevov, A. V.; Zendrikov, D. K.

    2017-04-01

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under the random homogeneous distribution typical of in vitro experiments do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  1. A new cross-correlation algorithm for the analysis of "in vitro" neuronal network activity aimed at pharmacological studies.

    PubMed

    Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A

    2011-08-15

    Modern drug discovery for Central Nervous System pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activities. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing an insight into the dynamics of networks. Taking advantage of MEA features and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, evaluated its sensitivity and explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
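
    The abstract does not give the burst-correlation algorithm itself, so the sketch below only illustrates the generic building block it relies on: a normalized cross-correlation between binned spike trains from two electrodes, reporting the peak correlation and its lag. Bin size, rates and the surrogate data are assumptions.

```python
# Generic cross-correlation sketch (not the authors' specific algorithm).
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.01, 60.0                       # 10 ms bins, 60 s recording
n_bins = int(T / dt)

# Surrogate data: electrode B lags electrode A by ~50 ms during shared bursts.
drive = (rng.random(n_bins) < 0.02).astype(float)
a = rng.poisson(0.5 * drive + 0.02)
b = rng.poisson(0.5 * np.roll(drive, 5) + 0.02)

def cross_correlation(x, y, max_lag):
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.mean(x * np.roll(y, -k)) for k in lags])
    return lags, cc

lags, cc = cross_correlation(a, b, max_lag=20)
peak = lags[np.argmax(cc)]
print(f"peak correlation {cc.max():.3f} at lag {peak * dt * 1000:.0f} ms")
```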

  2. Neural networks with multiple general neuron models: a hybrid computational intelligence approach using Genetic Programming.

    PubMed

    Barton, Alan J; Valdés, Julio J; Orchard, Robert

    2009-01-01

    Classical neural networks are composed of neurons whose nature is determined by a certain function (the neuron model), usually pre-specified. In this paper, a type of neural network (NN-GP) is presented in which: (i) each neuron may have its own neuron model in the form of a general function, (ii) any layout (i.e., network interconnection) is possible, and (iii) no bias nodes or weights are associated with the connections, neurons or layers. The general functions associated with a neuron are learned by searching a function space. They are not provided a priori, but are rather built as part of an Evolutionary Computation process based on Genetic Programming. The resulting network solutions are evaluated based on a fitness measure, which may, for example, be based on classification or regression errors. Two real-world examples are presented to illustrate the promising behaviour on classification problems via construction of a low-dimensional representation of a high-dimensional parameter space associated with the set of all network solutions.

  3. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation

    PubMed Central

    Bayati, Mehdi; Valizadeh, Alireza; Abbassian, Abdolhossein; Cheng, Sen

    2015-01-01

    Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity propagates synchronously through the network. This activity originates from the site of the local excitation and propagates through the network. The synchronous activity propagation persists, even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence. PMID:26089794
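
    For readers unfamiliar with the plasticity rule invoked here, a minimal pair-based STDP weight update looks roughly as follows; the amplitudes and time constants are generic textbook-style assumptions, not the values used in the study.

```python
# Minimal pair-based STDP update: potentiate when the presynaptic spike
# precedes the postsynaptic spike, depress otherwise, with exponential windows.
import numpy as np

A_plus, A_minus = 0.01, 0.012      # learning rates (assumed)
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms (assumed)
w_max = 1.0

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                                     # pre before post -> LTP
        return A_plus * np.exp(-dt / tau_plus)
    else:                                          # post before pre -> LTD
        return -A_minus * np.exp(dt / tau_minus)

w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (70, 72)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, w_max)
    print(f"pair ({t_pre}, {t_post}) ms -> w = {w:.4f}")
```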

  4. Prediction of hearing loss among the noise-exposed workers in a steel factory using artificial intelligence approach.

    PubMed

    Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim

    2015-08-01

    Prediction of hearing loss in noisy workplaces is considered to be an important aspect of hearing conservation programs. Artificial intelligence, as a new approach, can be used to predict complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for the prediction of the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, an audiometric test was carried out using a calibrated audiometer. The personal noise exposure was also measured using a noise dosimeter at the workers' workstations. Finally, the data obtained for five variables that can influence hearing loss were used for the development of the prediction model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. The neural network structures had one hidden layer, with the number of neurons ranging from approximately 5 to 15. The best-performing neural network, with one hidden layer and ten neurons, could accurately predict the hearing loss threshold with RMSE = 2.6 dB and R(2) = 0.89. The results also confirmed that neural networks could provide more accurate predictions than multiple regression. Since occupational hearing loss is frequently non-curable, results of accurate prediction can be used by occupational health experts to modify and improve noise exposure conditions.
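
    A sketch of the modelling setup described (a feed-forward network with one hidden layer of ten neurons, evaluated with RMSE and R^2) is given below using scikit-learn in place of MATLAB and synthetic data in place of the study's five worker-level variables, which are not listed in the abstract; the numbers it prints are therefore purely illustrative.

```python
# Illustrative one-hidden-layer regression sketch; data and printed scores
# are synthetic, not the study's results.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(4)
X = rng.normal(size=(210, 5))                                     # five predictor variables
y = 25 + 6 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 2, 210)   # toy threshold (dB)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.2f} dB, R^2 = {r2_score(y_te, pred):.2f}")
```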

  5. Synaptic Impairment and Robustness of Excitatory Neuronal Networks with Different Topologies

    PubMed Central

    Mirzakhalili, Ehsan; Gourgou, Eleni; Booth, Victoria; Epureanu, Bogdan

    2017-01-01

    Synaptic deficiencies are a known hallmark of neurodegenerative diseases, but the diagnosis of impaired synapses on the cellular level is not an easy task. Nonetheless, changes in the system-level dynamics of neuronal networks with damaged synapses can be detected using techniques that do not require high spatial resolution. This paper investigates how the structure/topology of neuronal networks influences their dynamics when they suffer from synaptic loss. We study different neuronal network structures/topologies by specifying their degree distributions. The modes of the degree distribution can be used to construct networks that consist of rich clubs and resemble small world networks, as well. We define two dynamical metrics to compare the activity of networks with different structures: persistent activity (namely, the self-sustained activity of the network upon removal of the initial stimulus) and quality of activity (namely, the percentage of neurons that participate in the persistent activity of the network). Our results show that synaptic loss affects the persistent activity of networks with bimodal degree distributions less than it affects random networks. The robustness of neuronal networks is enhanced when the distance between the modes of the degree distribution increases, suggesting that the rich clubs of networks with distinct modes keep the whole network active. In addition, a tradeoff is observed between the quality of activity and the persistent activity. For a range of distributions, both of these dynamical metrics are considerably high for networks with bimodal degree distributions compared to random networks. We also propose three different scenarios of synaptic impairment, which may correspond to different pathological or biological conditions. Regardless of the network structure/topology, the results demonstrate that synaptic loss has more severe effects on the activity of the network when impairments are correlated with the activity of the neurons. PMID:28659765
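
    One simple way to realize the bimodal degree distributions discussed here is the configuration model; in the sketch below the two modes, their proportions and the network size are assumptions chosen only to produce a visible "rich club" of hubs.

```python
# Sketch: build a network with a bimodal degree distribution via the
# configuration model (mode locations and proportions are assumptions).
import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
N, hub_fraction = 1000, 0.05
low = rng.poisson(4, int(N * (1 - hub_fraction)))        # first mode
high = rng.poisson(40, N - len(low))                     # second mode (hubs)
degrees = np.concatenate([low, high])
if degrees.sum() % 2:                                    # stub pairing needs an even total
    degrees[0] += 1

G = nx.configuration_model(degrees.tolist(), seed=5)
G = nx.Graph(G)                                          # drop parallel edges
G.remove_edges_from(nx.selfloop_edges(G))

deg = np.array([d for _, d in G.degree()])
print("mean degree:", deg.mean().round(2),
      "| fraction of nodes with degree > 20:", (deg > 20).mean().round(3))
```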

  6. PhotoMEA: an opto-electronic biosensor for monitoring in vitro neuronal network activity.

    PubMed

    Ghezzi, Diego; Pedrocchi, Alessandra; Menegon, Andrea; Mantero, Sara; Valtorta, Flavia; Ferrigno, Giancarlo

    2007-02-01

    PhotoMEA is a biosensor useful for the analysis of an in vitro neuronal network, fully based on optical methods. Its function is based on the stimulation of neurons with caged glutamate and the recording of neuronal activity by Voltage-Sensitive fluorescent Dyes (VSD). The main advantage is that it will be possible to stimulate even at the sub-single-neuron level and to record with high resolution the activity of the entire network in the culture. A large-scale view of neuronal intercommunications offers a unique opportunity for testing the ability of drugs to affect neuronal properties as well as alterations in the behaviour of the entire network. The concept and a prototype for validation are described here in detail.

  7. Synaptic dynamics regulation in response to high frequency stimulation in neuronal networks

    NASA Astrophysics Data System (ADS)

    Su, Fei; Wang, Jiang; Li, Huiyan; Wei, Xile; Yu, Haitao; Deng, Bin

    2018-02-01

    High frequency stimulation (HFS) has a confirmed ability to modulate pathological neural activities; however, its detailed mechanism is unclear. This study aims to explore the effects of HFS on neuronal network dynamics. First, two-neuron FitzHugh-Nagumo (FHN) networks with static coupling strength and small-world FHN networks with spike-timing-dependent plasticity (STDP)-modulated synaptic coupling strength are constructed. Then, the multi-scale method is used to transform the network models into equivalent averaged models, where the HFS intensity is modeled as the ratio between stimulation amplitude and frequency. Results show that in the static two-neuron networks, there is still synaptic current projected to the postsynaptic neuron even if the presynaptic neuron is blocked by the HFS. In the small-world networks, the effects of the STDP adjusting-rate parameter on the inactivation ratio and synchrony degree increase with increasing HFS intensity. However, only when the HFS intensity becomes very large can the STDP time-window parameter affect the inactivation ratio and synchrony index. Both simulation and numerical analysis demonstrate that the effects of HFS on neuronal network dynamics are realized through the adjustment of synaptic variables and conductance.
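
    As a point of reference for the model class used in the paper, the sketch below integrates a two-neuron FitzHugh-Nagumo network with static diffusive coupling while one neuron receives a simple high-frequency square-wave input; the parameter values, coupling form and stimulation waveform are assumptions and do not reproduce the paper's averaged model.

```python
# Two-neuron FitzHugh-Nagumo network with static coupling (illustrative only).
import numpy as np

eps, a, b = 0.08, 0.7, 0.8
g = 0.1                                   # static coupling strength (assumed)
dt, T = 0.05, 400.0
steps = int(T / dt)

def hfs(t, amplitude=0.5, freq=1.0):
    """Simple high-frequency square-wave stimulation applied to neuron 0."""
    return amplitude * np.sign(np.sin(2 * np.pi * freq * t))

v = np.array([-1.0, -1.2]); w = np.array([0.5, 0.4])
trace = np.empty((steps, 2))
for k in range(steps):
    t = k * dt
    I = np.array([0.5 + hfs(t), 0.5])                 # neuron 0 receives HFS
    coupling = g * (v[::-1] - v)                      # mutual diffusive coupling
    dv = v - v ** 3 / 3 - w + I + coupling
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw
    trace[k] = v

print("voltage ranges:", trace.min(0).round(2), "to", trace.max(0).round(2))
```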

  8. Training a Network of Electronic Neurons for Control of a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated, and results are presented from an experiment in which the mobile robot drives completely autonomously in an unknown environment with obstacles.

  9. Establishment of a Human Neuronal Network Assessment System by Using a Human Neuron/Astrocyte Co-Culture Derived from Fetal Neural Stem/Progenitor Cells.

    PubMed

    Fukushima, Kazuyuki; Miura, Yuji; Sawada, Kohei; Yamazaki, Kazuto; Ito, Masashi

    2016-01-01

    Using human cell models mimicking the central nervous system (CNS) provides a better understanding of the human CNS, and it is a key strategy to improve success rates in CNS drug development. In the CNS, neurons function as networks in which astrocytes play important roles. Thus, an assessment system of neuronal network functions in a co-culture of human neurons and astrocytes has potential to accelerate CNS drug development. We previously demonstrated that human hippocampus-derived neural stem/progenitor cells (HIP-009 cells) were a novel tool to obtain human neurons and astrocytes in the same culture. In this study, we applied HIP-009 cells to a multielectrode array (MEA) system to detect neuronal signals as neuronal network functions. We observed spontaneous firings of HIP-009 neurons, and validated functional formation of neuronal networks pharmacologically. By using this assay system, we investigated effects of several reference compounds, including agonists and antagonists of glutamate and γ-aminobutyric acid receptors, and sodium, potassium, and calcium channels, on neuronal network functions using firing and burst numbers, and synchrony as readouts. These results indicate that the HIP-009/MEA assay system is applicable to the pharmacological assessment of drug candidates affecting synaptic functions for CNS drug development. © 2015 Society for Laboratory Automation and Screening.

  10. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive.

    PubMed

    Encke, Jörg; Hemmert, Werner

    2018-01-01

    The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for the localization of low-frequency sound sources in the horizontal plane is the inter-aural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and ANF model. Using this predictor, we show that the MSO network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.
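
    The idea of a linear opponent-channel readout can be illustrated with synthetic hemispheric tuning curves (these sigmoids are assumptions, not the output of the MSO model): the ITD is decoded from the difference between left and right population rates via a linear fit.

```python
# Sketch of a linear opponent-channel decoder on synthetic ITD-rate curves.
import numpy as np

rng = np.random.default_rng(6)
itd = np.linspace(-500, 500, 101)                     # microseconds

def rate(itd_us, sign):
    """Hemisphere-specific mean rate: a sigmoid rising for contralateral-leading ITDs."""
    return 50.0 / (1.0 + np.exp(-sign * itd_us / 150.0))

left, right = rate(itd, +1), rate(itd, -1)
diff = left - right                                   # opponent-channel signal

# Calibrate a linear decoder on noiseless curves, then test on noisy trials.
coeffs = np.polyfit(diff, itd, 1)
test_itd = np.array([-200.0, -50.0, 10.0, 150.0])
noisy_diff = rate(test_itd, +1) - rate(test_itd, -1) + rng.normal(0, 1, 4)
print("decoded ITD (us):", np.polyval(coeffs, noisy_diff).round(1))
```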

  11. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Michiels van Kessenich, L.; de Arcangelis, L.; Herrmann, H. J.

    2016-08-01

    Neuronal avalanches measured in vitro and in vivo in different cortical networks consistently exhibit power law behaviour for the size and duration distributions with exponents typical for a mean field self-organized branching process. These exponents are also recovered in neuronal network simulations implementing various neuronal dynamics on different network topologies. They can therefore be considered a very robust feature of spontaneous neuronal activity. Interestingly, this scaling behaviour is also observed on regular lattices in finite dimensions, which raises the question about the origin of the mean field behaviour observed experimentally. In this study we provide an answer to this open question by investigating the effect of activity-dependent plasticity in combination with the neuronal refractory time in a neuronal network. Results show that the refractory time hinders backward avalanches, forcing a directed propagation. Hebbian plastic adaptation plays the role of sculpting these directed avalanche patterns into the topology of the network, slowly changing it into a branched structure where loops are marginal.

  12. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches.

    PubMed

    Michiels van Kessenich, L; de Arcangelis, L; Herrmann, H J

    2016-08-18

    Neuronal avalanches measured in vitro and in vivo in different cortical networks consistently exhibit power law behaviour for the size and duration distributions with exponents typical for a mean field self-organized branching process. These exponents are also recovered in neuronal network simulations implementing various neuronal dynamics on different network topologies. They can therefore be considered a very robust feature of spontaneous neuronal activity. Interestingly, this scaling behaviour is also observed on regular lattices in finite dimensions, which raises the question about the origin of the mean field behaviour observed experimentally. In this study we provide an answer to this open question by investigating the effect of activity-dependent plasticity in combination with the neuronal refractory time in a neuronal network. Results show that the refractory time hinders backward avalanches, forcing a directed propagation. Hebbian plastic adaptation plays the role of sculpting these directed avalanche patterns into the topology of the network, slowly changing it into a branched structure where loops are marginal.

  13. Clonal development and organization of the adult Drosophila central brain.

    PubMed

    Yu, Hung-Hsiang; Awasaki, Takeshi; Schroeder, Mark David; Long, Fuhui; Yang, Jacob S; He, Yisheng; Ding, Peng; Kao, Jui-Chun; Wu, Gloria Yueh-Yi; Peng, Hanchuan; Myers, Gene; Lee, Tzumin

    2013-04-22

    The insect brain can be divided into neuropils that are formed by neurites of both local and remote origin. The complexity of the interconnections obscures how these neuropils are established and interconnected through development. The Drosophila central brain develops from a fixed number of neuroblasts (NBs) that deposit neurons in regional clusters. By determining individual NB clones and pursuing their projections into specific neuropils, we unravel the regional development of the brain neural network. Exhaustive clonal analysis revealed 95 stereotyped neuronal lineages with characteristic cell-body locations and neurite trajectories. Most clones show complex projection patterns, but despite the complexity, neighboring clones often coinnervate the same local neuropil or neuropils and further target a restricted set of distant neuropils. These observations argue for regional clonal development of both neuropils and neuropil connectivity throughout the Drosophila central brain. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. The relevance of network micro-structure for neural dynamics.

    PubMed

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.

  15. The Schizophrenia Risk Gene MIR137 Acts as a Hippocampal Gene Network Node Orchestrating the Expression of Genes Relevant to Nervous System Development and Function

    PubMed Central

    Loohuis, Nikkie FM Olde; Kasri, Nael Nadif; Glennon, Jeffrey C; van Bokhoven, Hans; Hébert, Sébastien S; Kaplan, Barry B.; Martens, Gerard JM; Aschrafi, Armaz

    2016-01-01

    MicroRNAs (miRs) are small regulatory molecules, which orchestrate neuronal development and plasticity through modulation of complex gene networks. microRNA-137 (miR-137) is a brain-enriched RNA with a critical role in regulating brain development and in mediating synaptic plasticity. Importantly, mutations in this miR are associated with the pathoetiology of schizophrenia (SZ), and there is a widespread assumption that disruptions in miR-137 expression lead to aberrant expression of gene regulatory networks associated with SZ. To systematically identify the mRNA targets for this miR, we performed miR-137 gain- and loss-of-function experiments in primary rat hippocampal neurons and profiled differentially expressed mRNAs through next-generation sequencing. We identified 500 genes that were bidirectionally activated or repressed in their expression by the modulation of miR-137 levels. Gene ontology analysis using two independent software resources suggested functions for these miR-137-regulated genes in neurodevelopmental processes, neuronal maturation processes and cell maintenance, all of which are known to be critical for proper brain circuitry formation. Since many of the putative miR-137 targets identified here also have been previously shown to be associated with SZ, we propose that this miR acts as a critical gene network hub contributing to the pathophysiology of this neurodevelopmental disorder. PMID:26925706

  16. Spike-train spectra and network response functions for non-linear integrate-and-fire neurons.

    PubMed

    Richardson, Magnus J E

    2008-11-01

    Reduced models have long been used as a tool for the analysis of the complex activity taking place in neurons and their coupled networks. Recent advances in experimental and theoretical techniques have further demonstrated the usefulness of this approach. Despite the often gross simplification of the underlying biophysical properties, reduced models can still present significant difficulties in their analysis, with the majority of exact and perturbative results available only for the leaky integrate-and-fire model. Here an elementary numerical scheme is demonstrated which can be used to calculate a number of biologically important properties of the general class of non-linear integrate-and-fire models. Exact results for the first-passage-time density and spike-train spectrum are derived, as well as the linear response properties and emergent states of recurrent networks. Given that the exponential integrate-fire model has recently been shown to agree closely with the experimentally measured response of pyramidal cells, the methodology presented here promises to provide a convenient tool to facilitate the analysis of cortical-network dynamics.
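
    As background for the model class treated here, a forward-Euler sketch of the exponential integrate-and-fire (EIF) neuron driven by noisy input is given below; the parameters are generic textbook values and the paper's first-passage-time and spectral calculations are not reproduced.

```python
# Exponential integrate-and-fire neuron with noisy drive (illustrative only).
import numpy as np

rng = np.random.default_rng(7)
tau_m, E_L, V_T, Delta_T = 20.0, -65.0, -50.0, 2.0      # ms, mV
V_reset, V_spike = -65.0, 0.0
dt, T = 0.05, 2000.0
mu, sigma = 12.0, 3.0                                    # mean and noise of input (mV)

V, spikes = E_L, []
for k in range(int(T / dt)):
    xi = rng.normal(0.0, 1.0)
    dV = (-(V - E_L) + Delta_T * np.exp((V - V_T) / Delta_T) + mu) / tau_m
    V += dt * dV + sigma * np.sqrt(dt / tau_m) * xi
    if V >= V_spike:                                     # spike and reset
        spikes.append(k * dt)
        V = V_reset

rate = 1000.0 * len(spikes) / T                          # spikes per second
print(f"{len(spikes)} spikes, mean rate ~ {rate:.1f} Hz")
```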

  17. The neuroanatomical function of leptin in the hypothalamus.

    PubMed

    van Swieten, M M H; Pandit, R; Adan, R A H; van der Plasse, G

    2014-11-01

    The anorexigenic hormone leptin plays an important role in the control of food intake and feeding-related behavior, in large part through its action in the hypothalamus. The adipose-derived hormone modulates a complex network of several intercommunicating orexigenic and anorexigenic neuropeptides in the hypothalamus to reduce food intake and increase energy expenditure. In this review we present an updated overview of the functional role of leptin with respect to feeding and feeding-related behavior in the distinct hypothalamic nuclei. In addition to the arcuate nucleus, which is a major leptin-sensitive hub, leptin-responsive neurons in other hypothalamic nuclei, including the dorsomedial, ventromedial and paraventricular nuclei and the lateral hypothalamic area, are direct targets of leptin. However, leptin also modulates hypothalamic neurons in an indirect manner, such as via the melanocortin system. The dissection of the complexity of leptin's action on the networks involved in energy balance is the subject of recent and future studies. A full understanding of the role of hypothalamic leptin in the regulation of energy balance requires cell-specific manipulation using conditional deletion and expression of leptin receptors. In addition, optogenetic and pharmacogenetic tools, in combination with other pharmacological approaches (such as the recently discovered leptin receptor antagonist) and neuronal tracing techniques to map the circuit, will be helpful to understand the role of leptin receptor-expressing neurons. A better understanding of these circuits and the involvement of leptin could provide potential sites for therapeutic interventions in obesity and metabolic diseases characterized by dysregulation of energy balance. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Homeostatic Scaling of Excitability in Recurrent Neural Networks

    PubMed Central

    Remme, Michiel W. H.; Wadman, Wytse J.

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604

  19. Periodic activation function and a modified learning algorithm for the multivalued neuron.

    PubMed

    Aizenberg, Igor

    2010-12-01

    In this paper, we consider a new periodic activation function for the multivalued neuron (MVN). The MVN is a neuron with complex-valued weights and inputs/output, which are located on the unit circle. Although the MVN outperforms many other neurons and MVN-based neural networks have shown their high potential, the MVN still has a limited capability of learning highly nonlinear functions. A periodic activation function, which is introduced in this paper, makes it possible to learn nonlinearly separable problems and non-threshold multiple-valued functions using a single multivalued neuron. We call this neuron a multivalued neuron with a periodic activation function (MVN-P). The MVN-P's functionality is much higher than that of the regular MVN. The MVN-P is more efficient in solving various classification problems. A learning algorithm based on the error-correction rule for the MVN-P is also presented. It is shown that a single MVN-P can easily learn and solve those benchmark classification problems that were considered unsolvable using a single neuron. It is also shown that a universal binary neuron, which can learn nonlinearly separable Boolean functions, and a regular MVN are particular cases of the MVN-P.
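
    A sketch of the discrete MVN activation and a periodic variant is given below. It follows one common formulation in which the unit circle is split into k (or k*l) sectors and the weighted sum is mapped to a root of unity by its argument; the exact parameterization used in the paper may differ, so treat this purely as an illustration.

```python
# Discrete multivalued-neuron activation and a periodic variant (illustrative).
import numpy as np

def mvn_activation(z, k):
    """Map the weighted sum z onto the k-th roots of unity by its argument."""
    j = int(np.floor(k * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
    return np.exp(2j * np.pi * j / k)

def mvn_p_activation(z, k, ell):
    """Periodic variant: split the circle into k*ell sectors and wrap the
    sector index back onto k output values (index taken modulo k)."""
    j = int(np.floor(k * ell * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
    return np.exp(2j * np.pi * (j % k) / k)

# A single neuron with complex weights acting on inputs on the unit circle.
x1, x2 = np.exp(1j * np.pi / 3), np.exp(1j * 4 * np.pi / 3)   # inputs
w0, w1, w2 = 0.2 + 0.1j, 1.0 - 0.5j, -0.3 + 0.8j              # complex weights (assumed)
z = w0 + w1 * x1 + w2 * x2

print("plain MVN output:   ", np.round(mvn_activation(z, k=4), 3))
print("periodic MVN output:", np.round(mvn_p_activation(z, k=4, ell=2), 3))
```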

  20. Multiple conserved cell adhesion protein interactions mediate neural wiring of a sensory circuit in C. elegans.

    PubMed

    Kim, Byunghyuk; Emmons, Scott W

    2017-09-13

    Nervous system function relies on precise synaptic connections. A number of widely-conserved cell adhesion proteins are implicated in cell recognition between synaptic partners, but how these proteins act as a group to specify a complex neural network is poorly understood. Taking advantage of known connectivity in C. elegans , we identified and studied cell adhesion genes expressed in three interacting neurons in the mating circuits of the adult male. Two interacting pairs of cell surface proteins independently promote fasciculation between sensory neuron HOA and its postsynaptic target interneuron AVG: BAM-2/neurexin-related in HOA binds to CASY-1/calsyntenin in AVG; SAX-7/L1CAM in sensory neuron PHC binds to RIG-6/contactin in AVG. A third, basal pathway results in considerable HOA-AVG fasciculation and synapse formation in the absence of the other two. The features of this multiplexed mechanism help to explain how complex connectivity is encoded and robustly established during nervous system development.

  1. Inherited Paediatric Motor Neuron Disorders: Beyond Spinal Muscular Atrophy

    PubMed Central

    Sampaio, Hugo; Mowat, David; Roscioli, Tony

    2017-01-01

    Paediatric motor neuron diseases encompass a group of neurodegenerative diseases characterised by the onset of muscle weakness and atrophy before the age of 18 years, attributable to motor neuron loss across various neuronal networks in the brain and spinal cord. While the genetic underpinnings are diverse, advances in next generation sequencing have transformed diagnostic paradigms. This has reinforced the clinical phenotyping and molecular genetic expertise required to navigate the complexities of such diagnoses. In turn, improved genetic technology and subsequent gene identification have enabled further insights into the mechanisms of motor neuron degeneration and how these diseases form part of a neurodegenerative disorder spectrum. Common pathophysiologies include abnormalities in axonal architecture and function, RNA processing, and protein quality control. This review incorporates an overview of the clinical manifestations, genetics, and pathophysiology of inherited paediatric motor neuron disorders beyond classic SMN1-related spinal muscular atrophy and describes recent advances in next generation sequencing and its clinical application. Specific disease-modifying treatment is becoming a clinical reality in some disorders of the motor neuron highlighting the importance of a timely and specific diagnosis. PMID:28634552

  2. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on network size and execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size, while keeping the network execution speed close to real time with high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a suitable candidate for studies of the neural control of cognitive robots and systems as well.
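
    To make the "step-by-step integration" concrete, the sketch below performs a forward-Euler update of a single Hodgkin-Huxley neuron, the kind of per-time-step arithmetic an FPGA pipeline has to realize; it uses standard textbook parameters and floating point, with none of the CORDIC or fixed-point details of the paper.

```python
# Forward-Euler integration of one Hodgkin-Huxley neuron (textbook parameters).
import numpy as np

C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3       # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4             # mV

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

dt, T, I_ext = 0.01, 50.0, 10.0                  # ms, ms, uA/cm^2
V, n, m, h = -65.0, 0.317, 0.053, 0.596          # resting state
spike_count, above = 0, False
for _ in range(int(T / dt)):
    I_ion = (g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K) + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    if V > 0 and not above:                      # count upward crossings of 0 mV
        spike_count += 1
    above = V > 0

print("spikes in", T, "ms:", spike_count)
```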

  3. Evaluating a Multivariate Directional Connectivity Measure for Use in Electroencephalogram (EEG) Network Analysis Using a Conductance-Based Neuron Network Model

    DTIC Science & Technology

    2015-03-01

    [Abstract not available in the retrieved record; the extracted text consists only of figure-caption fragments ("Values of 7 information-theoretic criteria plotted against the model order used; the legend is labeled according to the figures in which the power spectra appear", citing Brovelli et al. 2004) and a partial reference on identification of directed influence via Granger causality and Kullback-Leibler divergence (Neural Computation. 2012;24(7):1722-1739).]

  4. Detection of 5-hydroxytryptamine (5-HT) in vitro using a hippocampal neuronal network-based biosensor with extracellular potential analysis of neurons.

    PubMed

    Hu, Liang; Wang, Qin; Qin, Zhen; Su, Kaiqi; Huang, Liquan; Hu, Ning; Wang, Ping

    2015-04-15

    5-hydroxytryptamine (5-HT) is an important neurotransmitter in regulating emotions and related behaviors in mammals. To detect and monitor 5-HT, effective and convenient methods are needed for the investigation of neuronal networks. In this study, hippocampal neuronal networks (HNNs) endogenously expressing 5-HT receptors were employed as sensing elements to build an in vitro neuronal network-based biosensor. The electrophysiological characteristics were analyzed at both the neuron and network levels. The firing rates and amplitudes were derived from the recorded signals to determine the biosensor response characteristics. The experimental results demonstrate a dose-dependent inhibitory effect of 5-HT on hippocampal neuron activities, indicating the effectiveness of this hybrid biosensor in detecting 5-HT with a response range from 0.01 μmol/L to 10 μmol/L. In addition, the cross-correlation analysis of HNN activities suggests 5-HT could weaken HNN connectivity reversibly, providing further specificity for this biosensor in detecting 5-HT. Moreover, 5-HT-induced spatiotemporal firing pattern alterations could be monitored at the neuron and network levels simultaneously by this hybrid biosensor in a convenient and direct way. With these merits, this neuronal network-based biosensor promises to be a valuable and practical platform for the study of neurotransmitters in vitro. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Extensive excitatory network interactions shape temporal processing of communication signals in a model sensory system.

    PubMed

    Ma, Xiaofeng; Kohashi, Tsunehiko; Carlson, Bruce A

    2013-07-01

    Many sensory brain regions are characterized by extensive local network interactions. However, we know relatively little about the contribution of this microcircuitry to sensory coding. Detailed analyses of neuronal microcircuitry are usually performed in vitro, whereas sensory processing is typically studied by recording from individual neurons in vivo. The electrosensory pathway of mormyrid fish provides a unique opportunity to link in vitro studies of synaptic physiology with in vivo studies of sensory processing. These fish communicate by actively varying the intervals between pulses of electricity. Within the midbrain posterior exterolateral nucleus (ELp), the temporal filtering of afferent spike trains establishes interval tuning by single neurons. We characterized pairwise neuronal connectivity among ELp neurons with dual whole cell recording in an in vitro whole brain preparation. We found a densely connected network in which single neurons influenced the responses of other neurons throughout the network. Similarly tuned neurons were more likely to share an excitatory synaptic connection than differently tuned neurons, and synaptic connections between similarly tuned neurons were stronger than connections between differently tuned neurons. We propose a general model for excitatory network interactions in which strong excitatory connections both reinforce and adjust tuning and weak excitatory connections make smaller modifications to tuning. The diversity of interval tuning observed among this population of neurons can be explained, in part, by each individual neuron receiving a different complement of local excitatory inputs.

  6. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks

    PubMed Central

    Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks. PMID:29551968
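
    The full scheme iterates on spike-train power spectra; as a much-simplified, rate-level analogue only, the sketch below drives a diffusion-approximation leaky integrate-and-fire neuron with input statistics that depend on a presumed network rate and iterates that rate to a fixed point. All network parameters are assumptions in the spirit of a sparse, inhibition-dominated network, not values from the paper.

```python
# Rate-level self-consistency sketch (a simplification of the spectral scheme).
import numpy as np

tau_m, V_th, V_r = 20.0, 20.0, 10.0           # membrane time constant (ms), threshold/reset (mV)
J, K_E, K_I, g = 0.1, 400, 100, 5.0           # synaptic weight (mV), in-degrees, relative inhibition
nu_ext = 30.0                                 # external excitatory rate per connection (Hz)
dt, T = 0.1, 5000.0                           # time step and duration (ms)

def simulate_rate(nu_net, seed):
    """Simulate one LIF neuron in diffusion approximation for a given network rate (Hz)."""
    rng = np.random.default_rng(seed)
    nu_e = (nu_net + nu_ext) / 1000.0         # excitatory input rate per connection (spikes/ms)
    nu_i = nu_net / 1000.0                    # inhibitory input rate per connection (spikes/ms)
    mu = tau_m * J * (K_E * nu_e - g * K_I * nu_i)                  # mean input (mV)
    sigma = J * np.sqrt(tau_m * (K_E * nu_e + g * g * K_I * nu_i))  # input fluctuations (mV)
    V, n_spikes = 0.0, 0
    for _ in range(int(T / dt)):
        V += dt * (mu - V) / tau_m + sigma * np.sqrt(dt / tau_m) * rng.normal()
        if V >= V_th:
            V, n_spikes = V_r, n_spikes + 1
    return 1000.0 * n_spikes / T              # output rate (Hz)

nu = 5.0                                      # initial guess for the network rate (Hz)
for it in range(8):                           # damped fixed-point (self-consistency) iteration
    nu = 0.5 * nu + 0.5 * simulate_rate(nu, seed=it)
    print(f"iteration {it}: self-consistent network rate ~ {nu:.2f} Hz")
```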

  7. Nanostructured superhydrophobic substrates trigger the development of 3D neuronal networks.

    PubMed

    Limongi, Tania; Cesca, Fabrizia; Gentile, Francesco; Marotta, Roberto; Ruffilli, Roberta; Barberis, Andrea; Dal Maschio, Marco; Petrini, Enrica Maria; Santoriello, Stefania; Benfenati, Fabio; Di Fabrizio, Enzo

    2013-02-11

    The generation of 3D networks of primary neurons is a big challenge in neuroscience. Here, a novel method is presented for a 3D neuronal culture on superhydrophobic (SH) substrates. How nano-patterned SH devices stimulate neurons to build 3D networks is investigated. Scanning electron microscopy and confocal imaging show that soon after plating neurites adhere to the nanopatterned pillar sidewalls and they are subsequently pulled between pillars in a suspended position. These neurons display an enhanced survival rate compared to standard cultures and develop mature networks with physiological excitability. These findings underline the importance of using nanostructured SH surfaces for directing 3D neuronal growth, as well as for the design of biomaterials for neuronal regeneration. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons

    NASA Astrophysics Data System (ADS)

    Costa, Ariadne; Brochini, Ludmila; Kinouchi, Osame

    2017-08-01

    Networks of stochastic spiking neurons are interesting models in the area of Theoretical Neuroscience, presenting both continuous and discontinuous phase transitions. Here we study fully connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary slightly supercritical state (self-organized supercriticality or SOSC) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss the possible applications of the idea of SOSC to biological phenomena like epilepsy and dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches, with frequencies compatible with characteristic brain rhythms.

  9. Biological neural networks as model systems for designing future parallel processing computers

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this simplest of neural networks in sufficient detail for extrapolation to computers and robots, a start was made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.

  10. GABA receptors and T-type Ca2+ channels crosstalk in thalamic networks.

    PubMed

    Leresche, Nathalie; Lambert, Régis C

    2017-06-07

    Although the thalamus presents a rather limited repertoire of GABAergic cell types compared to other CNS areas, this structure is a privileged system for studying how GABA impacts neuronal network excitability. Indeed, both glutamatergic thalamocortical (TC) and GABAergic nucleus reticularis thalami (NRT) neurons show high expression of T-type voltage-dependent Ca2+ channels, whose activation, which shapes the output of the thalamus, critically depends upon a preceding hyperpolarisation. Because of this strict dependence, a tight functional link between GABA-mediated hyperpolarization and T-currents characterizes thalamic network excitability. In this review we summarize a number of studies showing that the relationships between the various thalamic GABA A/B receptors and T-channels are complex and bidirectional. We discuss how this dynamic interaction sets the global intrathalamic network activity and its long-term plasticity, and highlight how the functional relationship between GABA release and T-channel-dependent excitability is finely tuned by the T-channel activation itself. Finally, we illustrate how an impaired balance between T-channels and GABA receptors can lead to pathologically abnormal cellular and network behaviours. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Estimating network parameters from combined dynamics of firing rate and irregularity of single neurons.

    PubMed

    Hamaguchi, Kosuke; Riehle, Alexa; Brunel, Nicolas

    2011-01-01

    High firing irregularity is a hallmark of cortical neurons in vivo, and modeling studies suggest a balance of excitation and inhibition is necessary to explain this high irregularity. Such a balance must be generated, at least partly, from local interconnected networks of excitatory and inhibitory neurons, but the details of the local network structure are largely unknown. The dynamics of the neural activity depend on the local network structure; this in turn suggests the possibility of estimating network structure from the dynamics of the firing statistics. Here we report a new method to estimate properties of the local cortical network from the instantaneous firing rate and irregularity (CV(2)) under the assumption that recorded neurons are part of a randomly connected sparse network. The firing irregularity, measured in monkey motor cortex, exhibits two features: many neurons show relatively stable firing irregularity in time and across different task conditions, and the time-averaged CV(2) is widely distributed from quasi-regular to irregular (CV(2) = 0.3-1.0). For each recorded neuron, we estimate the three parameters of a local network [balance of local excitation-inhibition, number of recurrent connections per neuron, and excitatory postsynaptic potential (EPSP) size] that best describe the dynamics of the measured firing rates and irregularities. Our analysis shows that optimal parameter sets form a two-dimensional manifold in the three-dimensional parameter space that is confined for most of the neurons to the inhibition-dominated region. High irregularity neurons tend to be more strongly connected to the local network, either in terms of larger EPSP and inhibitory PSP size or larger number of recurrent connections, compared with the low irregularity neurons, for a given excitatory/inhibitory balance. Incorporating either synaptic short-term depression or conductance-based synapses leads many low CV(2) neurons to move to the excitation-dominated region as well as to an increase of EPSP size.
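
    The irregularity statistic used here, CV(2), is straightforward to compute from successive interspike intervals; a small sketch is given below (the inversion from rate and CV(2) to network parameters is not reproduced).

    ```python
    import numpy as np

    # Local spike-train irregularity CV2: the average of
    # 2*|ISI_{k+1} - ISI_k| / (ISI_{k+1} + ISI_k) over successive interval pairs.
    def cv2(spike_times):
        isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
        if len(isi) < 2:
            return np.nan
        return np.mean(2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1]))

    rng = np.random.default_rng(0)
    poisson_train = np.cumsum(rng.exponential(0.1, size=1000))   # irregular: CV2 close to 1
    regular_train = np.arange(0.0, 100.0, 0.1)                   # quasi-regular: CV2 close to 0
    print(cv2(poisson_train), cv2(regular_train))
    ```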

  12. High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels†

    PubMed Central

    Müller, Jan; Ballini, Marco; Livi, Paolo; Chen, Yihui; Radivojevic, Milos; Shadmani, Amir; Viswam, Vijay; Jones, Ian L.; Fiscella, Michele; Diggelmann, Roland; Stettler, Alexander; Frey, Urs; Bakkum, Douglas J.; Hierlemann, Andreas

    2017-01-01

    Studies on information processing and learning properties of neuronal networks would benefit from simultaneous and parallel access to the activity of a large fraction of all neurons in such networks. Here, we present a CMOS-based device, capable of simultaneously recording the electrical activity of over a thousand cells in in vitro neuronal networks. The device provides sufficiently high spatiotemporal resolution to enable, at the same time, access to neuronal preparations on subcellular, cellular, and network level. The key feature is a rapidly reconfigurable array of 26 400 microelectrodes arranged at low pitch (17.5 μm) within a large overall sensing area (3.85 × 2.10 mm2). An arbitrary subset of the electrodes can be simultaneously connected to 1024 low-noise readout channels as well as 32 stimulation units. Each electrode or electrode subset can be used to electrically stimulate or record the signals of virtually any neuron on the array. We demonstrate the applicability and potential of this device for various different experimental paradigms: large-scale recordings from whole networks of neurons as well as investigations of axonal properties of individual neurons. PMID:25973786

  13. High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels.

    PubMed

    Müller, Jan; Ballini, Marco; Livi, Paolo; Chen, Yihui; Radivojevic, Milos; Shadmani, Amir; Viswam, Vijay; Jones, Ian L; Fiscella, Michele; Diggelmann, Roland; Stettler, Alexander; Frey, Urs; Bakkum, Douglas J; Hierlemann, Andreas

    2015-07-07

    Studies on information processing and learning properties of neuronal networks would benefit from simultaneous and parallel access to the activity of a large fraction of all neurons in such networks. Here, we present a CMOS-based device, capable of simultaneously recording the electrical activity of over a thousand cells in in vitro neuronal networks. The device provides sufficiently high spatiotemporal resolution to enable, at the same time, access to neuronal preparations on subcellular, cellular, and network level. The key feature is a rapidly reconfigurable array of 26 400 microelectrodes arranged at low pitch (17.5 μm) within a large overall sensing area (3.85 × 2.10 mm(2)). An arbitrary subset of the electrodes can be simultaneously connected to 1024 low-noise readout channels as well as 32 stimulation units. Each electrode or electrode subset can be used to electrically stimulate or record the signals of virtually any neuron on the array. We demonstrate the applicability and potential of this device for various different experimental paradigms: large-scale recordings from whole networks of neurons as well as investigations of axonal properties of individual neurons.

  14. Linking dynamics of the inhibitory network to the input structure

    PubMed Central

    Komarov, Maxim

    2017-01-01

    Networks of inhibitory interneurons are found in many distinct classes of biological systems. Inhibitory interneurons govern the dynamics of principal cells and are likely to be critically involved in the coding of information. In this theoretical study, we describe the dynamics of a generic inhibitory network in terms of low-dimensional, simplified rate models. We study the relationship between the structure of external input applied to the network and the patterns of activity arising in response to that stimulation. We found that even a minimal inhibitory network can generate a great diversity of spatio-temporal patterning including complex bursting regimes with non-trivial ratios of burst firing. Despite the complexity of these dynamics, the network’s response patterns can be predicted from the rankings of the magnitudes of external inputs to the inhibitory neurons. This type of invariant dynamics is robust to noise and stable in densely connected networks with strong inhibitory coupling. Our study predicts that the response dynamics generated by an inhibitory network may provide critical insights about the temporal structure of the sensory input it receives. PMID:27650865

  15. Phase synchronization of bursting neurons in clustered small-world networks

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Lameu, E. L.; Batista, A. M.; Lopes, S. R.; Pereira, T.; Zamora-López, G.; Kurths, J.; Viana, R. L.

    2012-07-01

    We investigate the collective dynamics of bursting neurons on clustered networks. The clustered network model is composed of subnetworks, each of them presenting the so-called small-world property. This model can also be regarded as a network of networks. In each subnetwork a neuron is connected to other ones with regular as well as random connections, the latter with a given intracluster probability. Moreover, in a given subnetwork each neuron has an intercluster probability to be connected to the other subnetworks. The local neuron dynamics has two time scales (fast and slow) and is modeled by a two-dimensional map. In such a small-world network the neuron parameters are chosen to be slightly different such that, if the coupling strength is large enough, there may be synchronization of the bursting (slow) activity. We give bounds for the critical coupling strength to obtain global burst synchronization in terms of the network structure, that is, the probabilities of intracluster and intercluster connections. We find that, as the heterogeneity in the network is reduced, the network global synchronizability is improved. We show that the transitions to global synchrony may be abrupt or smooth depending on the intercluster probability.
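
    A compact way to reproduce this setting is to couple two-dimensional map neurons on small-world subnetworks joined by sparse intercluster links. The sketch below assumes the chaotic Rulkov map as the fast-slow neuron model and gauges burst synchrony crudely by the spread of the slow variables; the map choice, the two-cluster topology, and all parameter values are assumptions.

    ```python
    import numpy as np
    import networkx as nx

    # Two small-world subnetworks of chaotic Rulkov maps joined by sparse
    # intercluster links; all parameter values are illustrative assumptions.
    rng = np.random.default_rng(2)
    n_per, k, p_intra, p_inter = 100, 6, 0.1, 0.01

    g1 = nx.watts_strogatz_graph(n_per, k, p_intra, seed=1)
    g2 = nx.watts_strogatz_graph(n_per, k, p_intra, seed=2)
    A = np.zeros((2 * n_per, 2 * n_per))
    A[:n_per, :n_per] = nx.to_numpy_array(g1)
    A[n_per:, n_per:] = nx.to_numpy_array(g2)
    inter = rng.random((n_per, n_per)) < p_inter      # sparse intercluster connections
    A[:n_per, n_per:] = inter
    A[n_per:, :n_per] = inter.T

    N = 2 * n_per
    deg = A.sum(axis=1)
    alpha = rng.uniform(4.1, 4.3, N)                  # slightly heterogeneous neurons
    sigma, beta, eps = 0.001, 0.001, 0.2              # slow-variable and coupling parameters
    x = rng.uniform(-1.0, 1.0, N)                     # fast variable
    y = np.full(N, -2.8)                              # slow variable

    spread = []
    for t in range(20000):
        coupling = eps * (A @ x / np.maximum(deg, 1.0) - x)
        x, y = alpha / (1.0 + x**2) + y + coupling, y - sigma * x - beta
        if t > 10000:
            spread.append(y.std())                    # small spread = synchronized bursting

    print("mean spread of slow variables:", np.mean(spread))
    ```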

  16. Effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks

    NASA Astrophysics Data System (ADS)

    Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen

    2017-05-01

    In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay pdelay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.

  17. Effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks.

    PubMed

    Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen

    2017-05-01

    In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay pdelay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.
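
    The partial-delay mechanism itself is easy to isolate in a toy model: each connection of a Watts-Strogatz graph is delayed by τ with probability pdelay and is instantaneous otherwise. The sketch below uses phase oscillators as a stand-in for the spiking neurons of the study, so the model, the order parameter, and all parameter values are assumptions meant only to show how a partially delayed coupling is implemented.

    ```python
    import numpy as np
    import networkx as nx

    # Watts-Strogatz graph in which each connection is delayed with probability
    # p_delay; phase oscillators stand in for the spiking neurons of the study.
    rng = np.random.default_rng(3)
    N, k, p_rewire = 200, 8, 0.1
    A = nx.to_numpy_array(nx.watts_strogatz_graph(N, k, p_rewire, seed=4))

    dt, steps, tau, p_delay, K = 0.01, 10000, 1.0, 0.3, 0.5
    d_steps = int(tau / dt)
    delayed = (rng.random((N, N)) < p_delay) & (A > 0)    # which connections carry a delay

    omega = rng.normal(1.0, 0.05, N)                      # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)
    history = np.tile(theta, (d_steps + 1, 1))            # ring buffer of past phases
    degree = A.sum(axis=1)

    R = []
    for t in range(steps):
        th_delayed = history[t % (d_steps + 1)]           # phases from tau ago
        # presynaptic phase: delayed on delayed connections, current otherwise
        phase_pre = np.where(delayed, th_delayed[None, :], theta[None, :])
        coupling = (A * np.sin(phase_pre - theta[:, None])).sum(axis=1) / degree
        theta = theta + dt * (omega + K * coupling)
        history[t % (d_steps + 1)] = theta
        R.append(np.abs(np.exp(1j * theta).mean()))       # Kuramoto order parameter

    print("late-time phase synchronization:", np.mean(R[-2000:]))
    ```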

  18. Functional Interactions between Newborn and Mature Neurons Leading to Integration into Established Neuronal Circuits.

    PubMed

    Boulanger-Weill, Jonathan; Candat, Virginie; Jouary, Adrien; Romano, Sebastián A; Pérez-Schuster, Verónica; Sumbre, Germán

    2017-06-19

    From development up to adulthood, the vertebrate brain is continuously supplied with newborn neurons that integrate into established mature circuits. However, how this process is coordinated during development remains unclear. Using two-photon imaging, GCaMP5 transgenic zebrafish larvae, and sparse electroporation in the larva's optic tectum, we monitored spontaneous and induced activity of large neuronal populations containing newborn and functionally mature neurons. We observed that the maturation of newborn neurons is a 4-day process. Initially, newborn neurons showed undeveloped dendritic arbors, no neurotransmitter identity, and were unresponsive to visual stimulation, although they displayed spontaneous calcium transients. Later on, newborn-labeled neurons began to respond to visual stimuli but in a very variable manner. At the end of the maturation period, newborn-labeled neurons exhibited visual tuning curves (spatial receptive fields and direction selectivity) and spontaneous correlated activity with neighboring functionally mature neurons. At this developmental stage, newborn-labeled neurons presented complex dendritic arbors and neurotransmitter identity (excitatory or inhibitory). Removal of retinal inputs significantly perturbed the integration of newborn neurons into the functionally mature tectal network. Our results provide a comprehensive description of the maturation of newborn neurons during development and shed light on potential mechanisms underlying their integration into a functionally mature neuronal circuit. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  19. The effect of the neural activity on topological properties of growing neural networks.

    PubMed

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure of cortical networks defines how information is transmitted and processed; it underlies the complex spatiotemporal patterns of network development, and the creation and deletion of connections continues throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the growth of a neural network from disconnected neurons to a fully connected network. To quantify the influence of the network's activity on its topological properties, we compared it with a randomly grown network that does not depend on network activity. Analysis of the network's connection structure with methods from random graph theory shows that growth in neural networks results in the formation of a well-known "small-world" network.

  20. Thermal non-equilibrium in porous medium adjacent to vertical plate: ANN approach

    NASA Astrophysics Data System (ADS)

    Ahmed, N. J. Salman; Ahamed, K. S. Nazim; Al-Rashed, Abdullah A. A. A.; Kamangar, Sarfaraz; Athani, Abdulgaphur

    2018-05-01

    Thermal non-equilibrium in a porous medium is a condition in which the temperatures of the solid matrix and the fluid differ. This type of flow is complex and is governed by a complex set of partial differential equations. The current work predicts the thermal non-equilibrium behavior of a porous medium adjacent to a vertical plate using an artificial neural network. A set of neurons arranged in three layers is trained to predict the heat transfer characteristics. It is found that the thermal non-equilibrium heat transfer behavior, in terms of the Nusselt number of the fluid as well as the solid phase, can be predicted accurately by a well-trained neural network.
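
    The predictive model is a standard three-layer feed-forward regression network. The sketch below shows such a network mapping a few flow parameters to fluid- and solid-phase Nusselt numbers; the chosen input features, the synthetic training data, and the network size are placeholders, since the study trains on data obtained from the governing equations.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Three-layer network (inputs, one hidden layer, outputs) trained to map
    # flow parameters to two Nusselt numbers; data below are synthetic stand-ins.
    rng = np.random.default_rng(0)
    n = 500
    X = rng.uniform([10.0, 0.1, 0.1], [1000.0, 10.0, 10.0], size=(n, 3))
    y = np.column_stack([
        0.5 * X[:, 0] ** 0.25 + 0.2 * X[:, 1],          # stand-in for fluid-phase Nu
        0.3 * X[:, 0] ** 0.25 + 0.5 * X[:, 2],          # stand-in for solid-phase Nu
    ]) + rng.normal(0.0, 0.05, (n, 2))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
    model.fit(X_tr, y_tr)
    print("R^2 on held-out data:", model.score(X_te, y_te))
    ```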

  1. Circuit variability interacts with excitatory-inhibitory diversity of interneurons to regulate network encoding capacity.

    PubMed

    Tsai, Kuo-Ting; Hu, Chin-Kun; Li, Kuan-Wei; Hwang, Wen-Liang; Chou, Ya-Hui

    2018-05-23

    Local interneurons (LNs) in the Drosophila olfactory system exhibit neuronal diversity and variability, yet it is still unknown how these features impact information encoding capacity and reliability in a complex LN network. We employed two strategies to construct a diverse excitatory-inhibitory neural network beginning with a ring network structure and then introduced distinct types of inhibitory interneurons and circuit variability to the simulated network. The continuity of activity within the node ensemble (oscillation pattern) was used as a readout to describe the temporal dynamics of network activity. We found that inhibitory interneurons enhance the encoding capacity by protecting the network from extremely short activation periods when the network wiring complexity is very high. In addition, distinct types of interneurons have differential effects on encoding capacity and reliability. Circuit variability may enhance the encoding reliability, with or without compromising encoding capacity. Therefore, we have described how circuit variability of interneurons may interact with excitatory-inhibitory diversity to enhance the encoding capacity and distinguishability of neural networks. In this work, we evaluate the effects of different types and degrees of connection diversity on a ring model, which may simulate interneuron networks in the Drosophila olfactory system or other biological systems.

  2. Blocking synaptic transmission with tetanus toxin light chain reveals modes of neurotransmission in the PDF-positive circadian clock neurons of Drosophila melanogaster.

    PubMed

    Umezaki, Yujiro; Yasuyama, Kouji; Nakagoshi, Hideki; Tomioka, Kenji

    2011-09-01

    Circadian locomotor rhythms of Drosophila melanogaster are controlled by a neuronal circuit composed of approximately 150 clock neurons that are roughly classified into seven groups. In the circuit, a group of neurons expressing pigment-dispersing factor (PDF) play an important role in organizing the pacemaking system. Recent studies imply that unknown chemical neurotransmitter(s) (UNT) other than PDF is also expressed in the PDF-positive neurons. To explore its role in the circadian pacemaker, we examined the circadian locomotor rhythms of pdf-Gal4/UAS-TNT transgenic flies in which chemical synaptic transmission in PDF-positive neurons was blocked by expressed tetanus toxin light chain (TNT). In constant darkness (DD), the flies showed a free-running rhythm, which was similar to that of wild-type flies but significantly different from pdf null mutants. Under constant light conditions (LL), however, they often showed complex rhythms with a short period and a long period component. The UNT is thus likely involved in the synaptic transmission in the clock network and its release caused by LL leads to arrhythmicity. Immunocytochemistry revealed that LL induced phase separation in TIMELESS (TIM) cycling among some of the PDF-positive and PDF-negative clock neurons in the transgenic flies. These results suggest that both PDF and UNT play important roles in the Drosophila circadian clock, and activation of PDF pathway alone by LL leads to the complex locomotor rhythm through desynchronized oscillation among some of the clock neurons. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity

    PubMed Central

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852

  4. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity.

    PubMed

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows.

  5. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests": basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
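
    The starting point of the paper, the Subspace Learning Algorithm (Oja's symmetric subspace rule), can be sketched directly; the time-oriented hierarchical modification that rotates the basis toward the individual eigenvectors is not reproduced here, and the toy data and learning rate are assumptions.

    ```python
    import numpy as np

    # Oja's symmetric Subspace Learning Algorithm (SLA) on toy data with a
    # clear three-dimensional principal subspace.
    rng = np.random.default_rng(0)
    d, m, n_samples, eta = 10, 3, 20000, 0.01

    C = np.diag([5.0, 4.0, 3.0] + [0.2] * (d - 3))        # data covariance
    X = rng.multivariate_normal(np.zeros(d), C, size=n_samples)

    W = 0.1 * rng.standard_normal((d, m))                 # weights: d inputs -> m output neurons
    for x in X:
        y = W.T @ x                                       # output activities
        W += eta * (np.outer(x, y) - W @ np.outer(y, y))  # subspace learning rule

    # check: the learned columns span (roughly) the top-m principal subspace
    top_eigvecs = np.linalg.eigh(np.cov(X.T))[1][:, -m:]
    cosines = np.linalg.svd(top_eigvecs.T @ np.linalg.qr(W)[0], compute_uv=False)
    print("cosines of principal angles:", np.round(cosines, 3))
    ```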

  6. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks

    PubMed Central

    Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370
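
    The energy-pool idea can be illustrated with leaky integrate-and-fire neurons that may only spike when a local pool can pay a fixed cost, the pool regenerating at a constant rate. This is not the authors' SNEP implementation; the neuron model, the reset convention when energy is insufficient, and every parameter value below are assumptions, and the sketch only shows the pool capping the sustained population rate rather than the full oscillatory phenomenology.

    ```python
    import numpy as np

    # Leaky integrate-and-fire neurons that can only spike if a local energy
    # pool covers a fixed cost; pools regenerate at a constant rate.
    rng = np.random.default_rng(0)
    N, steps, dt = 200, 20000, 1e-3
    tau_m, v_th = 0.02, 1.0
    J, p_conn = 0.1, 0.1
    W = (rng.random((N, N)) < p_conn) * J                 # excitatory recurrent weights
    np.fill_diagonal(W, 0.0)

    cost, regen, E_max = 1.0, 0.02, 10.0                  # cost per spike, regeneration per step, capacity
    v = rng.uniform(0.0, 1.0, N)
    E = np.full(N, E_max)
    rate = np.zeros(steps)

    for t in range(steps):
        drive = 1.2 + 0.5 * rng.standard_normal(N)        # noisy external drive
        v += dt / tau_m * (-v + drive)
        fired = (v >= v_th) & (E >= cost)                 # spiking requires sufficient energy
        v = np.where(v >= v_th, 0.0, v)                   # membrane resets either way (a modeling choice)
        E = np.minimum(E - cost * fired + regen, E_max)   # pay the cost, then regenerate
        v += W @ fired.astype(float)                      # recurrent excitation from spikes
        rate[t] = fired.mean() / dt

    print("population rate early:", rate[:2000].mean(), "Hz  late:", rate[-2000:].mean(), "Hz")
    ```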

  7. Area postrema projects to FoxP2 neurons of the pre-locus coeruleus and parabrachial nuclei: brainstem sites implicated in sodium appetite regulation.

    PubMed

    Stein, Matthew K; Loewy, Arthur D

    2010-11-04

    The area postrema (AP) is a circumventricular organ located in the dorsal midline of the medulla. It functions as a chemosensor for blood-borne peptides and solutes, and converts this information into neural signals that are transmitted to the nucleus tractus solitarius (NTS) and parabrachial nucleus (PB). One of its NTS targets in the rat is the aldosterone-sensitive neurons which contain the enzyme 11 β-hydroxysteroid dehydrogenase type 2 (HSD2). The HSD2 neurons are part of a central network involved in sodium appetite regulation, and they innervate numerous brain sites including the pre-locus coeruleus (pre-LC) and PB external lateral-inner (PBel-inner) cell groups of the dorsolateral pons. Both pontine cell groups express the transcription factor FoxP2 and become c-Fos activated following sodium depletion. Because the AP is a component in this network, we wanted to determine whether it also projects to the same sites as the HSD2 neurons. By using a combination of anterograde axonal and retrograde cell body tract-tracing techniques in individual rats, we show that the AP projects to FoxP2 immunoreactive neurons in the pre-LC and PBel-inner. Thus, the AP sends a direct projection to both the first-order medullary (HSD2 neurons of the NTS) and the second-order dorsolateral pontine neurons (pre-LC and PB-el inner neurons). All three sites transmit information related to systemic sodium depletion to forebrain sites and are part of the central neural circuitry that regulates the complex behavior of sodium appetite. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. Morphometric analysis of astrocytes in brainstem respiratory regions.

    PubMed

    Sheikhbahaei, Shahriar; Morris, Brian; Collina, Jared; Anjum, Sommer; Znati, Sami; Gamarra, Julio; Zhang, Ruli; Gourine, Alexander V; Smith, Jeffrey C

    2018-06-11

    Astrocytes, the most abundant and structurally complex glial cells of the central nervous system, are proposed to play an important role in modulating the activities of neuronal networks, including respiratory rhythm-generating circuits of the preBötzinger complex (preBötC) located in the ventrolateral medulla of the brainstem. However, structural properties of astrocytes residing within different brainstem regions are unknown. In this study astrocytes in the preBötC, an intermediate reticular formation (IRF) region with respiratory-related function, and a region of the nucleus tractus solitarius (NTS) in adult rats were reconstructed and their morphological features were compared. Detailed morphological analysis revealed that preBötC astrocytes are structurally more complex than those residing within the functionally distinct neighboring IRF region, or the NTS, located at the dorsal aspect of the medulla oblongata. Structural analyses of the brainstem microvasculature indicated no significant regional differences in vascular properties. We hypothesize that high morphological complexity of preBötC astrocytes reflects their functional role in providing structural/metabolic support and modulation of the key neuronal circuits essential for breathing, as well as constraints imposed by arrangements of associated neurons and/or other local structural features of the brainstem parenchyma. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.

  9. Developing a tissue-engineered neural-electrical relay using encapsulated neuronal constructs on conducting polymer fibers.

    PubMed

    Cullen, D Kacy; R Patel, Ankur; Doorish, John F; Smith, Douglas H; Pfister, Bryan J

    2008-12-01

    Neural-electrical interface platforms are being developed to extracellularly monitor neuronal population activity. Polyaniline-based electrically conducting polymer fibers are attractive substrates for sustained functional interfaces with neurons due to their flexibility, tailored geometry and controlled electro-conductive properties. In this study, we addressed the neurobiological considerations of utilizing small diameter (<400 microm) fibers consisting of a blend of electrically conductive polyaniline and polypropylene (PA-PP) as the backbone of encapsulated tissue-engineered neural-electrical relays. We devised new approaches to promote survival, adhesion and neurite outgrowth of primary dorsal root ganglion neurons on PA-PP fibers. We attained a greater than ten-fold increase in the density of viable neurons on fiber surfaces to approximately 700 neurons mm(-2) by manipulating surrounding surface charges to bias settling neuronal suspensions toward fibers coated with cell-adhesive ligands. This stark increase in neuronal density resulted in robust neuritic extension and network formation directly along the fibers. Additionally, we encapsulated these neuronal networks on PA-PP fibers using agarose to form a protective barrier while potentially facilitating network stability. Following encapsulation, the neuronal networks maintained integrity, high viability (>85%) and intimate adhesion to PA-PP fibers. These efforts accomplished key prerequisites for the establishment of functional electrical interfaces with neuronal populations using small diameter PA-PP fibers-specifically, improved neurocompatibility, high-density neuronal adhesion and neuritic network development directly on fiber surfaces.

  10. The roadmap for estimation of cell-type-specific neuronal activity from non-invasive measurements

    PubMed Central

    Uhlirova, Hana; Kılıç, Kıvılcım; Tian, Peifang; Sakadžić, Sava; Thunemann, Martin; Desjardins, Michèle; Saisan, Payam A.; Nizar, Krystal; Yaseen, Mohammad A.; Hagler, Donald J.; Vandenberghe, Matthieu; Djurovic, Srdjan; Andreassen, Ole A.; Silva, Gabriel A.; Masliah, Eliezer; Vinogradov, Sergei; Buxton, Richard B.; Einevoll, Gaute T.; Boas, David A.; Dale, Anders M.; Devor, Anna

    2016-01-01

    The computational properties of the human brain arise from an intricate interplay between billions of neurons connected in complex networks. However, our ability to study these networks in healthy human brain is limited by the necessity to use non-invasive technologies. This is in contrast to animal models where a rich, detailed view of cellular-level brain function with cell-type-specific molecular identity has become available due to recent advances in microscopic optical imaging and genetics. Thus, a central challenge facing neuroscience today is leveraging these mechanistic insights from animal studies to accurately draw physiological inferences from non-invasive signals in humans. On the essential path towards this goal is the development of a detailed ‘bottom-up’ forward model bridging neuronal activity at the level of cell-type-specific populations to non-invasive imaging signals. The general idea is that specific neuronal cell types have identifiable signatures in the way they drive changes in cerebral blood flow, cerebral metabolic rate of O2 (measurable with quantitative functional Magnetic Resonance Imaging), and electrical currents/potentials (measurable with magneto/electroencephalography). This forward model would then provide the ‘ground truth’ for the development of new tools for tackling the inverse problem—estimation of neuronal activity from multimodal non-invasive imaging data. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574309

  11. Asynchronous Rate Chaos in Spiking Neuronal Circuits

    PubMed Central

    Harish, Omri; Hansel, David

    2015-01-01

    The brain exhibits temporally complex patterns of activity with features similar to those of chaotic systems. Theoretical studies over the last twenty years have described various computational advantages for such regimes in neuronal systems. Nevertheless, it still remains unclear whether chaos requires specific cellular properties or network architectures, or whether it is a generic property of neuronal circuits. We investigate the dynamics of networks of excitatory-inhibitory (EI) spiking neurons with random sparse connectivity operating in the regime of balance of excitation and inhibition. Combining Dynamical Mean-Field Theory with numerical simulations, we show that chaotic, asynchronous firing rate fluctuations emerge generically for sufficiently strong synapses. Two different mechanisms can lead to these chaotic fluctuations. One mechanism relies on slow I-I inhibition which gives rise to slow subthreshold voltage and rate fluctuations. The decorrelation time of these fluctuations is proportional to the time constant of the inhibition. The second mechanism relies on the recurrent E-I-E feedback loop. It requires slow excitation but the inhibition can be fast. In the corresponding dynamical regime all neurons exhibit rate fluctuations on the time scale of the excitation. Another feature of this regime is that the population-averaged firing rate is substantially smaller in the excitatory population than in the inhibitory population. This is not necessarily the case in the I-I mechanism. Finally, we discuss the neurophysiological and computational significance of our results. PMID:26230679

  12. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    PubMed

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.

  13. Cultured neuronal networks as environmental biosensors.

    PubMed

    O'Shaughnessy, Thomas J; Gray, Samuel A; Pancrazio, Joseph J

    2004-01-01

    Contamination of water by toxins, either intentionally or unintentionally, is a growing concern for both military and civilian agencies and thus there is a need for systems capable of monitoring a wide range of natural and industrial toxicants. The EILATox-Oregon Workshop held in September 2002 provided an opportunity to test the capabilities of a prototype neuronal network-based biosensor with unknown contaminants in water samples. The biosensor is a portable device capable of recording the action potential activity from a network of mammalian neurons grown on glass microelectrode arrays. Changes in the action potential firing rate across the network are monitored to determine exposure to toxicants. A series of three neuronal networks derived from mice was used to test seven unknown samples. Two of these unknowns later were revealed to be blanks, to which the neuronal networks did not respond. Of the five remaining unknowns, a significant change in network activity was detected for four of the compounds at concentrations below a lethal level for humans: mercuric chloride, sodium arsenite, phosdrin and chlordimeform. These compounds--two heavy metals, an organophosphate and an insecticide--demonstrate the breadth of detection possible with neuronal networks. The results generated at the workshop show the promise of the neuronal network biosensor as an environmental detector but there is still considerable effort needed to produce a device suitable for routine environmental threat monitoring.

  14. Neurons from the adult human dentate nucleus: neural networks in the neuron classification.

    PubMed

    Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T

    2015-04-07

    We perform topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% and 14.1% of central and border neurons, respectively, are classified correctly, with a total of 32.8% of neurons misclassified. The most important result is the 62.2% of misclassified neurons in the border-neuron group, which is even greater than the number of correctly classified neurons (37.8%) in that group, showing an obvious failure of the network to classify these neurons correctly based on the computational parameters used in our study. On the virtual sample, 97.3% of neurons in the border-neuron group are misclassified, far more than the correctly classified neurons (2.7%) in that group, again confirming the failure of the network to classify these neurons correctly. Statistical analysis shows no statistically significant difference between central and border neurons for any measured parameter (p>0.05). In total, 96.74% of neurons are morphologically classified correctly by the neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, and (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05). Human dentate nucleus neurons can therefore be classified into four types according to their quantitative histomorphological properties. These types comprise two sets, small and large with respect to their perikarya, with subtypes differing in dendrite length, i.e. neurons with short vs. long dendrites. Besides confirming the classification into small and large neurons already reported in the literature, we found two new subtypes, i.e. neurons with small soma and long dendrites and neurons with large soma and short dendrites. These neurons are most probably distributed evenly throughout the dentate nucleus, as no significant difference in their topological distribution is observed. Copyright © 2015 Elsevier Ltd. All rights reserved.
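
    The morphological part of such a classification can be sketched with a small feed-forward network trained on morphometric features; the two features and the synthetic, well-separated sample below are placeholders for the measured histomorphological parameters.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    # Classify neurons into four morphological types from two morphometric
    # features; the clustered synthetic sample stands in for measured data.
    rng = np.random.default_rng(0)
    centers = [(15, 200), (15, 800), (40, 200), (40, 800)]   # (soma size, dendrite length), assumed scales
    X = np.vstack([rng.normal(c, (3, 60), size=(50, 2)) for c in centers])
    y = np.repeat(np.arange(4), 50)                          # the four histomorphological types

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0))
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```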

  15. Connectomic constraints on computation in feedforward networks of spiking neurons.

    PubMed

    Ramaswamy, Venkatakrishnan; Banerjee, Arunava

    2014-10-01

    Several efforts are currently underway to decipher the connectome or parts thereof in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is beyond the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints, which arise by virtue of the connectome, connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model which satisfies a small number of properties, we ask if just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network, before we can discern such results for other classes of networks.

  16. Clique of Functional Hubs Orchestrates Population Bursts in Developmentally Regulated Neural Networks

    PubMed Central

    Luccioli, Stefano; Ben-Jacob, Eshel; Barzilai, Ari; Bonifazi, Paolo; Torcini, Alessandro

    2014-01-01

    It has recently been discovered that single neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain in neuronal circuits, at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate population activity. PMID:25255443

  17. Computational Models and Emergent Properties of Respiratory Neural Networks

    PubMed Central

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  18. Revealing degree distribution of bursting neuron networks.

    PubMed

    Shen, Yu; Hou, Zhonghuai; Xin, Houwen

    2010-03-01

    We present a method to infer the degree distribution of a bursting neuron network from its dynamics. Burst synchronization (BS) of coupled Morris-Lecar neurons has been studied under the weak coupling condition. In the BS state, all the neurons start and end bursting almost simultaneously, while the spikes inside the burst are incoherent among the neurons. Interestingly, we find that the spike amplitude of a given neuron shows an excellent linear relationship with its degree, which makes it possible to estimate the degree distribution of the network by simple statistics of the spike amplitudes. We demonstrate the validity of this scheme on scale-free as well as small-world networks. The underlying mechanism of such a method is also briefly discussed.
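
    The inference step suggested by the linear amplitude-degree relationship can be sketched as follows; calibrating the line on a few neurons of known degree is one simple way to fix its slope and offset and is not necessarily the authors' procedure, and the synthetic amplitudes and the uniform degree distribution are assumptions.

    ```python
    import numpy as np

    # Estimate a degree distribution from spike amplitudes, assuming the linear
    # amplitude-degree relationship reported above. Amplitudes are synthetic.
    rng = np.random.default_rng(0)
    N = 500
    true_degree = rng.integers(4, 60, size=N)                        # unknown in a real experiment
    amplitude = 0.8 + 0.01 * true_degree + rng.normal(0, 0.01, N)    # linear relation plus noise

    calib = rng.choice(N, size=10, replace=False)                    # a few neurons of known degree
    slope, offset = np.polyfit(true_degree[calib], amplitude[calib], 1)
    estimated_degree = np.round((amplitude - offset) / slope).astype(int)

    bins = np.arange(0, 70, 5)
    print("true degree histogram:     ", np.histogram(true_degree, bins)[0])
    print("estimated degree histogram:", np.histogram(estimated_degree, bins)[0])
    ```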

  19. Recording axonal conduction to evaluate the integration of pluripotent cell-derived neurons into a neuronal network.

    PubMed

    Shimba, Kenta; Sakai, Koji; Takayama, Yuzo; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-10-01

    Stem cell transplantation is a promising therapy to treat neurodegenerative disorders, and a number of in vitro models have been developed for studying interactions between grafted neurons and the host neuronal network to promote drug discovery. However, methods capable of evaluating the process by which stem cells integrate into the host neuronal network are lacking. In this study, we applied an axonal conduction-based analysis to a co-culture study of primary and differentiated neurons. Mouse cortical neurons and neuronal cells differentiated from P19 embryonal carcinoma cells, a model for early neural differentiation of pluripotent stem cells, were co-cultured in a microfabricated device. The somata of these cells were separated by the co-culture device, but their axons were able to elongate through microtunnels and then form synaptic contacts. Propagating action potentials were recorded from these axons by microelectrodes embedded at the bottom of the microtunnels and sorted into clusters representing individual axons. While the number of axons of cortical neurons increased until 14 days in vitro and then decreased, those of P19 neurons increased throughout the culture period. Network burst analysis showed that P19 neurons participated in approximately 80% of the bursting activity after 14 days in vitro. Interestingly, the axonal conduction delay of P19 neurons was significantly greater than that of cortical neurons, suggesting that there are some physiological differences in their axons. These results suggest that our method is feasible to evaluate the process by which stem cell-derived neurons integrate into a host neuronal network.

  20. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    PubMed

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not depend on the state of network dynamics, and hold equally well for mean-driven and fluctuation-driven regimes of activity.

  1. Orientation Selectivity in Inhibition-Dominated Networks of Spiking Neurons: Effect of Single Neuron Properties and Network Dynamics

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not depend on the state of network dynamics, and hold equally well for mean-driven and fluctuation-driven regimes of activity. PMID:25569445
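
    The linear prediction at the heart of this theory, stationary rates obtained from (I - W)^{-1} h followed by rectification, can be sketched for a random inhibition-dominated network with weakly tuned feedforward input. Network size, weights, and input tuning below are assumptions rather than the paper's parameters; the sketch simply compares the orientation selectivity of the input with that of the predicted output rates.

    ```python
    import numpy as np

    # Linear rate prediction r = [(I - W)^{-1} h]_+ for a random
    # excitatory-inhibitory network with weakly tuned feedforward input.
    rng = np.random.default_rng(0)
    NE, NI = 400, 100
    N = NE + NI
    p, J, g = 0.1, 1.0, 6.0                                  # connection prob., weight scale, inhibition dominance
    W = (rng.random((N, N)) < p) * J / np.sqrt(N)
    W[:, NE:] *= -g                                          # columns of inhibitory neurons
    np.fill_diagonal(W, 0.0)

    theta = rng.uniform(0, np.pi, N)                         # preferred orientations (salt-and-pepper)
    stimuli = np.linspace(0, np.pi, 16, endpoint=False)

    def osi(tuning):
        """Selectivity from the second Fourier component of each tuning curve."""
        F2 = np.abs((tuning * np.exp(-2j * stimuli[:, None])).mean(axis=0))
        F0 = np.maximum(tuning.mean(axis=0), 1e-9)
        return F2 / F0

    h = np.array([1.0 + 0.1 * np.cos(2 * (theta - s)) for s in stimuli])              # weakly tuned input
    r = np.array([np.maximum(np.linalg.solve(np.eye(N) - W, hs), 0.0) for hs in h])   # rectified linear prediction

    print("mean input OSI: ", osi(h).mean())
    print("mean output OSI:", osi(r).mean())
    ```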

  2. Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.

    PubMed

    Poon, C S

    1991-01-01

    A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.

  3. SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    De Angelis, Francesco

    2017-06-01

    SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas Michele Dipalo, Valeria Caprettini, Anbrea Barbaglia, Laura Lovato, Francesco De Angelis e-mail: francesco.deangelis@iit.it Istituto Italiano di Tecnologia, Via Morego 30, 16163, Genova Biological systems are analysed mainly by optical, chemical or electrical methods. Normally each of these techniques provides only partial information about the environment, while combined investigations could reveal new phenomena occurring in complex systems such as in-vitro neuronal networks. Aiming at the merging of optical and electrical investigations of biological samples, we introduced three-dimensional plasmonic nanoantennas on CMOS-based electrical sensors [1]. The overall device is then capable of enhanced Raman Analysis of cultured cells combined with electrical recording of neuronal activity. The Raman measurements show a much higher sensitivity when performed on the tip of the nanoantenna in respect to the flat substrate [2]; this effect is a combination of the high plasmonic field enhancement and of the tight adhesion of cells on the nanoantenna tip. Furthermore, when plasmonic opto-poration is exploited [3] the 3D nanoelectrodes are able to penetrate through the cell membrane thus accessing the intracellular environment. Our latest results (unpublished) show that the technique is completely non-invasive and solves many problems related to state-of-the-art intracellular recording approaches on large neuronal networks. This research received funding from ERC-IDEAS Program: "Neuro-Plasmonics" [Grant n. 616213]. References: [1] M. Dipalo, G. C. Messina, H. Amin, R. La Rocca, V. Shalabaeva, A. Simi, A. Maccione, P. Zilio, L. Berdondini, F. De Angelis, Nanoscale 2015, 7, 3703. [2] R. La Rocca, G. C. Messina, M. Dipalo, V. Shalabaeva, F. De Angelis, Small 2015, 11, 4632. [3] G. C. Messina et al., Spatially, Temporally, and Quantitatively Controlled Delivery of Broad Range of Molecules into Selected Cells through Plasmonic Nanotubes. Advanced Materials 2015.

  4. A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback

    PubMed Central

    Maass, Wolfgang

    2008-01-01

    Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic for cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics. PMID:18846203

  5. On the molecular basis of the receptor mosaic hypothesis of the engram.

    PubMed

    Agnati, Luigi F; Ferré, Sergi; Leo, Giuseppina; Lluis, Carme; Canela, Enric I; Franco, Rafael; Fuxe, Kjell

    2004-08-01

    1. This paper revisits the so-called "receptor mosaic hypothesis" for memory trace formation in the light of recent findings in "functional (or interaction) proteomics." The receptor mosaic hypothesis maintains that receptors may form molecular aggregates at the plasma membrane level representing part of the computational molecular networks. 2. Specific interactions between receptors occur as a consequence of the pattern of transmitter release from the source neurons, which release the chemical code impinging on the receptor mosaics of the target neuron. Thus, the decoding of the chemical message depends on the receptors forming the receptor mosaics and on the type of interactions among receptors and other proteins in the molecular network with novel long-term mosaics formed by their stabilization via adapter proteins formed in target neurons through the incoming neurotransmitter code. The internalized receptor heteromeric complexes or parts of them may act as transcription factors for the formation of such adapter proteins. 3. Receptor mosaics are formed both at the pre- and postsynaptic level of the plasma membranes and this phenomenon can play a role in the Hebbian behavior of some synaptic contacts. The appropriate "matching" of the pre- with the postsynaptic receptor mosaic can be thought of as the "clamping of the synapse to the external teaching signal." According to our hypothesis the behavior of the molecular networks at plasma membrane level to which the receptor mosaics belong can be set in a "frozen" conformation (i.e. in a frozen functional state) and this may represent a mechanism to maintain constant the input to a neuron. 4. Thus, we are suggesting that molecular networks at plasma membrane level may display multiple "attractors" each of which stores the memory of a specific neurotransmitter code due to a unique firing pattern. Hence, this mechanism may play a role in learning processes where the input to a neuron is likely to remain constant for a while.

  6. Widespread receptivity to neuropeptide PDF throughout the neuronal circadian clock network of Drosophila revealed by real-time cyclic AMP imaging.

    PubMed

    Shafer, Orie T; Kim, Dong Jo; Dunbar-Yaffe, Richard; Nikolaev, Viacheslav O; Lohse, Martin J; Taghert, Paul H

    2008-04-24

    The neuropeptide PDF is released by sixteen clock neurons in Drosophila and helps maintain circadian activity rhythms by coordinating a network of approximately 150 neuronal clocks. Whether PDF acts directly on elements of this neural network remains unknown. We address this question by adapting Epac1-camps, a genetically encoded cAMP FRET sensor, for use in the living brain. We find that a subset of the PDF-expressing neurons respond to PDF with long-lasting cAMP increases and confirm that such responses require the PDF receptor. In contrast, an unrelated Drosophila neuropeptide, DH31, stimulates large cAMP increases in all PDF-expressing clock neurons. Thus, the network of approximately 150 clock neurons displays widespread, though not uniform, PDF receptivity. This work introduces a sensitive means of measuring cAMP changes in a living brain with subcellular resolution. Specifically, it experimentally confirms the longstanding hypothesis that PDF is a direct modulator of most neurons in the Drosophila clock network.

  7. Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris

    2015-11-01

    Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. Particularly, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations to exhibit transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and the coupling connections between neurons in determining the firing patterns and the dynamical activities of the basal ganglia neuronal network, we propose a hypothesis that pathological behaviors under the Parkinsonian state may originate from combined effects of intrinsic properties of globus pallidus neurons and synaptic conductances in the whole neuronal network. In order to establish a computationally efficient network model, the hybrid Izhikevich neuron model is used because of its capacity to capture the dynamical characteristics of biological neuronal activity. Detailed analysis of the individual Izhikevich neuron model can assist in understanding the roles of model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model, and contributes to a further exploration of the underlying mechanisms of the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations of the firing rates and emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuronal model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore basal ganglia normal and abnormal functions. In particular, it provides an efficient way of emulating large-scale neuronal networks and may contribute to the development of improved therapies for neurological disorders such as Parkinson's disease.
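
    For reference, the standard two-variable Izhikevich model that such networks build on can be simulated in a few lines. The regular-spiking parameters and step current below are illustrative only; the paper's hybrid variant and its basal ganglia-thalamic coupling are not reproduced here.

```python
import numpy as np

# Reference sketch of the standard two-variable Izhikevich neuron
# (v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u), with reset at spike).

a, b, c, d = 0.02, 0.2, -65.0, 8.0       # regular-spiking parameter set
dt, t_max = 0.25, 1000.0                  # ms
n_steps = int(t_max / dt)

v = -65.0
u = b * v
spike_times = []

for step in range(n_steps):
    t = step * dt
    I = 10.0 if t > 100.0 else 0.0        # step input current after 100 ms
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                         # spike: reset membrane, bump recovery
        spike_times.append(t)
        v, u = c, u + d

rate = len(spike_times) / ((t_max - 100.0) / 1000.0)
print(f"{len(spike_times)} spikes, ~{rate:.1f} Hz after stimulus onset")
```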

  8. Advancing interconnect density for spiking neural network hardware implementations using traffic-aware adaptive network-on-chip routers.

    PubMed

    Carrillo, Snaider; Harkin, Jim; McDaid, Liam; Pande, Sandeep; Cawley, Seamus; McGinley, Brian; Morgan, Fearghal

    2012-09-01

    The brain is highly efficient in how it processes information and tolerates faults. Arguably, the basic processing units are neurons and synapses that are interconnected in a complex pattern. Computer scientists and engineers aim to harness this efficiency and build artificial neural systems that can emulate the key information processing principles of the brain. However, existing approaches cannot provide the dense interconnect for the billions of neurons and synapses that are required. Recently a reconfigurable and biologically inspired paradigm based on network-on-chip (NoC) and spiking neural networks (SNNs) has been proposed as a new method of realising an efficient, robust computing platform. However, the use of the NoC as an interconnection fabric for large-scale SNNs demands a good trade-off between scalability, throughput, neuron/synapse ratio and power consumption. This paper presents a novel traffic-aware, adaptive NoC router, which forms part of a proposed embedded mixed-signal SNN architecture called EMBRACE (EMulating Biologically-inspiRed ArChitectures in hardwarE). The proposed adaptive NoC router provides the inter-neuron connectivity for EMBRACE, maintaining router communication and avoiding dropped router packets by adapting to router traffic congestion. Results are presented on throughput, power and area performance analysis of the adaptive router using a 90 nm CMOS technology which outperforms existing NoCs in this domain. The adaptive behaviour of the router is also verified on a Stratix II FPGA implementation of a 4 × 2 router array with real-time traffic congestion. The presented results demonstrate the feasibility of using the proposed adaptive NoC router within the EMBRACE architecture to realise large-scale SNNs on embedded hardware. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. A neuronal network model for context-dependence of pitch change perception.

    PubMed

    Huang, Chengcheng; Englitz, Bernhard; Shamma, Shihab; Rinzel, John

    2015-01-01

    Many natural stimuli have perceptual ambiguities that can be cognitively resolved by the surrounding context. In audition, preceding context can bias the perception of speech and non-speech stimuli. Here, we develop a neuronal network model that can account for how context affects the perception of pitch change between a pair of successive complex tones. We focus especially on an ambiguous comparison: listeners experience opposite percepts (either ascending or descending) for an ambiguous tone pair depending on the spectral location of preceding context tones. We developed a recurrent, firing-rate network model, which detects the frequency-change direction of successively played stimuli and successfully accounts for the context-dependent perception demonstrated in behavioral experiments. The model consists of two tonotopically organized, excitatory populations, E_up and E_down, that respond preferentially to ascending or descending stimuli in pitch, respectively. These preferences are generated by an inhibitory population that provides inhibition asymmetric in frequency to the two populations; context dependence arises from slow facilitation of inhibition. We show that contextual influence depends on the spectral distribution of preceding tones and the tuning width of inhibitory neurons. Further, we demonstrate, using phase-space analysis, how the facilitated inhibition from previous stimuli and the waning inhibition from the just-preceding tone shape the competition between the E_up and E_down populations. In sum, our model accounts for contextual influences on the pitch change perception of an ambiguous tone pair by introducing a novel decoding strategy based on direction-selective units. The model's network architecture and slow facilitating inhibition emerge as predictions of neuronal mechanisms for these perceptual dynamics. Since the model structure does not depend on the specific stimuli, we show that it generalizes to other contextual effects and stimulus types.

  10. Structure-function analysis of genetically defined neuronal populations.

    PubMed

    Groh, Alexander; Krieger, Patrik

    2013-10-01

    Morphological and functional classification of individual neurons is a crucial aspect of the characterization of neuronal networks. Systematic structural and functional analysis of individual neurons is now possible using transgenic mice with genetically defined neurons that can be visualized in vivo or in brain slice preparations. Genetically defined neurons are useful for studying a particular class of neurons and also for more comprehensive studies of the neuronal content of a network. Specific subsets of neurons can be identified by fluorescence imaging of enhanced green fluorescent protein (eGFP) or another fluorophore expressed under the control of a cell-type-specific promoter. The advantages of such genetically defined neurons are not only their homogeneity and suitability for systematic descriptions of networks, but also their tremendous potential for cell-type-specific manipulation of neuronal networks in vivo. This article describes a selection of procedures for visualizing and studying the anatomy and physiology of genetically defined neurons in transgenic mice. We provide information about basic equipment, reagents, procedures, and analytical approaches for obtaining three-dimensional (3D) cell morphologies and determining the axonal input and output of genetically defined neurons. We exemplify with genetically labeled cortical neurons, but the procedures are applicable to other brain regions with little or no alterations.

  11. Phase transitions in Pareto optimal complex networks

    NASA Astrophysics Data System (ADS)

    Seoane, Luís F.; Solé, Ricard

    2015-09-01

    The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need of drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
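
    A minimal sketch of the Pareto idea referenced above: given a population of candidate designs scored on two competing costs, keep only the non-dominated set. The costs here are random placeholders rather than actual wiring-cost or path-length measurements.

```python
import numpy as np

# Toy sketch: extract the Pareto-optimal (non-dominated) set from a population
# of candidate network designs scored on two competing costs, e.g. wiring cost
# vs. average path length.  Both costs are random placeholders here.

rng = np.random.default_rng(0)
costs = rng.random((200, 2))            # rows: candidates, cols: (cost1, cost2)

def pareto_front(points):
    """Indices of points not dominated by any other point (minimization)."""
    keep = []
    for i, p in enumerate(points):
        # point j dominates p if it is <= in every cost and < in at least one
        dominates = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if not dominates.any():
            keep.append(i)
    return np.array(keep)

front = pareto_front(costs)
print(f"{front.size} Pareto-optimal designs out of {len(costs)} candidates")
```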

  12. Developmental time windows for axon growth influence neuronal network topology.

    PubMed

    Lim, Sol; Kaiser, Marcus

    2015-04-01

    Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on types of neurons and cortical layers. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately one neuron after another (serial, i.e., no overlaps in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: Neurons that started axon growth early on in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths while neurons demonstrated more homogeneous connectivity patterns for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks opening up the possibility to a posteriori estimate developmental mechanisms based on network properties of a developed network.

  13. Single-cell transcriptional analysis of taste sensory neuron pair in Caenorhabditis elegans.

    PubMed

    Takayama, Jun; Faumont, Serge; Kunitomo, Hirofumi; Lockery, Shawn R; Iino, Yuichi

    2010-01-01

    The nervous system is composed of a wide variety of neurons. A description of the transcriptional profiles of each neuron would yield enormous information about the molecular mechanisms that define morphological or functional characteristics. Here we show that RNA isolation from single neurons is feasible by using an optimized mRNA tagging method. This method extracts transcripts in the target cells by co-immunoprecipitation of the complexes of RNA and epitope-tagged poly(A) binding protein expressed specifically in the cells. With this method and genome-wide microarray, we compared the transcriptional profiles of two functionally different neurons in the main C. elegans gustatory neuron class ASE. Eight of the 13 known subtype-specific genes were successfully detected. Additionally, we identified nine novel genes including a receptor guanylyl cyclase, secreted proteins, a TRPC channel and uncharacterized genes conserved among nematodes, suggesting the two neurons are substantially different than previously thought. The expression of these novel genes was controlled by the previously known regulatory network for subtype differentiation. We also describe unique motif organization within individual gene groups classified by the expression patterns in ASE. Our study paves the way to the complete catalog of the expression profiles of individual C. elegans neurons.

  14. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.

    PubMed

    Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting

    2018-02-12

    Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for the neuron morphological data, where the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with the hand-crafted features for more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on the techniques of augmented reality (AR), which helps users explore neuron morphologies in an interactive and immersive manner.
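
    The retrieval step can be illustrated with a generic binary-coding scheme. The sketch below uses sign random projections (a standard locality-sensitive hashing trick) and Hamming-distance ranking as a stand-in; it is not the specific deep feature or binary coding method developed in the paper.

```python
import numpy as np

# Generic sketch: compress feature vectors into short binary codes with sign
# random projections and rank database items by Hamming distance.

rng = np.random.default_rng(1)
n_neurons, n_features, n_bits = 1000, 256, 64

features = rng.normal(size=(n_neurons, n_features))     # e.g. deep + hand-crafted
projection = rng.normal(size=(n_features, n_bits))       # random hyperplanes
codes = (features @ projection > 0).astype(np.uint8)     # 64-bit binary codes

def hamming_rank(query_code, database_codes, k=5):
    """Return indices of the k database codes closest in Hamming distance."""
    distances = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(distances)[:k]

query = codes[0]
print("nearest neighbours of neuron 0:", hamming_rank(query, codes))
```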

  15. Spiking Neurons for Analysis of Patterns

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological neurons). These features enable the neurons to adapt their responses to high-rate inputs from sensors, and to adapt their firing thresholds to mitigate noise or effects of potential sensor failure. The mathematical derivation of the SVM starts from a prior model, known in the art as the point soma model, which captures all of the salient properties of neuronal response while keeping the computational cost low. The point-soma latency time is modified to be an exponentially decaying function of the strength of the applied potential. Choosing computational efficiency over biological fidelity, the dendrites surrounding a neuron are represented by simplified compartmental submodels and there are no dendritic spines. Updates to the dendritic potential, calcium-ion concentrations and conductances, and potassium-ion conductances are done by use of equations similar to those of the point soma. Diffusion processes in dendrites are modeled by averaging among nearest-neighbor compartments. Inputs to each of the dendritic compartments come from sensors. Alternatively or in addition, when an affected neuron is part of a pool, inputs can come from other spiking neurons. At present, SVM neural networks are implemented by computational simulation, using algorithms that encode the SVM and its submodels. 
However, it should be possible to implement these neural networks in hardware: The differential equations for the dendritic and cellular processes in the SVM model of spiking neurons map to equivalent circuits that can be implemented directly in analog very-large-scale integrated (VLSI) circuits.

  16. Reliability and synchronization in a delay-coupled neuronal network with synaptic plasticity

    NASA Astrophysics Data System (ADS)

    Pérez, Toni; Uchida, Atsushi

    2011-06-01

    We investigate the characteristics of reliability and synchronization of a neuronal network of delay-coupled integrate-and-fire neurons. Reliability and synchronization appear in separate regions of the parameter space considered. The effects of including synaptic plasticity and of different delay values between the connections are also considered. We found that plasticity strongly changes the characteristics of reliability and synchronization in the parameter space of the coupling strength and the drive amplitude for the neuronal network. We also found that delay does not affect the reliability of the network but has a decisive influence on the synchronization of the neurons.

  17. Finding influential nodes for integration in brain networks using optimal percolation theory.

    PubMed

    Del Ferraro, Gino; Moreno, Andrea; Min, Byungjoon; Morone, Flaviano; Pérez-Ramírez, Úrsula; Pérez-Cervera, Laura; Parra, Lucas C; Holodny, Andrei; Canals, Santiago; Makse, Hernán A

    2018-06-11

    Global integration of information in the brain results from complex interactions of segregated brain networks. Identifying the most influential neuronal populations that efficiently bind these networks is a fundamental problem of systems neuroscience. Here, we apply optimal percolation theory and pharmacogenetic interventions in vivo to predict and subsequently target nodes that are essential for global integration of a memory network in rodents. The theory predicts that integration in the memory network is mediated by a set of low-degree nodes located in the nucleus accumbens. This result is confirmed with pharmacogenetic inactivation of the nucleus accumbens, which eliminates the formation of the memory network, while inactivations of other brain areas leave the network intact. Thus, optimal percolation theory predicts essential nodes in brain networks. This could be used to identify targets of interventions to modulate brain function.

  18. Effects of channel noise on firing coherence of small-world Hodgkin-Huxley neuronal networks

    NASA Astrophysics Data System (ADS)

    Sun, X. J.; Lei, J. Z.; Perc, M.; Lu, Q. S.; Lv, S. J.

    2011-01-01

    We investigate the effects of channel noise on firing coherence of Watts-Strogatz small-world networks consisting of biophysically realistic HH neurons having a fraction of blocked voltage-gated sodium and potassium ion channels embedded in their neuronal membranes. The intensity of channel noise is determined by the number of non-blocked ion channels, which depends on the fraction of working ion channels and the membrane patch size with the assumption of homogeneous ion channel density. We find that firing coherence of the neuronal network can be either enhanced or reduced depending on the source of channel noise. As shown in this paper, sodium channel noise reduces firing coherence of neuronal networks; in contrast, potassium channel noise enhances it. Furthermore, compared with potassium channel noise, sodium channel noise plays a dominant role in affecting firing coherence of the neuronal network. Moreover, we declare that the observed phenomena are independent of the rewiring probability.
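
    The dependence of channel-noise intensity on patch size and the fraction of working channels can be made concrete with a back-of-the-envelope calculation: the number of non-blocked channels scales with membrane area and the working fraction, and the relative fluctuation scales roughly as 1/sqrt(N). The channel densities below are the values commonly assumed for HH membranes; the paper's full stochastic simulation is not reproduced.

```python
import numpy as np

# Back-of-the-envelope sketch: number of non-blocked channels and the
# resulting relative channel-noise amplitude (~1/sqrt(N)), assuming a
# homogeneous density of 60 Na+ and 18 K+ channels per um^2.

rho_na, rho_k = 60.0, 18.0          # channels per um^2

def channel_numbers(patch_area_um2, frac_working_na=1.0, frac_working_k=1.0):
    n_na = rho_na * patch_area_um2 * frac_working_na
    n_k = rho_k * patch_area_um2 * frac_working_k
    return n_na, n_k

for area in (1.0, 10.0, 100.0):
    n_na, n_k = channel_numbers(area, frac_working_na=0.5, frac_working_k=0.5)
    print(f"patch {area:6.1f} um^2: N_Na={n_na:7.0f} (noise ~{1/np.sqrt(n_na):.3f}), "
          f"N_K={n_k:7.0f} (noise ~{1/np.sqrt(n_k):.3f})")
```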

  19. Study on algorithm of process neural network for soft sensing in sewage disposal system

    NASA Astrophysics Data System (ADS)

    Liu, Zaiwen; Xue, Hong; Wang, Xiaoyi; Yang, Bin; Lu, Siying

    2006-11-01

    A new soft-sensing method based on a process neural network (PNN) for a sewage disposal system is presented in this paper. The PNN is an extension of the traditional neural network in which the inputs and outputs are time-varying. An aggregation operator is introduced into the process neuron, giving the network the ability to handle information in both space and time simultaneously, so that the data-processing machinery of biological neurons is imitated more closely than by traditional neurons. A three-layer process neural network for soft sensing, in which the hidden layer consists of process neurons and the input and output layers consist of common neurons, is discussed. Intelligent soft sensing based on the PNN can be used to measure the effluent BOD (Biochemical Oxygen Demand) of a sewage disposal system, and good training results were obtained with this method.
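
    A single process neuron of the kind described above can be sketched as a temporal aggregation (an integral of a time-varying weight times the time-varying input over the sampling window) followed by a sigmoid. The weight function, window length, and discretization below are illustrative choices, not the fitted soft-sensing model.

```python
import numpy as np

# Minimal sketch of one process neuron: aggregate w(t)*x(t) over the sampling
# window with a temporal integral, then apply a sigmoid activation.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def process_neuron(x_t, w_t, dt, bias=0.0):
    """y = sigmoid( integral over the window of w(t) * x(t) dt + bias )."""
    return sigmoid(np.sum(w_t * x_t) * dt + bias)

dt = 0.1
t = np.arange(0.0, 10.0, dt)                              # one sampling window
rng = np.random.default_rng(2)
x_t = np.sin(0.5 * t) + 0.1 * rng.normal(size=t.size)     # time-varying input
w_t = np.exp(-0.3 * t)                                     # time-varying weight

print("process-neuron output:", process_neuron(x_t, w_t, dt))
```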

  20. APC/CCdh1-Rock2 pathway controls dendritic integrity and memory

    PubMed Central

    Bobo-Jiménez, Verónica; Delgado-Esteban, María; Angibaud, Julie; Sánchez-Morán, Irene; de la Fuente, Antonio; Yajeya, Javier; Nägerl, U. Valentin; Castillo, José; Bolaños, Juan P.

    2017-01-01

    Disruption of neuronal morphology contributes to the pathology of neurodegenerative disorders such as Alzheimer’s disease (AD). However, the underlying molecular mechanisms are unknown. Here, we show that postnatal deletion of Cdh1, a cofactor of the anaphase-promoting complex/cyclosome (APC/C) ubiquitin ligase in neurons [Cdh1 conditional knockout (cKO)], disrupts dendrite arborization and causes dendritic spine and synapse loss in the cortex and hippocampus, concomitant with memory impairment and neurodegeneration, in adult mice. We found that the dendrite destabilizer Rho protein kinase 2 (Rock2), which accumulates in the brain of AD patients, is an APC/CCdh1 substrate in vivo and that Rock2 protein and activity increased in the cortex and hippocampus of Cdh1 cKO mice. In these animals, inhibition of Rock activity, using the clinically approved drug fasudil, prevented dendritic network disorganization, memory loss, and neurodegeneration. Thus, APC/CCdh1-mediated degradation of Rock2 maintains the dendritic network, memory formation, and neuronal survival, suggesting that pharmacological inhibition of aberrantly accumulated Rock2 may be a suitable therapeutic strategy against neurodegeneration. PMID:28396402

  1. Emergent patterns in interacting neuronal sub-populations

    NASA Astrophysics Data System (ADS)

    Kamal, Neeraj Kumar; Sinha, Sudeshna

    2015-05-01

    We investigate an ensemble of coupled model neurons, consisting of groups of varying sizes and intrinsic dynamics, ranging from periodic to chaotic, where the inter-group coupling interaction is effectively like a dynamic signal from a different sub-population. We observe that the minority group can significantly influence the majority group. For instance, when a small chaotic group is coupled to a large periodic group, the chaotic group de-synchronizes. However, counter-intuitively, when a small periodic group couples strongly to a large chaotic group, it leads to complete synchronization in the majority chaotic population, which also spikes at the frequency of the small periodic group. It then appears that the small group of periodic neurons can act like a pacemaker for the whole network. Further, we report the existence of varied clustering patterns, ranging from sets of synchronized clusters to anti-phase clusters, governed by the interplay of the relative sizes and dynamics of the sub-populations. So these results have relevance in understanding how a group can influence the synchrony of another group of dynamically different elements, reminiscent of event-related synchronization/de-synchronization in complex networks.

  2. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels

    PubMed Central

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378

  3. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    PubMed

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  4. A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.

    PubMed

    Hightower, M; Gross, G W

    1985-11-01

    Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.

  5. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons

    PubMed Central

    Setareh, Hesam; Deger, Moritz; Petersen, Carl C. H.; Gerstner, Wulfram

    2017-01-01

    Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both wiring and synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons which we call weight-hub neurons, that are characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contain assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as it likely is present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly. PMID:28690508

  6. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network

    PubMed Central

    Malvaut, Sarah; Saghatelyan, Armen

    2016-01-01

    The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or synapse formation and elimination. In a few regions, however, plasticity is brought by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB) where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour. PMID:26839709

  7. The frequency preference of neurons and synapses in a recurrent oscillatory network.

    PubMed

    Tseng, Hua-an; Martinez, Diana; Nadim, Farzan

    2014-09-17

    A variety of neurons and synapses shows a maximal response at a preferred frequency, generally considered to be important in shaping network activity. We are interested in whether all neurons and synapses in a recurrent oscillatory network can have preferred frequencies and, if so, whether these frequencies are the same or correlated, and whether they influence the network activity. We address this question using identified neurons in the pyloric network of the crab Cancer borealis. Previous work has shown that the pyloric pacemaker neurons exhibit membrane potential resonance whose resonance frequency is correlated with the network frequency. The follower lateral pyloric (LP) neuron makes reciprocally inhibitory synapses with the pacemakers. We find that LP shows resonance at a higher frequency than the pacemakers and the network frequency falls between the two. We also find that the reciprocal synapses between the pacemakers and LP have preferred frequencies but at significantly lower values. The preferred frequency of the LP to pacemaker synapse is correlated with the presynaptic preferred frequency, which is most pronounced when the peak voltage of the LP waveform is within the dynamic range of the synaptic activation curve and a shift in the activation curve by the modulatory neuropeptide proctolin shifts the frequency preference. Proctolin also changes the power of the LP neuron resonance without significantly changing the resonance frequency. These results indicate that different neuron types and synapses in a network may have distinct preferred frequencies, which are subject to neuromodulation and may interact to shape network oscillations. Copyright © 2014 the authors 0270-6474/14/3412933-13$15.00/0.
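
    Membrane-potential resonance of the kind measured here is commonly caricatured by a linear "RLC-like" membrane: a leak and capacitance in parallel with a slow, phenomenologically inductive branch. The sketch below computes the impedance profile of such a circuit and reads off the preferred frequency; the component values are arbitrary and are not fits to pyloric neurons or synapses.

```python
import numpy as np

# Impedance profile of a linear resonant membrane model; the preferred
# frequency is where |Z(f)| peaks.  All component values are illustrative.

C = 1e-9              # membrane capacitance, F
R = 100e6             # leak resistance, Ohm
r_slow = 50e6         # resistance of the slow branch, Ohm
L_slow = 1e7          # phenomenological "inductance" of the slow branch, H

freqs = np.linspace(0.1, 20.0, 2000)                 # Hz
omega = 2.0 * np.pi * freqs
Z = 1.0 / (1.0 / R + 1j * omega * C + 1.0 / (r_slow + 1j * omega * L_slow))

f_res = freqs[np.argmax(np.abs(Z))]
print(f"preferred (resonance) frequency ~{f_res:.1f} Hz, "
      f"peak impedance ~{np.abs(Z).max() / 1e6:.0f} MOhm")
```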

  8. Complexity and multifractality of neuronal noise in mouse and human hippocampal epileptiform dynamics.

    PubMed

    Serletis, Demitre; Bardakjian, Berj L; Valiante, Taufik A; Carlen, Peter L

    2012-10-01

    Fractal methods offer an invaluable means of investigating turbulent nonlinearity in non-stationary biomedical recordings from the brain. Here, we investigate properties of complexity (i.e. the correlation dimension, maximum Lyapunov exponent, 1/f(γ) noise and approximate entropy) and multifractality in background neuronal noise-like activity underlying epileptiform transitions recorded at the intracellular and local network scales from two in vitro models: the whole-intact mouse hippocampus and lesional human hippocampal slices. Our results show evidence for reduced dynamical complexity and multifractal signal features following transition to the ictal epileptiform state. These findings suggest that pathological breakdown in multifractal complexity coincides with loss of signal variability or heterogeneity, consistent with an unhealthy ictal state that is far from the equilibrium of turbulent yet healthy fractal dynamics in the brain. Thus, it appears that background noise-like activity successfully captures complex and multifractal signal features that may, at least in part, be used to classify and identify brain state transitions in the healthy and epileptic brain, offering potential promise for therapeutic neuromodulatory strategies for afflicted patients suffering from epilepsy and other related neurological disorders.
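
    Of the complexity measures listed above, approximate entropy is the most compact to state. The sketch below is a plain reference implementation with the common defaults (m = 2, r = 0.2 x SD), not the authors' analysis pipeline; a periodic signal scores much lower than white noise.

```python
import numpy as np

# Reference implementation of approximate entropy (ApEn) for a 1-D signal.
# ApEn = phi(m) - phi(m+1), where phi(m) averages log of the fraction of
# length-m templates within tolerance r of each template (self-match included).

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        counts = np.empty(n)
        for i in range(n):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            counts[i] = np.mean(dist <= r)
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
noisy = rng.normal(size=1000)
print("ApEn(sine)  =", round(approximate_entropy(regular), 3))
print("ApEn(noise) =", round(approximate_entropy(noisy), 3))
```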

  9. Encoding of Olfactory Information with Oscillating Neural Assemblies

    NASA Astrophysics Data System (ADS)

    Laurent, Gilles; Davidowitz, Hananel

    1994-09-01

    In the brain, fast oscillations of local field potentials, which are thought to arise from the coherent and rhythmic activity of large numbers of neurons, were observed first in the olfactory system and have since been described in many neocortical areas. The importance of these oscillations in information coding, however, is controversial. Here, local field potential and intracellular recordings were obtained from the antennal lobe and mushroom body of the locust Schistocerca americana. Different odors evoked coherent oscillations in different, but usually overlapping, ensembles of neurons. The phase of firing of individual neurons relative to the population was not dependent on the odor. The components of a coherently oscillating ensemble of neurons changed over the duration of a single exposure to an odor. It is thus proposed that odors are encoded by specific but dynamic assemblies of coherently oscillating neurons. Such distributed and temporal representation of complex sensory signals may facilitate combinatorial coding and associative learning in these, and possibly other, sensory networks.

  10. Functional cortical neurons and astrocytes from human pluripotent stem cells in 3D culture.

    PubMed

    Paşca, Anca M; Sloan, Steven A; Clarke, Laura E; Tian, Yuan; Makinson, Christopher D; Huber, Nina; Kim, Chul Hoon; Park, Jin-Young; O'Rourke, Nancy A; Nguyen, Khoa D; Smith, Stephen J; Huguenard, John R; Geschwind, Daniel H; Barres, Ben A; Paşca, Sergiu P

    2015-07-01

    The human cerebral cortex develops through an elaborate succession of cellular events that, when disrupted, can lead to neuropsychiatric disease. The ability to reprogram somatic cells into pluripotent cells that can be differentiated in vitro provides a unique opportunity to study normal and abnormal corticogenesis. Here, we present a simple and reproducible 3D culture approach for generating a laminated cerebral cortex-like structure, named human cortical spheroids (hCSs), from pluripotent stem cells. hCSs contain neurons from both deep and superficial cortical layers and map transcriptionally to in vivo fetal development. These neurons are electrophysiologically mature, display spontaneous activity, are surrounded by nonreactive astrocytes and form functional synapses. Experiments in acute hCS slices demonstrate that cortical neurons participate in network activity and produce complex synaptic events. These 3D cultures should allow a detailed interrogation of human cortical development, function and disease, and may prove a versatile platform for generating other neuronal and glial subtypes in vitro.

  11. Fitting Neuron Models to Spike Trains

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
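
    The fitting idea can be illustrated end to end with a toy experiment: generate a "recorded" spike train from a leaky integrate-and-fire neuron driven by a frozen noisy current, then recover the membrane time constant and input resistance by brute-force search maximizing coincident spikes. The parameter grid and coincidence window are arbitrary; real toolchains such as the one built on the Brian simulator use far more efficient optimization.

```python
import numpy as np

# Toy sketch: fit LIF parameters to a target spike train by grid search over
# (tau, R), scoring candidates by the number of spikes within +/- 2 ms of a
# target spike.  The target itself is simulated, so the script is self-contained.

dt, t_max = 0.1e-3, 2.0                       # s
t = np.arange(0.0, t_max, dt)
rng = np.random.default_rng(4)
I = 1.5e-9 + 0.5e-9 * rng.standard_normal(t.size)    # frozen input current, A

def lif_spikes(tau, R, v_th=-50e-3, v_reset=-70e-3, E_L=-70e-3):
    v, spikes = E_L, []
    for k, ti in enumerate(t):
        v += dt / tau * (E_L - v + R * I[k])
        if v >= v_th:
            spikes.append(ti)
            v = v_reset
    return np.array(spikes)

target = lif_spikes(tau=20e-3, R=20e6)        # the "recorded" spike train

def coincidences(candidate, reference, window=2e-3):
    return sum(np.any(np.abs(reference - s) <= window) for s in candidate)

best = max((coincidences(lif_spikes(tau, R), target), tau, R)
           for tau in (10e-3, 20e-3, 30e-3)
           for R in (10e6, 20e6, 30e6))
print("best (coincident spikes, tau, R):", best)
```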

  12. On the Dynamics of the Spontaneous Activity in Neuronal Networks

    PubMed Central

    Bonifazi, Paolo; Ruaro, Maria Elisabetta; Torre, Vincent

    2007-01-01

    Most neuronal networks, even in the absence of external stimuli, produce spontaneous bursts of spikes separated by periods of reduced activity. The origin and functional role of these neuronal events are still unclear. The present work shows that the spontaneous activity of two very different networks, intact leech ganglia and dissociated cultures of rat hippocampal neurons, share several features. Indeed, in both networks: i) the inter-spike intervals distribution of the spontaneous firing of single neurons is either regular or periodic or bursting, with the fraction of bursting neurons depending on the network activity; ii) bursts of spontaneous spikes have the same broad distributions of size and duration; iii) the degree of correlated activity increases with the bin width, and the power spectrum of the network firing rate has a 1/f behavior at low frequencies, indicating the existence of long-range temporal correlations; iv) the activity of excitatory synaptic pathways mediated by NMDA receptors is necessary for the onset of the long-range correlations and for the presence of large bursts; v) blockage of inhibitory synaptic pathways mediated by GABAA receptors causes instead an increase in the correlation among neurons and leads to a burst distribution composed only of very small and very large bursts. These results suggest that the spontaneous electrical activity in neuronal networks with different architectures and functions can have very similar properties and common dynamics. PMID:17502919
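
    The 1/f check mentioned in point (iii) can be reproduced on any firing-rate signal by fitting a line to log power versus log frequency at low frequencies. In the sketch below the "network firing rate" is synthetic 1/f noise generated by spectral shaping, purely so that the script is self-contained; the expected exponent is about 1.

```python
import numpy as np

# Estimate the low-frequency spectral exponent of a rate signal from the
# slope of log(PSD) vs log(f).  The signal here is synthetic 1/f noise.

rng = np.random.default_rng(5)
fs, n = 100.0, 2 ** 14                  # Hz, number of 10-ms rate bins

# build a signal with a ~1/f amplitude spectrum by shaping white noise
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
shaping = np.where(freqs > 0, 1.0 / np.sqrt(np.where(freqs > 0, freqs, 1.0)), 0.0)
rate = np.fft.irfft(np.fft.rfft(white) * shaping, n)

# periodogram and log-log fit over the low-frequency band
psd = np.abs(np.fft.rfft(rate)) ** 2 / (n * fs)
band = (freqs > 0.05) & (freqs < 5.0)
slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
print(f"estimated spectral exponent ~{-slope:.2f} (1/f^gamma)")
```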

  13. Functional analysis of neuronal microRNAs in Caenorhabditis elegans dauer formation by combinational genetics and Neuronal miRISC immunoprecipitation.

    PubMed

    Than, Minh T; Kudlow, Brian A; Han, Min

    2013-06-01

    Identifying the physiological functions of microRNAs (miRNAs) is often challenging because miRNAs commonly impact gene expression under specific physiological conditions through complex miRNA::mRNA interaction networks and in coordination with other means of gene regulation, such as transcriptional regulation and protein degradation. Such complexity creates difficulties in dissecting miRNA functions through traditional genetic methods using individual miRNA mutations. To investigate the physiological functions of miRNAs in neurons, we combined a genetic "enhancer" approach complemented by biochemical analysis of neuronal miRNA-induced silencing complexes (miRISCs) in C. elegans. Total miRNA function can be compromised by mutating one of the two GW182 proteins (AIN-1), an important component of miRISC. We found that combining an ain-1 mutation with a mutation in unc-3, a neuronal transcription factor, resulted in an inappropriate entrance into the stress-induced, alternative larval stage known as dauer, indicating a role of miRNAs in preventing aberrant dauer formation. Analysis of this genetic interaction suggests that neuronal miRNAs perform such a role partly by regulating endogenous cyclic guanosine monophosphate (cGMP) signaling, potentially influencing two other dauer-regulating pathways. Through tissue-specific immunoprecipitations of miRISC, we identified miRNAs and their likely target mRNAs within neuronal tissue. We verified the biological relevance of several of these miRNAs and found that many miRNAs likely regulate dauer formation through multiple dauer-related targets. Further analysis of target mRNAs suggests potential miRNA involvement in various neuronal processes, but the importance of these miRNA::mRNA interactions remains unclear. Finally, we found that neuronal genes may be more highly regulated by miRNAs than intestinal genes. Overall, our study identifies miRNAs and their targets, and a physiological function of these miRNAs in neurons. It also suggests that compromising other aspects of gene expression, along with miRISC, can be an effective approach to reveal miRNA functions in specific tissues under specific physiological conditions.

  14. Numbers And Gains Of Neurons In Winner-Take-All Networks

    NASA Technical Reports Server (NTRS)

    Brown, Timothy X.

    1993-01-01

    This report presents a theoretical study of the gains required in neurons to implement a winner-take-all electronic neural network of a given size, and of the related question of the maximum size of a winner-take-all network in which the neurons have a specified sigmoid transfer (response) function with a specified gain.
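
    A minimal dynamical version of the circuit analyzed in the report is shown below: sigmoid units with self-excitation and uniform lateral inhibition, relaxed to a fixed point. The gain, weights, and network size are illustrative and are not the report's analytic bounds.

```python
import numpy as np

# Winner-take-all sketch: each sigmoid unit excites itself and inhibits every
# other unit; with sufficient gain only the unit with the largest external
# input stays active at the fixed point.

def sigmoid(z, gain):
    return 1.0 / (1.0 + np.exp(-gain * z))

def winner_take_all(external, gain=20.0, w_self=1.0, w_inh=1.2,
                    alpha=0.2, n_iter=300):
    y = np.full(external.size, 0.5)
    for _ in range(n_iter):
        net = external + w_self * y - w_inh * (y.sum() - y)   # lateral inhibition
        y += alpha * (sigmoid(net, gain) - y)                  # relaxation update
    return y

ext = np.array([0.10, 0.30, 0.25, 0.05])
y = winner_take_all(ext)
print("activities:", np.round(y, 3), "-> winner: unit", int(np.argmax(y)))
```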

  15. Simultaneous stability and sensitivity in model cortical networks is achieved through anti-correlations between the in- and out-degree of connectivity

    PubMed Central

    Vasquez, Juan C.; Houweling, Arthur R.; Tiesinga, Paul

    2013-01-01

    Neuronal networks in rodent barrel cortex are characterized by stable low baseline firing rates. However, they are sensitive to the action potentials of single neurons as suggested by recent single-cell stimulation experiments that reported quantifiable behavioral responses in response to short spike trains elicited in single neurons. Hence, these networks are stable against internally generated fluctuations in firing rate but at the same time remain sensitive to similarly-sized externally induced perturbations. We investigated stability and sensitivity in a simple recurrent network of stochastic binary neurons and determined numerically the effects of correlation between the number of afferent (“in-degree”) and efferent (“out-degree”) connections in neurons. The key advance reported in this work is that anti-correlation between in-/out-degree distributions increased the stability of the network in comparison to networks with no correlation or positive correlations, while being able to achieve the same level of sensitivity. The experimental characterization of degree distributions is difficult because all pre-synaptic and post-synaptic neurons have to be identified and counted. We explored whether the statistics of network motifs, which requires the characterization of connections between small subsets of neurons, could be used to detect evidence for degree anti-correlations. We find that the sample frequency of the 3-neuron “ring” motif (1→2→3→1), can be used to detect degree anti-correlation for sub-networks of size 30 using about 50 samples, which is of significance because the necessary measurements are achievable experimentally in the near future. Taken together, we hypothesize that barrel cortex networks exhibit degree anti-correlations and specific network motif statistics. PMID:24223550
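
    The "ring" motif statistic proposed above is straightforward to compute from an adjacency matrix: trace(A^3)/3 counts directed 3-cycles i -> j -> k -> i (additional reciprocal edges inside the triple are not excluded). The sketch below applies it to an Erdos-Renyi digraph for scale only; it does not reproduce the paper's degree-correlated network ensembles.

```python
import numpy as np

# Count 3-neuron ring motifs (directed 3-cycles) in a random digraph and
# compare with the Erdos-Renyi expectation n(n-1)(n-2)/3 * p^3.

rng = np.random.default_rng(6)
n, p = 30, 0.2

A = (rng.random((n, n)) < p).astype(int)
np.fill_diagonal(A, 0)                     # no self-connections

ring_count = np.trace(np.linalg.matrix_power(A, 3)) // 3
expected = n * (n - 1) * (n - 2) / 3 * p ** 3
print(f"ring motifs: {ring_count} (ER expectation ~{expected:.1f})")
```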

  16. 1D-3D hybrid modeling-from multi-compartment models to full resolution models in space and time.

    PubMed

    Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian

    2014-01-01

    Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field, that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail, in order to cope with the attached computational cost. Hence for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cell or networks with relatively small numbers of neurons, general purpose simulators allow for space and time-dependent simulations of electrical signal processing, based on the cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space and time-dependent, simulations. While every approach has its advantages and limitations, such as computational cost, integrated and methods-spanning simulation approaches, depending on the network size could establish new ways to investigate the brain. In this paper we present a hybrid simulation approach, that makes use of reduced 1D-models using e.g., the NEURON simulator-which couples to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D- and 3D-simulations, we present a geometry-, membrane potential- and intracellular concentration mapping framework, with which graph- based morphologies, e.g., in the swc- or hoc-format, are mapped to full surface and volume representations of the neuron and computational data from 1D-simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data, based on general purpose 1D-simulators, can be directly coupled to the emerging field of fully resolved, highly detailed 3D-modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.

  17. 1D-3D hybrid modeling—from multi-compartment models to full resolution models in space and time

    PubMed Central

    Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M.; Queisser, Gillian

    2014-01-01

    Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the associated computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general-purpose simulators allow for space- and time-dependent simulations of electrical signal processing based on cable equation theory. An emerging field in Computational Neuroscience adds a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated, methods-spanning simulation approaches chosen according to network size could establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (built, e.g., with the NEURON simulator) coupled to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry-, membrane potential-, and intracellular concentration-mapping framework with which graph-based morphologies, e.g., in the swc or hoc format, are mapped to full surface and volume representations of the neuron; computational data from 1D simulations can then be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data based on general-purpose 1D simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics. PMID:25120463

  18. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model with the Volterra functional power series. We demonstrate that the IO synapse model successfully tracks the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model efficiently replicates the complex nonlinear dynamics represented in the original mechanistic model and provides a method to replicate complex and diverse synaptic transmission within neuron network simulations.
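
    The IO representation in this record rests on the discrete-time Volterra functional power series, in which the output at each time step is a constant plus first-, second-, and third-order convolutions of the input with a set of kernels. The sketch below evaluates such a third-order expansion for a presynaptic spike-train input; the kernel shapes, memory length, and amplitudes are invented toy values (the record derives its kernels from a mechanistic glutamatergic synapse model), so only the structure of the computation, not the numbers, reflects the paper.

        import numpy as np

        def volterra_response(x, k0, k1, k2, k3):
            # Discrete-time third-order Volterra series:
            #   y[t] = k0 + sum_a k1[a] x[t-a]
            #             + sum_{a,b} k2[a,b] x[t-a] x[t-b]
            #             + sum_{a,b,c} k3[a,b,c] x[t-a] x[t-b] x[t-c]
            # x : input signal (e.g. a binary presynaptic spike train), shape (T,)
            # k1: (M,), k2: (M, M), k3: (M, M, M) kernels over memory length M.
            T, M = len(x), len(k1)
            X = np.zeros((T, M))                 # delayed-input matrix, X[t, a] = x[t - a]
            for a in range(M):
                X[a:, a] = x[:T - a]
            y1 = X @ k1
            y2 = np.einsum("ta,tb,ab->t", X, X, k2, optimize=True)
            y3 = np.einsum("ta,tb,tc,abc->t", X, X, X, k3, optimize=True)
            return k0 + y1 + y2 + y3

        # Toy kernels -- illustrative shapes only, not the kernels estimated in the paper.
        M = 50
        decay = np.exp(-np.arange(M) / 10.0)
        k1 = 1.0 * decay                                     # EPSP-like linear kernel
        k2 = -0.02 * np.outer(decay, decay)                  # sublinear pairwise interaction
        k3 = 0.0005 * np.einsum("a,b,c->abc", decay, decay, decay)  # weak third-order term
        spikes = (np.random.default_rng(0).random(500) < 0.02).astype(float)
        psp = volterra_response(spikes, k0=0.0, k1=k1, k2=k2, k3=k3)

    The appeal for large-scale simulation is the fixed cost of this evaluation: once the kernels are identified, each output sample requires a bounded number of multiply-adds regardless of how many biochemical states the underlying mechanistic model contains.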

  19. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model with the Volterra functional power series. We demonstrate that the IO synapse model successfully tracks the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model efficiently replicates the complex nonlinear dynamics represented in the original mechanistic model and provides a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622

  20. GaAs Optoelectronic Integrated-Circuit Neurons

    NASA Technical Reports Server (NTRS)

    Lin, Steven H.; Kim, Jae H.; Psaltis, Demetri

    1992-01-01

    Monolithic GaAs optoelectronic integrated circuits developed for use as artificial neurons. Neural-network computer contains planar arrays of optoelectronic neurons, and variable synaptic connections between neurons effected by diffraction of light from volume hologram in photorefractive material. Basic principles of neural-network computers explained more fully in "Optoelectronic Integrated Circuits For Neural Networks" (NPO-17652). In present circuits, devices replaced by metal-semiconductor field-effect transistors (MESFETs), which consume less power.
