Sample records for dynamic neural fields

  1. Field-theoretic approach to fluctuation effects in neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buice, Michael A.; Cowan, Jack D.; Mathematics Department, University of Chicago, Chicago, Illinois 60637

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginsburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.
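
    The absorbing-state transition described above can be illustrated with a toy Galton-Watson branching process. This is not the authors' effective spike model, only a minimal sketch of the subcritical/supercritical distinction behind a directed-percolation-style transition; the branching ratio `sigma` stands in for the control parameter.

```python
import numpy as np

def simulate_branching(sigma, n_steps=100, n0=100, seed=0):
    """Toy branching process: each active unit independently spawns a
    Poisson(sigma) number of active units at the next time step.
    The process is subcritical (activity dies into the absorbing state)
    for sigma < 1 and supercritical for sigma > 1."""
    rng = np.random.default_rng(seed)
    n = n0
    history = [n]
    for _ in range(n_steps):
        n = int(rng.poisson(sigma * n)) if n > 0 else 0
        history.append(n)
    return history

sub = simulate_branching(0.8)  # decays into the absorbing (quiescent) state
sup = simulate_branching(1.2)  # activity proliferates (no saturation in this toy)
```

    Note that this caricature omits the saturation and refractoriness that give the full model its mean-field Wilson-Cowan limit.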

  2. Dynamics of absence seizures

    NASA Astrophysics Data System (ADS)

    Deeba, Farah; Sanz-Leon, Paula; Robinson, Peter

    A neural field model of the corticothalamic system is used to investigate the dynamics of absence seizures in the presence of temporally varying connection strength between the cerebral cortex and thalamus. Variation of connection strength from cortex to thalamus drives the system into seizure once a threshold is passed and a supercritical Hopf bifurcation occurs. The dynamics and spectral characteristics of the resulting seizures are explored as functions of maximum connection strength, time above threshold, and ramp rate. The results enable spectral and temporal characteristics of seizures to be related to underlying physiological variations via nonlinear dynamics and neural field theory. Notably, this analysis adds to neural field modeling of a wide variety of brain activity phenomena and measurements in recent years. This work was supported by Australian Research Council Grants FL1401000225 and CE140100007.
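
    The seizure onset mechanism described above can be caricatured with the supercritical Hopf normal form. This is not the corticothalamic neural field model itself, just a hedged sketch in which a ramped bifurcation parameter mu(t) stands in for the time-varying cortex-to-thalamus connection strength:

```python
import numpy as np

def hopf_ramp(mu_max=5.0, omega=2 * np.pi * 3.0, t_end=10.0, dt=1e-3):
    """Supercritical Hopf normal form dz/dt = (mu(t) + i*omega)*z - |z|^2 * z.
    mu(t) is ramped linearly from -mu_max up through zero to +mu_max and back
    down; oscillations grow while mu(t) > 0 and die away once it drops below
    zero again, a caricature of seizure onset and offset."""
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n)
    mu = mu_max * (1.0 - 2.0 * np.abs(t - t_end / 2) / (t_end / 2))
    z = 1e-3 + 0j                      # small initial perturbation
    amp = np.empty(n)
    for i in range(n):
        z += dt * ((mu[i] + 1j * omega) * z - abs(z) ** 2 * z)
        amp[i] = abs(z)
    return t, mu, amp

t, mu, amp = hopf_ramp()
```

    In this sketch the oscillation envelope peaks late in the supra-threshold window and collapses after mu returns below zero, mirroring the dependence on time above threshold and ramp rate studied in the record.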

  3. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    PubMed

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining the neural dynamics of a cellular neural network with the ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of the cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the soft tissues' nonlinear deformation and typical mechanical behaviors. The proposed method not only augments ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.

  4. The Complexity of Dynamics in Small Neural Circuits

    PubMed Central

    Panzeri, Stefano

    2016-01-01

    Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network that reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737

  5. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating.

    PubMed

    Knips, Guido; Zibner, Stephan K U; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

    Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp.
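
    The Dynamic Field Theory machinery referred to above rests on dynamic neural fields of the Amari type. The following is a minimal, self-contained 1-D sketch with illustrative parameters of our own choosing (not those of the robot architecture): a localized input drives the formation of a self-stabilized activation peak, the basic mechanism behind attentional selection.

```python
import numpy as np

def simulate_field(n=101, steps=1000, dt=0.02, tau=1.0, h=-5.0):
    """Minimal 1-D Amari-style dynamic neural field,
        tau * du/dt = -u + h + s(x) + sum_x' w(x - x') * f(u(x')),
    with a local-excitation / broader-inhibition kernel w and a sigmoid
    rate function f. A localized input s drives a self-stabilized peak."""
    x = np.arange(n)
    dist = np.abs(x[:, None] - x[None, :])
    # kernel: narrow excitation minus wider, weaker inhibition
    w = 8.0 * np.exp(-dist**2 / (2 * 3.0**2)) - 3.0 * np.exp(-dist**2 / (2 * 10.0**2))
    s = 6.0 * np.exp(-(x - n // 2) ** 2 / (2 * 3.0**2))  # localized input bump
    f = lambda v: 1.0 / (1.0 + np.exp(-v))               # sigmoid rate function
    u = np.full(n, h, dtype=float)                       # start at resting level
    for _ in range(steps):
        u += dt / tau * (-u + h + s + w @ f(u))
    return x, u

x, u = simulate_field()
```

    In the full architecture such fields are coupled so that, for example, estimated object pose feeds the movement-generation dynamics; only the single-field peak mechanism is shown here.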

  6. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating

    PubMed Central

    Knips, Guido; Zibner, Stephan K. U.; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

    Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp. PMID:28303100

  7. Contributions of Dynamic Systems Theory to Cognitive Development

    ERIC Educational Resources Information Center

    Spencer, John P.; Austin, Andrew; Schutte, Anne R.

    2012-01-01

    We examine the contributions of dynamic systems theory to the field of cognitive development, focusing on modeling using dynamic neural fields. After introducing central concepts of dynamic field theory (DFT), we probe empirical predictions and findings around two examples--the DFT of infant perseverative reaching that explains Piaget's A-not-B…

  8. A Neural Dynamic Model Generates Descriptions of Object-Oriented Actions.

    PubMed

    Richter, Mathis; Lins, Jonas; Schöner, Gregor

    2017-01-01

    Describing actions entails that relations between objects are discovered. A pervasively neural account of this process requires that fundamental problems are solved: the neural pointer problem, the binding problem, and the problem of generating discrete processing steps from time-continuous neural processes. We present a prototypical solution to these problems in a neural dynamic model that comprises dynamic neural fields holding representations close to sensorimotor surfaces as well as dynamic neural nodes holding discrete, language-like representations. Making the connection between these two types of representations enables the model to describe actions as well as to perceptually ground movement phrases, all based on real visual input. We demonstrate how the dynamic neural processes autonomously generate the processing steps required to describe or ground object-oriented actions. By solving the fundamental problems of neural pointing, binding, and emergent discrete processing, the model may be a first but critical step toward a systematic neural processing account of higher cognition. Copyright © 2017 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  9. Model-based functional neuroimaging using dynamic neural fields: An integrative cognitive neuroscience approach

    PubMed Central

    Wijeakumar, Sobanawartiny; Ambrose, Joseph P.; Spencer, John P.; Curtu, Rodica

    2017-01-01

    A fundamental challenge in cognitive neuroscience is to develop theoretical frameworks that effectively span the gap between brain and behavior, between neuroscience and psychology. Here, we attempt to bridge this divide by formalizing an integrative cognitive neuroscience approach using dynamic field theory (DFT). We begin by providing an overview of how DFT seeks to understand the neural population dynamics that underlie cognitive processes through previous applications and comparisons to other modeling approaches. We then use previously published behavioral and neural data from a response selection Go/Nogo task as a case study for model simulations. Results from this study served as the ‘standard’ for comparisons with a model-based fMRI approach using dynamic neural fields (DNF). The tutorial explains the rationale and hypotheses involved in the process of creating the DNF architecture and fitting model parameters. Two DNF models, with similar structure and parameter sets, are then compared. Both models effectively simulated reaction times from the task as we varied the number of stimulus-response mappings and the proportion of Go trials. Next, we directly simulated hemodynamic predictions from the neural activation patterns from each model. These predictions were tested using general linear models (GLMs). Results showed that the DNF model that was created by tuning parameters to simultaneously capture trends in neural activation and behavioral data quantitatively outperformed a standard GLM analysis of the same dataset. Further, by using the GLM results to assign functional roles to particular clusters in the brain, we illustrate how DNF models shed new light on the neural populations’ dynamics within particular brain regions. Thus, the present study illustrates how an interactive cognitive neuroscience model can be used in practice to bridge the gap between brain and behavior. PMID:29118459
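
    Simulating hemodynamic predictions from neural activation patterns generally amounts to convolving a model-derived activation time course with a hemodynamic response function (HRF) to obtain a GLM regressor. The sketch below uses the common double-gamma HRF shape with widely used default parameters; these defaults and the toy activation are our assumptions, not the study's exact pipeline.

```python
import numpy as np
from math import gamma as gamma_fn

def canonical_hrf(dt=0.1, duration=30.0):
    """Double-gamma HRF (positive lobe peaking roughly 5 s after an impulse,
    with a late undershoot), built from gamma-distribution densities."""
    t = np.arange(0.0, duration, dt)
    def gpdf(t, shape, scale):
        return t ** (shape - 1) * np.exp(-t / scale) / (gamma_fn(shape) * scale ** shape)
    return gpdf(t, 6.0, 1.0) - gpdf(t, 16.0, 1.0) / 6.0

def bold_prediction(neural_activity, dt=0.1):
    """Convolve a model-derived activation time course with the HRF to get
    a predicted BOLD regressor for a general linear model."""
    h = canonical_hrf(dt=dt)
    return np.convolve(neural_activity, h)[: len(neural_activity)] * dt

# hypothetical activation: a 2 s burst of field activity starting at t = 5 s
act = np.zeros(300)      # 30 s sampled at dt = 0.1 s
act[50:70] = 1.0
bold = bold_prediction(act)
```

    The predicted BOLD response lags the neural burst by several seconds, which is the delay a GLM regressor built this way accounts for.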

  10. Dynamic Neural State Identification in Deep Brain Local Field Potentials of Neuropathic Pain.

    PubMed

    Luo, Huichun; Huang, Yongzhi; Du, Xueying; Zhang, Yunpeng; Green, Alexander L; Aziz, Tipu Z; Wang, Shouyan

    2018-01-01

    In neuropathic pain, the neurophysiological and neuropathological function of the ventro-posterolateral nucleus of the thalamus (VPL) and the periventricular gray/periaqueductal gray area (PVAG) involves multiple frequency oscillations. Moreover, oscillations related to pain perception and modulation change dynamically over time. Fluctuations in these neural oscillations reflect the dynamic neural states of the nucleus. In this study, an approach to classifying the synchronization level was developed to dynamically identify the neural states. An oscillation extraction model based on windowed wavelet packet transform was designed to characterize the activity level of oscillations. The wavelet packet coefficients sparsely represented the activity level of theta and alpha oscillations in local field potentials (LFPs). Then, a state discrimination model was designed to calculate an adaptive threshold to determine the activity level of oscillations. Finally, the neural state was represented by the activity levels of both theta and alpha oscillations. The relationship between neural states and pain relief was further evaluated. The state identification approach achieved sensitivity and specificity above 80% on simulated signals. Neural states of the PVAG and VPL were dynamically identified from LFPs of neuropathic pain patients. The occurrence of neural states based on theta and alpha oscillations was correlated with the degree of pain relief by deep brain stimulation. In the PVAG LFPs, the occurrence of the state with high activity levels of theta oscillations independent of alpha and the state with low-level alpha and high-level theta oscillations were significantly correlated with pain relief by deep brain stimulation. This study provides a reliable approach to identifying the dynamic neural states in LFPs with a low signal-to-noise ratio by using sparse representation based on wavelet packet transform. Furthermore, it may advance closed-loop deep brain stimulation based on neural states integrating multiple neural oscillations.
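
    A coarse version of this state-identification idea can be sketched with FFT band power standing in for the paper's windowed wavelet packet coefficients; the band edges and the median-based adaptive threshold below are illustrative assumptions, not the authors' discrimination model.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Signal power in [lo, hi) Hz via the FFT; a simplified stand-in for
    windowed wavelet packet coefficients."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def classify_states(lfp, fs, win_s=1.0):
    """Label theta (4-8 Hz) and alpha (8-12 Hz) activity in each window as
    high or low relative to an adaptive threshold (the median across
    windows), giving a coarse two-band neural state per window."""
    w = int(win_s * fs)
    wins = [lfp[i:i + w] for i in range(0, len(lfp) - w + 1, w)]
    theta = np.array([band_power(x, fs, 4.0, 8.0) for x in wins])
    alpha = np.array([band_power(x, fs, 8.0, 12.0) for x in wins])
    return theta > np.median(theta), alpha > np.median(alpha)

# synthetic LFP: constant weak alpha, with a theta burst in the second half
fs = 200
t = np.arange(0.0, 10.0, 1.0 / fs)
lfp = 0.1 * np.sin(2 * np.pi * 10.0 * t)
lfp[len(lfp) // 2:] += np.sin(2 * np.pi * 6.0 * t[len(t) // 2:])
theta_hi, alpha_hi = classify_states(lfp, fs)
```

    In a closed-loop setting, per-window state labels of this kind would be what a stimulation controller conditions on.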

  11. Dynamic Neural State Identification in Deep Brain Local Field Potentials of Neuropathic Pain

    PubMed Central

    Luo, Huichun; Huang, Yongzhi; Du, Xueying; Zhang, Yunpeng; Green, Alexander L.; Aziz, Tipu Z.; Wang, Shouyan

    2018-01-01

    In neuropathic pain, the neurophysiological and neuropathological function of the ventro-posterolateral nucleus of the thalamus (VPL) and the periventricular gray/periaqueductal gray area (PVAG) involves multiple frequency oscillations. Moreover, oscillations related to pain perception and modulation change dynamically over time. Fluctuations in these neural oscillations reflect the dynamic neural states of the nucleus. In this study, an approach to classifying the synchronization level was developed to dynamically identify the neural states. An oscillation extraction model based on windowed wavelet packet transform was designed to characterize the activity level of oscillations. The wavelet packet coefficients sparsely represented the activity level of theta and alpha oscillations in local field potentials (LFPs). Then, a state discrimination model was designed to calculate an adaptive threshold to determine the activity level of oscillations. Finally, the neural state was represented by the activity levels of both theta and alpha oscillations. The relationship between neural states and pain relief was further evaluated. The state identification approach achieved sensitivity and specificity above 80% on simulated signals. Neural states of the PVAG and VPL were dynamically identified from LFPs of neuropathic pain patients. The occurrence of neural states based on theta and alpha oscillations was correlated with the degree of pain relief by deep brain stimulation. In the PVAG LFPs, the occurrence of the state with high activity levels of theta oscillations independent of alpha and the state with low-level alpha and high-level theta oscillations were significantly correlated with pain relief by deep brain stimulation. This study provides a reliable approach to identifying the dynamic neural states in LFPs with a low signal-to-noise ratio by using sparse representation based on wavelet packet transform. Furthermore, it may advance closed-loop deep brain stimulation based on neural states integrating multiple neural oscillations. PMID:29695951

  12. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.

    PubMed

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K U; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables users to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.
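
    The "dynamic regimes" tuned in such architectures can be illustrated with a single self-excitatory dynamic node; the parameters below are illustrative choices, not cedar defaults. Sweeping the input up and then down reveals detection and reverse-detection instabilities at different input levels (hysteresis), which is precisely the attractor structure that parameter tuning aims to place components in.

```python
import numpy as np

def settle(u, s, h=-4.0, c=6.0, tau=1.0, dt=0.05, steps=2000):
    """Relax a self-excitatory node, tau * du/dt = -u + h + s + c * f(u),
    to its attractor starting from activation u, for input strength s."""
    f = lambda v: 1.0 / (1.0 + np.exp(-v))
    for _ in range(steps):
        u += dt / tau * (-u + h + s + c * f(u))
    return u

# sweep the input up and back down, tracking which attractor the node sits in
ups, downs = [], []
u = -4.0                                # start in the resting (off) state
for s in np.linspace(0.0, 4.0, 41):
    u = settle(u, s)
    ups.append(u)
for s in np.linspace(4.0, 0.0, 41):
    u = settle(u, s)
    downs.append(u)
```

    At intermediate input (e.g. s = 1.0) the node is still "off" on the upward sweep but still "on" on the downward sweep: a bistable working regime.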

  13. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar

    PubMed Central

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K. U.; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables users to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs. PMID:27853431

  14. Notochord-derived Shh concentrates in close association with the apically positioned basal body in neural target cells and forms a dynamic gradient during neural patterning.

    PubMed

    Chamberlain, Chester E; Jeong, Juhee; Guo, Chaoshe; Allen, Benjamin L; McMahon, Andrew P

    2008-03-01

    Sonic hedgehog (Shh) ligand secreted by the notochord induces distinct ventral cell identities in the adjacent neural tube by a concentration-dependent mechanism. To study this process, we genetically engineered mice that produce bioactive, fluorescently labeled Shh from the endogenous locus. We show that Shh ligand concentrates in close association with the apically positioned basal body of neural target cells, forming a dynamic, punctate gradient in the ventral neural tube. Both ligand lipidation and target field response influence the gradient profile, but not the ability of Shh to concentrate around the basal body. Further, subcellular analysis suggests that Shh from the notochord might traffic into the neural target field by means of an apical-to-basal-oriented microtubule scaffold. This study, in which we directly observe, measure, localize and modify notochord-derived Shh ligand in the context of neural patterning, provides several new insights into mechanisms of Shh morphogen action.

  15. Integration and segregation in auditory streaming

    NASA Astrophysics Data System (ADS)

    Almonte, Felix; Jirsa, Viktor K.; Large, Edward W.; Tuller, Betty

    2005-12-01

    We aim to capture the perceptual dynamics of auditory streaming using a neurally inspired model of auditory processing. Traditional approaches view streaming as a competition of streams, realized within a tonotopically organized neural network. In contrast, we view streaming as a dynamic integration process which resides at locations other than the sensory specific neural subsystems. This process finds its realization in the synchronization of neural ensembles or in the existence of informational convergence zones. Our approach uses two interacting dynamical systems, in which the first system responds to incoming acoustic stimuli and transforms them into a spatiotemporal neural field dynamics. The second system is a classification system coupled to the neural field and evolves to a stationary state. These states are identified with a single perceptual stream or multiple streams. Several results in human perception are modelled, including temporal coherence and fission boundaries [L.P.A.S. van Noorden, Temporal coherence in the perception of tone sequences, Ph.D. Thesis, Eindhoven University of Technology, The Netherlands, 1975], and crossing of motions [A.S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound, MIT Press, 1990]. Our model predicts phenomena such as the existence of two streams with the same pitch, which cannot be explained by the traditional stream competition models. An experimental study is performed to demonstrate the existence of this phenomenon. The model elucidates possible mechanisms that may underlie perceptual phenomena.

  16. Contributions of Dynamic Systems Theory to Cognitive Development

    PubMed Central

    Spencer, John P.; Austin, Andrew; Schutte, Anne R.

    2015-01-01

    This paper examines the contributions of dynamic systems theory to the field of cognitive development, focusing on modeling using dynamic neural fields. A brief overview highlights the contributions of dynamic systems theory and the central concepts of dynamic field theory (DFT). We then probe empirical predictions and findings generated by DFT around two examples—the DFT of infant perseverative reaching that explains the Piagetian A-not-B error, and the DFT of spatial memory that explains changes in spatial cognition in early development. A systematic review of the literature around these examples reveals that computational modeling is having an impact on empirical research in cognitive development; however, this impact does not extend to neural and clinical research. Moreover, there is a tendency for researchers to interpret models narrowly, anchoring them to specific tasks. We conclude on an optimistic note, encouraging both theoreticians and experimentalists to work toward a more theory-driven future. PMID:26052181

  17. Dynamic plasticity in coupled avian midbrain maps

    NASA Astrophysics Data System (ADS)

    Atwal, Gurinder Singh

    2004-12-01

    Internal mapping of the external environment is carried out using the receptive fields of topographic neurons in the brain, and in a normal barn owl the aural and visual subcortical maps are aligned from early experiences. However, instantaneous misalignment of the aural and visual stimuli has been observed to result in adaptive behavior, manifested by functional and anatomical changes of the auditory processing system. Using methods of information theory and statistical mechanics a model of the adaptive dynamics of the aural receptive field is presented and analyzed. The dynamics is determined by maximizing the mutual information between the neural output and the weighted sensory neural inputs, admixed with noise, subject to biophysical constraints. The reduced costs of neural rewiring, as in the case of young barn owls, reveal two qualitatively different types of receptive field adaptation depending on the magnitude of the audiovisual misalignment. By letting the misalignment increase with time, it is shown that the ability to adapt can be increased even when neural rewiring costs are high, in agreement with recent experimental reports of the increased plasticity of the auditory space map in adult barn owls due to incremental learning. Finally, a critical speed of misalignment is identified, demarcating the crossover from adaptive to nonadaptive behavior.

  18. The receptive field is dead. Long live the receptive field?

    PubMed Central

    Fairhall, Adrienne

    2014-01-01

    Advances in experimental techniques, including behavioral paradigms using rich stimuli under closed loop conditions and the interfacing of neural systems with external inputs and outputs, reveal complex dynamics in the neural code and require a revisiting of standard concepts of representation. High-throughput recording and imaging methods along with the ability to observe and control neuronal subpopulations allow increasingly detailed access to the neural circuitry that subserves these representations and the computations they support. How do we harness theory to build biologically grounded models of complex neural function? PMID:24618227

  19. The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields

    PubMed Central

    Deco, Gustavo; Jirsa, Viktor K.; Robinson, Peter A.; Breakspear, Michael; Friston, Karl

    2008-01-01

    The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space–time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences. 
PMID:18769680

  20. A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology

    ERIC Educational Resources Information Center

    Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…

  1. Neural networks with excitatory and inhibitory components: Direct and inverse problems by a mean-field approach

    NASA Astrophysics Data System (ADS)

    di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro

    2016-01-01

    We study the dynamics of networks of inhibitory and excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data.

  2. Synthesis of recurrent neural networks for dynamical system simulation.

    PubMed

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, and then recast as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
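    The core idea (fit a feedforward map to the sampled vector field, then run the fitted map in closed loop as a continuous-time system) can be illustrated with a toy sketch. Here a linear least-squares fit stands in for backpropagation training; the example system, its parameters, and the Euler integrator are illustrative assumptions rather than the authors' algorithm:

```python
import numpy as np

# Ground-truth dynamical system: a stable linear vector field dx/dt = A @ x.
A_true = np.array([[-0.5, -2.0],
                   [ 2.0, -0.5]])

# "Training" data: sample states and the vector field evaluated there.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Y = X @ A_true.T                      # targets: dx/dt at each sampled state

# Fit the vector field (least squares stands in for backprop here).
A_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_fit = A_fit.T

# Recast the fitted map as a continuous-time recurrent system and integrate.
def simulate(A, x0, dt=0.01, steps=500):
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (A @ x)          # forward Euler step
    return x

x0 = np.array([1.0, 0.0])
x_true = simulate(A_true, x0)
x_model = simulate(A_fit, x0)
print(np.max(np.abs(x_true - x_model)))  # near zero: the learned system replicates the flow
```

    Because the fit is exact for a linear field, the closed-loop simulation reproduces the original trajectory; for a nonlinear system, a trained feedforward network would take the place of `A_fit`.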

  3. Scale-Free Neural and Physiological Dynamics in Naturalistic Stimuli Processing

    PubMed Central

    Lin, Amy

    2016-01-01

    Abstract Neural activity recorded at multiple spatiotemporal scales is dominated by arrhythmic fluctuations without a characteristic temporal periodicity. Such activity often exhibits a 1/f-type power spectrum, in which power falls off with increasing frequency following a power-law function: P(f) ∝ 1/f^β, which is indicative of scale-free dynamics. Two extensively studied forms of scale-free neural dynamics in the human brain are slow cortical potentials (SCPs)—the low-frequency (<5 Hz) component of brain field potentials—and the amplitude fluctuations of α oscillations, both of which have been shown to carry important functional roles. In addition, scale-free dynamics characterize normal human physiology such as heartbeat dynamics. However, the exact relationships among these scale-free neural and physiological dynamics remain unclear. We recorded simultaneous magnetoencephalography and electrocardiography in healthy subjects in the resting state and while performing a discrimination task on scale-free dynamical auditory stimuli that followed different scale-free statistics. We observed that long-range temporal correlation (captured by the power-law exponent β) in SCPs positively correlated with that of heartbeat dynamics across time within an individual and negatively correlated with that of α-amplitude fluctuations across individuals. In addition, across individuals, long-range temporal correlation of both SCP and α-oscillation amplitude predicted subjects’ discrimination performance in the auditory task, albeit through antagonistic relationships. These findings reveal interrelations among different scale-free neural and physiological dynamics and initial evidence for the involvement of scale-free neural dynamics in the processing of natural stimuli, which often exhibit scale-free dynamics. PMID:27822495
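    The scale-free relation P(f) ∝ 1/f^β can be made concrete with a short sketch: synthesize a signal with a prescribed spectral exponent by shaping white noise in the frequency domain, then recover β from a straight-line fit of log power against log frequency. This is a generic periodogram regression for illustration, not the analysis pipeline used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def scale_free_signal(n, beta, rng):
    """Synthesize a signal with power spectrum P(f) ~ 1/f^beta by shaping white noise."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-beta / 2.0)   # amplitude ~ f^(-beta/2), so power ~ f^(-beta)
    return np.fft.irfft(spectrum * scale, n)

def estimate_beta(x):
    """Estimate the power-law exponent via a straight-line fit in log-log coordinates."""
    freqs = np.fft.rfftfreq(len(x), d=1.0)[1:]      # drop the DC bin
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

x = scale_free_signal(2**14, beta=1.0, rng=rng)
print(estimate_beta(x))   # close to the prescribed exponent of 1
```

    In practice, robust estimates of β for neural data use more careful spectral estimators (e.g. Welch averaging) and restricted frequency ranges, but the log-log slope is the underlying quantity.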

  4. Outline of a general theory of behavior and brain coordination.

    PubMed

    Kelso, J A Scott; Dumas, Guillaume; Tognoli, Emmanuelle

    2013-01-01

    Much evidence suggests that dynamic laws of neurobehavioral coordination are sui generis: they deal with collective properties that are repeatable from one system to another and emerge from microscopic dynamics but may not (even in principle) be deducible from them. Nevertheless, it is useful to try to understand the relationship between different levels while all the time respecting the autonomy of each. We report a program of research that uses the theoretical concepts of coordination dynamics and quantitative measurements of simple, well-defined experimental model systems to explicitly relate neural and behavioral levels of description in human beings. Our approach is both top-down and bottom-up and aims at ending up in the same place: top-down to derive behavioral patterns from neural fields, and bottom-up to generate neural field patterns from bidirectional coupling between astrocytes and neurons. Much progress can be made by recognizing that the two approaches--reductionism and emergentism--are complementary. A key to understanding is to couch the coordination of very different things--from molecules to thoughts--in the common language of coordination dynamics. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Force Field for Water Based on Neural Network.

    PubMed

    Wang, Hao; Yang, Weitao

    2018-05-18

    We developed a novel neural-network-based force field for water, trained against high-level ab initio theory. The force field is built on the electrostatically embedded many-body expansion method truncated at binary interactions. The many-body expansion method is a common strategy to partition the total Hamiltonian of a large system into a hierarchy of few-body terms. Neural networks were trained to represent electrostatically embedded one-body and two-body interactions, which require as input only one- and two-water-molecule calculations at the CCSD/aug-cc-pVDZ level of ab initio electronic structure theory, embedded in the molecular mechanics water environment, making this an efficient general approach to force field construction. Structural and dynamic properties of liquid water calculated with our force field show good agreement with experimental results. We constructed two sets of neural-network-based force fields: non-polarizable and polarizable. Simulation results show that the non-polarizable force field using fixed TIP3P charges already behaves well, since polarization effects and many-body effects are implicitly included through the electrostatic embedding scheme. Our results demonstrate that the electrostatically embedded many-body expansion combined with neural networks provides a promising and systematic way to build next-generation force fields of high accuracy and low computational cost, especially for large systems.
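    The truncated expansion underlying the force field, E ~ sum_i E_i + sum_{i<j} [E_ij - E_i - E_j], can be sketched generically. In the paper the one- and two-body energies come from embedded CCSD/aug-cc-pVDZ calculations; here `e1` and `e2` are hypothetical stand-in functions with toy numbers:

```python
from itertools import combinations

def many_body_energy(fragments, e1, e2):
    """Two-body-truncated many-body expansion:
    E ~ sum_i E_i + sum_{i<j} [E_ij - E_i - E_j]."""
    one_body = [e1(f) for f in fragments]
    total = sum(one_body)
    for i, j in combinations(range(len(fragments)), 2):
        # Pair correction: dimer energy minus the two monomer energies.
        total += e2(fragments[i], fragments[j]) - one_body[i] - one_body[j]
    return total

# Toy stand-ins for the embedded QM calculations (hypothetical numbers):
e1 = lambda frag: -1.0                    # monomer energy
e2 = lambda a, b: e1(a) + e1(b) + 0.1     # dimer energy with a 0.1 pair correction

print(many_body_energy(["w1", "w2", "w3"], e1, e2))   # three monomers plus three pair corrections
```

    The practical appeal is that `e1` and `e2` only ever see one or two molecules at a time, so the expensive quantum calculations stay small regardless of system size.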

  7. Constructive autoassociative neural network for facial recognition.

    PubMed

    Fernandes, Bruno J T; Cavalcanti, George D C; Ren, Tsang I

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.

  8. A unified dynamic neural field model of goal directed eye movements

    NASA Astrophysics Data System (ADS)

    Quinton, J. C.; Goffart, L.

    2018-01-01

    Primates rely heavily on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. Interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
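    A minimal sketch of the kind of dynamic neural field equation such models build on (the Amari formulation, tau * du/dt = -u + w * f(u) + I, with local excitation and surround inhibition) shows how a localized input drives a self-stabilizing peak of activity. The kernel and all parameters below are illustrative, not those of the oculomotor model described above:

```python
import numpy as np

# 1D Amari-type neural field: tau * du/dt = -u + (w * f(u)) + I + h
n = 128
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = x[1] - x[0]

def kernel(d, a_exc=1.5, s_exc=0.3, a_inh=0.8, s_inh=1.0):
    """Difference-of-Gaussians lateral connectivity: local excitation, broad inhibition."""
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - a_inh * np.exp(-d**2 / (2 * s_inh**2))

W = kernel(x[:, None] - x[None, :])                    # connectivity matrix
f = lambda u: 1.0 / (1.0 + np.exp(-10 * (u - 0.5)))   # sigmoidal firing-rate function

u = np.full(n, -0.2)                                  # field starts below threshold
I = 0.6 * np.exp(-(x - 0.5)**2 / (2 * 0.2**2))        # localized input ("target") at x = 0.5

tau, dt = 1.0, 0.05
for _ in range(400):
    # Euler step; the -0.1 term is a constant resting level h.
    u += (dt / tau) * (-u + (W @ f(u)) * dx + I - 0.1)

print(x[np.argmax(u)])   # a peak of activity forms near the stimulus location (x ~ 0.5)
```

    The peak location here plays the role of the "current target estimate" in such models: once supra-threshold activity builds, lateral excitation sustains it and surround inhibition keeps it localized.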

  9. Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System

    PubMed Central

    Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability, characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, moving targets, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed signal analog/digital neuromorphic hardware. PMID:28747883

  11. An analysis of neural receptive field plasticity by point process adaptive filtering

    PubMed Central

    Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor

    2001-01-01

    Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
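    The instantaneous steepest-descent idea (step the parameter estimate along the gradient of the instantaneous point-process log likelihood in each time bin) can be sketched for the simplest case of a single log-rate parameter. The paper applies it to time-varying place-field parameters, so treat this as a stripped-down illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Point-process adaptive filter via instantaneous steepest descent.
# Model: lambda(t) = exp(theta); spikes in bins of width dt are
# approximately Bernoulli with probability lambda * dt.
dt = 0.001            # 1 ms bins
rate_true = 20.0      # true firing rate (Hz), held constant here for clarity
eps = 0.02            # learning rate
theta = 0.0           # log-rate estimate; lambda_hat = exp(theta)

for _ in range(100_000):
    dN = rng.random() < rate_true * dt      # observed spike indicator in this bin
    lam = np.exp(theta)
    # Gradient of the instantaneous log likelihood w.r.t. theta:
    # dN * d(log lambda)/d(theta) - (d lambda/d theta) * dt, with d(log lambda)/d(theta) = 1.
    theta += eps * (dN - lam * dt)

print(np.exp(theta))   # settles near the true 20 Hz rate
```

    With a time-varying true rate, the same update tracks the drift, which is the point of the adaptive filter; the learning rate trades tracking speed against estimation noise.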

  12. A dynamic neural field model of temporal order judgments.

    PubMed

    Hecht, Lauren N; Spencer, John P; Vecera, Shaun P

    2015-12-01

    Temporal ordering of events is biased, or influenced, by perceptual organization (figure-ground organization) and by spatial attention. For example, within a region assigned figural status or at an attended location, onset events are processed earlier (Lester, Hecht, & Vecera, 2009; Shore, Spence, & Klein, 2001), and offset events are processed for longer durations (Hecht & Vecera, 2011; Rolke, Ulrich, & Bausenhart, 2006). Here, we present an extension of a dynamic field model of change detection (Johnson, Spencer, Luck, & Schöner, 2009; Johnson, Spencer, & Schöner, 2009) that accounts for both the onset and offset performance for figural and attended regions. The model posits that neural populations processing the figure are more active, resulting in a peak of activation that quickly builds toward a detection threshold when the onset of a target is presented. This same enhanced activation for some neural populations is maintained when a present target is removed, creating delays in the perception of the target's offset. We discuss the broader implications of this model, including insights regarding how neural activation can be generated in response to the disappearance of information. (c) 2015 APA, all rights reserved.

  13. Artificial Neural Network L* from different magnetospheric field models

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.

    2011-12-01

    The third adiabatic invariant L* plays an important role in modeling and understanding radiation belt dynamics. The popular way to obtain the L* value numerically follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique that can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models are applied to create the L* data pool using 1 million data samples randomly selected within a solar cycle and within the global magnetosphere. The networks, trained from the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and can therefore significantly improve the L* calculation. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained from different magnetic field models can result in different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of the radiation belt charged particles is and which mechanism is dominant in accelerating them. This fact calls for care in choosing a magnetospheric field model for the L* calculation.

  14. Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans

    PubMed Central

    Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude

    2013-01-01

    Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards at different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimulus-reward contingencies. PMID:24302894

  15. Integrating the behavioral and neural dynamics of response selection in a dual-task paradigm: a dynamic neural field model of Dux et al. (2009).

    PubMed

    Buss, Aaron T; Wifall, Tim; Hazeltine, Eliot; Spencer, John P

    2014-02-01

    People are typically slower when executing two tasks than when performing a single task. These dual-task costs are initially robust but are reduced with practice. Dux et al. (2009) explored the neural basis of dual-task costs and learning using fMRI. The inferior frontal junction (IFJ) showed a larger hemodynamic response on dual-task trials compared with single-task trials early in learning. As dual-task costs were eliminated, dual-task hemodynamics in IFJ reduced to single-task levels. Dux and colleagues concluded that the reduction of dual-task costs is accomplished through increased efficiency of information processing in IFJ. We present a dynamic field theory of response selection that addresses two questions regarding these results. First, what mechanism leads to the reduction of dual-task costs and associated changes in hemodynamics? We show that a simple Hebbian learning mechanism is able to capture the quantitative details of learning at both the behavioral and neural levels. Second, is efficiency isolated to cognitive control areas such as IFJ, or is it also evident in sensory motor areas? To investigate this, we restrict Hebbian learning to different parts of the neural model. None of the restricted learning models showed the same reductions in dual-task costs as the unrestricted learning model, suggesting that efficiency is distributed across cognitive control and sensory motor processing systems.

  16. Quantitative theory of driven nonlinear brain dynamics.

    PubMed

    Roberts, J A; Robinson, P A

    2012-09-01

    Strong periodic stimuli such as bright flashing lights evoke nonlinear responses in the brain and interact nonlinearly with ongoing cortical activity, but the underlying mechanisms for these phenomena are poorly understood at present. The dominant features of these experimentally observed dynamics are reproduced by the dynamics of a quantitative neural field model subject to periodic drive. Model power spectra over a range of drive frequencies show agreement with multiple features of experimental measurements, exhibiting nonlinear effects including entrainment over a range of frequencies around the natural alpha frequency f(α), subharmonic entrainment near 2f(α), and harmonic generation. Further analysis of the driven dynamics as a function of the drive parameters reveals rich nonlinear dynamics that is predicted to be observable in future experiments at high drive amplitude, including period doubling, bistable phase-locking, hysteresis, wave mixing, and chaos indicated by positive Lyapunov exponents. Moreover, photosensitive seizures are predicted for physiologically realistic model parameters yielding bistability between healthy and seizure dynamics. These results demonstrate the applicability of neural field models to the new regime of periodically driven nonlinear dynamics, enabling interpretation of experimental data in terms of specific generating mechanisms and providing new tests of the theory. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Single-trial dynamics of motor cortex and their applications to brain-machine interfaces

    PubMed Central

    Kao, Jonathan C.; Nuyujukian, Paul; Ryu, Stephen I.; Churchland, Mark M.; Cunningham, John P.; Shenoy, Krishna V.

    2015-01-01

    Increasing evidence suggests that neural population responses have their own internal drive, or dynamics, that describe how the neural population evolves through time. An important prediction of neural dynamical models is that previously observed neural activity is informative of noisy yet-to-be-observed activity on single-trials, and may thus have a denoising effect. To investigate this prediction, we built and characterized dynamical models of single-trial motor cortical activity. We find these models capture salient dynamical features of the neural population and are informative of future neural activity on single trials. To assess how neural dynamics may beneficially denoise single-trial neural activity, we incorporate neural dynamics into a brain–machine interface (BMI). In online experiments, we find that a neural dynamical BMI achieves substantially higher performance than its non-dynamical counterpart. These results provide evidence that neural dynamics beneficially inform the temporal evolution of neural activity on single trials and may directly impact the performance of BMIs. PMID:26220660

  18. The dynamical analysis of modified two-compartment neuron model and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao

    2017-10-01

    The complexity of neural models is increasing with the investigation of larger biological neural networks, a greater variety of ionic channels, and more detailed morphologies, and the implementation of biological neural networks is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field programmable gate array (FPGA) to succinctly implement the reduced two-compartment model, which retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations, and it can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.

  19. Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making

    PubMed Central

    Schöner, Gregor; Gail, Alexander

    2012-01-01

    According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations. PMID:23166483
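    The reward-driven Hebbian ingredient can be sketched in a stripped-down form: strengthen a cue-to-action weight only when that pairing was active and rewarded, and let noisy winner-take-all competition do the selecting. The contingency, learning rate, and noise level below are hypothetical, and the full field model adds continuous spatial dynamics that this toy omits:

```python
import numpy as np

rng = np.random.default_rng(3)

# Reward-driven Hebbian learning of cue -> action associations (toy sketch).
n_cues, n_actions = 2, 2
W = np.zeros((n_actions, n_cues))     # cue-to-action association weights
eta = 0.1                             # learning rate
correct = {0: 1, 1: 0}                # hypothetical reward contingency

for trial in range(500):
    cue = int(rng.integers(n_cues))
    x = np.eye(n_cues)[cue]                                  # one-hot cue input
    logits = W @ x + 0.1 * rng.standard_normal(n_actions)    # noisy competition
    act = int(np.argmax(logits))                             # winner-take-all selection
    r = 1.0 if act == correct[cue] else 0.0                  # binary reward
    y = np.eye(n_actions)[act]
    W += eta * r * np.outer(y, x)     # Hebbian update gated by reward

print(np.argmax(W[:, 0]), np.argmax(W[:, 1]))   # learned mapping matches the contingency
```

    Early trials are driven by noise (exploration); once a rewarded pairing gains weight, it reliably wins the competition, which is the qualitative behavior the adaptive field model exploits.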

  20. Spatiotemporal canards in neural field equations

    NASA Astrophysics Data System (ADS)

    Avitabile, D.; Desroches, M.; Knobloch, E.

    2017-04-01

    Canards are special solutions to ordinary differential equations that follow invariant repelling slow manifolds for long time intervals. In realistic biophysical single-cell models, canards are responsible for several complex neural rhythms observed experimentally, but their existence and role in spatially extended systems is largely unexplored. We identify and describe a type of coherent structure in which a spatial pattern displays temporal canard behavior. Using interfacial dynamics and geometric singular perturbation theory, we classify spatiotemporal canards and give conditions for the existence of folded-saddle and folded-node canards. We find that spatiotemporal canards are robust to changes in the synaptic connectivity and firing rate. The theory correctly predicts the existence of spatiotemporal canards with octahedral symmetry in a neural field model posed on the unit sphere.

  1. Characterizing Deep Brain Stimulation effects in computationally efficient neural network models.

    PubMed

    Latteri, Alberta; Arena, Paolo; Mazzone, Paolo

    2011-04-15

    Recent studies on the medical treatment of Parkinson's disease (PD) led to the introduction of the so-called Deep Brain Stimulation (DBS) technique. This therapy makes it possible to actively counteract the pathological activity of various deep brain structures responsible for the well-known PD symptoms. The technique, frequently combined with the administration of dopaminergic drugs, replaces the surgical interventions used to suppress the activity of specific brain nuclei, the Basal Ganglia (BG). This clinical protocol made it possible to analyse and inspect signals measured from the electrodes implanted in the deep brain regions. The analysis of these signals allowed PD to be studied as a specific case of dynamical synchronization in biological neural networks, with the advantage that the theoretical tools developed in that field can be applied to find efficient treatments for this important disease. Experimental results in fact show that PD is characterized by pathological signal synchronization in the BG. Parkinsonian tremor, for example, is ascribed to neuron populations of the thalamic and striatal structures that undergo abnormal synchronization. In normal conditions, by contrast, the activity of the same neuron populations does not appear to be correlated or synchronized. To study in detail the effect of the stimulation signal on a pathological neural medium, efficient models of these neural structures were built, which are able to show, without any external input, the intrinsic properties of a pathological neural tissue, mimicking the synchronized dynamics of the BG. We start from a model already introduced in the literature to investigate the effects of electrical stimulation on pathologically synchronized clusters of neurons, which used Morris-Lecar type neurons. 
This neuron model, although highly biologically plausible, requires a large computational effort to simulate large-scale networks. For this reason we considered a reduced-order model, the Izhikevich one, which is computationally much lighter. The comparison between neural lattices built with the two neuron models gave comparable results, both without stimulation and in the presence of all the stimulation protocols. This is a first result toward the study and simulation of the large-scale neural networks involved in pathological dynamics. Using the reduced-order model, we also inspected the activity of two neural lattices, with the aim of analyzing how stimulation in one area affects the dynamics in another area, as usual medical treatment protocols require. The study of population dynamics allowed us to investigate, through simulations, the positive effects of the stimulation signals in terms of desynchronization of the neural dynamics. The results obtained constitute a significant added value to the analysis of synchronization and desynchronization effects due to neural stimulation. This work gives the opportunity to study more efficiently the effect of stimulation in large-scale yet computationally efficient neural networks. Results were compared both between the two mathematical models, using Morris-Lecar and Izhikevich neurons, and with simulated local field potentials (LFPs).
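A minimal sketch of the reduced-order Izhikevich model mentioned above, with standard regular-spiking parameters; the input currents, duration, and step size are illustrative choices, not values taken from the paper:

```python
# Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u);
# a spike is registered when v >= 30 mV, then v <- c and u <- u + d.
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, steps=1000, dt=0.5):
    v, u = -65.0, b * -65.0          # start at the resting point
    spikes = []
    for t in range(steps):           # forward-Euler integration
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(t * dt)    # spike time in ms
            v, u = c, u + d
    return spikes

quiet = izhikevich(I=0.0)    # no input current: the neuron stays silent
driven = izhikevich(I=10.0)  # tonic input: regular spiking
print(len(quiet), len(driven))
```

Lattices of such units coupled through synaptic currents are far cheaper to simulate than Morris-Lecar networks, which is the computational argument made in the abstract.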

  2. Moving to higher ground: The dynamic field theory and the dynamics of visual cognition

    PubMed Central

    Johnson, Jeffrey S.; Spencer, John P.; Schöner, Gregor

    2009-01-01

    In the present report, we describe a new dynamic field theory that captures the dynamics of visuo-spatial cognition. This theory grew out of the dynamic systems approach to motor control and development, and is grounded in neural principles. The initial application of dynamic field theory to issues in visuo-spatial cognition extended concepts of the motor approach to decision making in a sensori-motor context, and, more recently, to the dynamics of spatial cognition. Here we extend these concepts still further to address topics in visual cognition, including visual working memory for non-spatial object properties, the processes that underlie change detection, and the ‘binding problem’ in vision. In each case, we demonstrate that the general principles of the dynamic field approach can unify findings in the literature and generate novel predictions. We contend that the application of these concepts to visual cognition avoids the pitfalls of reductionist approaches in cognitive science, and points toward a formal integration of brains, bodies, and behavior. PMID:19173013

  3. Synchronization and long-time memory in neural networks with inhibitory hubs and synaptic plasticity

    NASA Astrophysics Data System (ADS)

    Bertolotti, Elena; Burioni, Raffaella; di Volo, Matteo; Vezzani, Alessandro

    2017-01-01

    We investigate the dynamical role of inhibitory and highly connected nodes (hubs) in the synchronization and input processing of leaky integrate-and-fire neural networks with short-term synaptic plasticity. We take advantage of a heterogeneous mean-field approximation to encode the role of network structure, and we tune the fraction of inhibitory neurons fI and their connectivity level to investigate the cooperation between hub features and inhibition. We show that, depending on fI, highly connected inhibitory nodes strongly drive the synchronization properties of the overall network through dynamical transitions from synchronous to asynchronous regimes. Furthermore, a metastable regime with long memory of external inputs emerges for a specific fraction of hub inhibitory neurons, underlining the role of inhibition and connectivity also for input processing in neural networks.

  4. Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches

    PubMed Central

    Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang

    2016-01-01

    Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312

  5. Nanoscale live cell optical imaging of the dynamics of intracellular microvesicles in neural cells.

    PubMed

    Lee, Sohee; Heo, Chaejeong; Suh, Minah; Lee, Young Hee

    2013-11-01

    Recent advances in biotechnology and imaging technology have provided great opportunities to investigate cellular dynamics. Conventional imaging methods such as transmission electron microscopy, scanning electron microscopy, and atomic force microscopy are powerful techniques for cellular imaging, even at the nanoscale level. However, these techniques have limited applicability to live-cell imaging because of the experimental preparation required, namely cell fixation, and their innately small field of view. In this study, we developed a nanoscale optical imaging (NOI) system that combines a conventional optical microscope with a high-resolution dark-field condenser (Cytoviva, Inc.) and a halogen illuminator. The NOI system's maximum resolution for live cell imaging is around 100 nm. We used NOI to investigate the dynamics of intracellular microvesicles of neural cells without immunocytological analysis. In particular, we studied direct, active random, and moderate random dynamic motions of intracellular microvesicles and visualized lysosomal vesicle changes after treatment of cells with a lysosomal inhibitor (NH4Cl). Our results indicate that the NOI system is a feasible high-resolution optical imaging system for small organelles in live cells that does not require complicated optics or immunocytological staining.

  6. Neural mechanisms of the mind, Aristotle, Zadeh, and fMRI.

    PubMed

    Perlovsky, Leonid I

    2010-05-01

    Processes in the mind: perception, cognition, concepts, instincts, emotions, and the higher cognitive abilities for abstract thinking and for appreciating beautiful music are considered here within a neural modeling fields (NMF) paradigm. Its fundamental mathematical mechanism is a process "from vague-fuzzy to crisp," called dynamic logic (DL). This paper discusses why this paradigm is necessary mathematically and relates it to a psychological description of the mind. Surprisingly, the process from "vague to crisp" corresponds to the Aristotelian understanding of mental functioning. Recent functional magnetic resonance imaging (fMRI) measurements have confirmed this process in the neural mechanisms of perception.

  7. An introduction to neural networks surgery, a field of neuromodulation which is based on advances in neural networks science and digitised brain imaging.

    PubMed

    Sakas, D E; Panourias, I G; Simpson, B A

    2007-01-01

    Operative Neuromodulation is the field of altering electrically or chemically the signal transmission in the nervous system by implanted devices in order to excite, inhibit or tune the activities of neurons or neural networks and produce therapeutic effects. The present article reviews relevant literature on procedures or devices applied either in contact with the cerebral cortex or cranial nerves or in deep sites inside the brain in order to treat various refractory neurological conditions such as: a) chronic pain (facial, somatic, deafferentation, phantom limb), b) movement disorders (Parkinson's disease, dystonia, Tourette syndrome), c) epilepsy, d) psychiatric disease, e) hearing deficits, and f) visual loss. These data indicate that in operative neuromodulation, a new field emerges that is based on neural networks research and on advances in digitised stereometric brain imaging which allow precise localisation of cerebral neural networks and their relay stations; this field can be described as Neural networks surgery because it aims to act extrinsically or intrinsically on neural networks and to alter therapeutically the neural signal transmission with the use of implantable electrical or electronic devices. The authors also review neurotechnology literature relevant to neuroengineering, nanotechnologies, brain computer interfaces, hybrid cultured probes, neuromimetics, neuroinformatics, neurocomputation, and computational neuromodulation; the latter field is dedicated to the study of the biophysical and mathematical characteristics of electrochemical neuromodulation. The article also brings forward particularly interesting lines of research such as the carbon nanofibers electrode arrays for simultaneous electrochemical recording and stimulation, closed-loop systems for responsive neuromodulation, and the intracortical electrodes for restoring hearing or vision. 
The present review of cerebral neuromodulatory procedures highlights the transition from the conventional neurosurgery of resective or ablative techniques to a highly selective "surgery of networks". The dynamics of the convergence of the above biomedical and technological fields with biological restorative approaches have important implications for patients with severe neurological disorders.

  8. Dynamic Alignment Models for Neural Coding

    PubMed Central

    Kollmorgen, Sepp; Hahnloser, Richard H. R.

    2014-01-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448
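The forward algorithm at the heart of any such hidden Markov machinery can be sketched on a toy two-state chain. The sticky states loosely stand in for alternative stimulus-response regimes; all probabilities below are invented for illustration, and the full MPH pair structure is not reproduced:

```python
# Forward algorithm: probability of an observation sequence under a two-state HMM.
def forward(obs, pi, A, B):
    """obs: symbol indices; pi: initial probs; A: transitions; B: emissions."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(len(pi))) * B[s][o]
                 for s in range(len(pi))]
    return sum(alpha)

pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]          # sticky regimes: states tend to persist
B = [[0.8, 0.2], [0.2, 0.8]]          # state 0 favors symbol 0, state 1 favors 1
p_consistent = forward([0, 0, 0, 0], pi, A, B)
p_alternating = forward([0, 1, 0, 1], pi, A, B)
print(p_consistent, p_alternating)
```

Because the regimes are sticky, a run of identical symbols is far more probable than a rapidly alternating one, which is the property that lets state sequences capture slowly varying response latencies and context.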

  9. Parametric models to relate spike train and LFP dynamics with neural information processing.

    PubMed

    Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task- or stimulus-specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework to decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set for which significant correlations had previously been obtained only through trial averaging. 
We also found that unified models extracted a stronger relationship between neural response latency and trial-by-trial behavioral performance than existing models of neural information processing. Our results highlight the utility of the unified modeling framework for characterizing spike-LFP recordings obtained during behavioral performance.
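The two ingredients named above, a stimulus-locked drive plus ongoing background activity, can be sketched as an inhomogeneous Poisson rate. The rates, response latency, and widths below are invented for illustration and are not the paper's parameterization:

```python
import math, random

random.seed(0)

def rate(t, onset, background=5.0, gain=40.0, width=0.05):
    # background firing plus a stimulus-locked response 100 ms after onset
    return background + gain * math.exp(-((t - onset - 0.1) ** 2) / (2 * width ** 2))

def sample_spikes(onset, dt=0.001, T=0.5):
    # Bernoulli approximation of an inhomogeneous Poisson process (rate*dt << 1)
    return [i * dt for i in range(int(T / dt))
            if random.random() < rate(i * dt, onset) * dt]

trials = [sample_spikes(onset=0.1) for _ in range(200)]
early = sum(1 for tr in trials for s in tr if s < 0.1)          # pre-onset: background only
late = sum(1 for tr in trials for s in tr if 0.15 <= s < 0.25)  # around the response peak
print(early, late)
```

Fitting the onset parameter of such a rate model to each trial is, in miniature, what decoding trial-by-trial neural latencies amounts to.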

  10. The Emergent Executive: A Dynamic Field Theory of the Development of Executive Function

    PubMed Central

    Buss, Aaron T.; Spencer, John P.

    2015-01-01

    A dynamic neural field (DNF) model is presented which provides a process-based account of behavior and developmental change in a key task used to probe the early development of executive function—the Dimensional Change Card Sort (DCCS) task. In the DCCS, children must flexibly switch from sorting cards either by shape or color to sorting by the other dimension. Typically, 3-year-olds, but not 4-year-olds, lack the flexibility to do so and perseverate on the first set of rules when instructed to switch. In the DNF model, rule-use and behavioral flexibility come about through a form of dimensional attention which modulates activity within different cortical fields tuned to specific feature dimensions. In particular, we capture developmental change by increasing the strength of excitatory and inhibitory neural interactions in the dimensional attention system as well as refining the connectivity between this system and the feature-specific cortical fields. Note that although this enables the model to effectively switch tasks, the dimensional attention system does not ‘know’ the details of task-specific performance. Rather, correct performance emerges as a property of system-wide neural interactions. We show how this captures children's behavior in quantitative detail across 12 versions of the DCCS task. Moreover, we successfully test a set of novel predictions with 3-year-old children from a version of the task not explained by other theories. PMID:24818836

  11. State-space receptive fields of semicircular canal afferent neurons in the bullfrog

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.

    2001-01-01

    Receptive fields are commonly used to describe spatial characteristics of sensory neuron responses. They can be extended to characterize temporal or dynamical aspects by mapping neural responses in dynamical state spaces. The state-space receptive field of a neuron is the probability distribution of the dynamical state of the stimulus-generating system conditioned upon the occurrence of a spike. We have computed state-space receptive fields for semicircular canal afferent neurons in the bullfrog (Rana catesbeiana). We recorded spike times during broad-band Gaussian noise rotational velocity stimuli, computed the frequency distribution of head states at spike times, and normalized these to obtain conditional pdfs for the state. These state-space receptive fields quantify what the brain can deduce about the dynamical state of the head when a single spike arrives from the periphery. ©2001 Elsevier Science B.V. All rights reserved.
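The construction described above, a spike-conditioned distribution over the stimulus state, can be sketched for a one-dimensional state. The smooth random velocity process and the spiking nonlinearity below are hypothetical stand-ins for head rotation and an afferent neuron:

```python
import math, random

random.seed(2)
T = 20000
velocity, v = [], 0.0
for _ in range(T):                 # smooth (AR(1)) random rotational-velocity stimulus
    v = 0.98 * v + random.gauss(0.0, 0.3)
    velocity.append(v)

# hypothetical afferent: spike probability increases with positive velocity
spikes = [t for t in range(T)
          if random.random() < 0.2 / (1.0 + math.exp(-3.0 * velocity[t]))]

all_mean = sum(velocity) / T                           # stimulus ensemble mean
spike_mean = sum(velocity[t] for t in spikes) / len(spikes)
print(all_mean, spike_mean)
```

Normalizing the histogram of velocity values at spike times would give the conditional pdf itself; the shift of spike_mean away from all_mean is the simplest signature of the state-space receptive field.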

  12. Normalized value coding explains dynamic adaptation in the human valuation process.

    PubMed

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
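The divisive-normalization account can be written in one line; the functional form is standard, but the constants and item values here are a generic illustration, not the fitted model from the study:

```python
# Divisive normalization: the value signal for the current item is divided
# by a semisaturation constant plus the average of recently observed values.
def normalized_value(current, recent, sigma=1.0, w=1.0):
    return current / (sigma + w * sum(recent) / len(recent))

low_context = normalized_value(10.0, recent=[2.0, 3.0, 2.5])
high_context = normalized_value(10.0, recent=[20.0, 30.0, 25.0])
print(low_context, high_context)  # the same item is valued less after high-value history
```

This inverse dependence on recent value history is exactly the behavioral adaptation the abstract reports.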

  13. A stochastic-field description of finite-size spiking neural networks

    PubMed Central

    Longtin, André

    2017-01-01

    Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity—the density of active neurons per unit time—is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics. PMID:28787447

  14. Suppression of anomalous synchronization and nonstationary behavior of neural network under small-world topology

    NASA Astrophysics Data System (ADS)

    Boaretto, B. R. R.; Budzinski, R. C.; Prado, T. L.; Kurths, J.; Lopes, S. R.

    2018-05-01

    It is known that neural networks under small-world topology can present anomalous synchronization and nonstationary behavior in weak coupling regimes. Here, we propose methods to suppress the anomalous synchronization and to diminish the nonstationary behavior occurring in weakly coupled neural networks under small-world topology. We consider a network of 2000 thermally sensitive identical neurons, based on the Hodgkin-Huxley model, in a small-world topology with the probability of adding a non-local connection equal to p = 0.001. Following experimental protocols to suppress anomalous synchronization, as well as nonstationary behavior of the neural network dynamics, we make use of (i) an external stimulus (pulsed current); (ii) changes in biological parameters (neuron membrane conductance); and (iii) body temperature changes. Phase synchronization is quantified with Kuramoto's order parameter, while recurrence quantification analysis, particularly the determinism, computed over the easily accessible mean field of the network, the local field potential (LFP), is used to evaluate nonstationary states. We show that the proposed methods can control the anomalous synchronization and nonstationarity occurring for weak coupling parameters without any effect on the individual neuron dynamics, nor on the expected asymptotic synchronized states occurring for large values of the coupling parameter.
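Kuramoto's order parameter used above to quantify phase synchronization is simple to compute; the two synthetic phase populations below are illustrative stand-ins for synchronized and incoherent network states:

```python
import math, random

random.seed(3)
# Kuramoto order parameter r = |(1/N) sum_k exp(i theta_k)|:
# r -> 1 for a phase-synchronized population, r -> 0 for an incoherent one.
def order_parameter(phases):
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

n = 2000
sync = [0.5 + random.gauss(0.0, 0.1) for _ in range(n)]         # tight phase cluster
incoherent = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
print(order_parameter(sync), order_parameter(incoherent))
```

Tracking r over time while applying a pulsed current or a conductance change is, in outline, how the suppression protocols in the abstract would be evaluated.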

  15. Real-time emulation of neural images in the outer retinal circuit.

    PubMed

    Hasegawa, Jun; Yagi, Tetsuya

    2008-12-01

    We describe a novel real-time system that emulates the architecture and functionality of the vertebrate retina. This system reconstructs the neural images formed by the retinal neurons in real time using a combination of analog and digital systems consisting of a neuromorphic silicon retina chip, a field-programmable gate array, and a digital computer. While the silicon retina carries out the spatial filtering of input images instantaneously, using embedded resistive networks that emulate the receptive field structure of the outer retinal neurons, the digital computer carries out the temporal filtering of the spatially filtered images to emulate the dynamical properties of the outer retinal circuits. The emulation of neural images comprising 128 x 128 bipolar cells is carried out at a frame rate of 62.5 Hz. The emulated responses to the Hermann grid, to a spot of light, and to an annulus of light agree with previous physiological and psychophysical observations. Furthermore, the emulated dynamics of neural images in response to natural scenes revealed the complex nature of retinal neuron activity. We conclude that the system reflects the spatiotemporal responses of bipolar cells in the vertebrate retina. The proposed emulation system is expected to aid in understanding visual computation in the retina and the brain.

  16. Dynamical systems, attractors, and neural circuits.

    PubMed

    Miller, Paul

    2016-01-01

    Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.

  17. Neural Architectures for Control

    NASA Technical Reports Server (NTRS)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the Sun workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on a MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
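A minimal one-dimensional CMAC in the spirit described above: several overlapping tilings quantize the input, and the output is the sum of one trained weight per tiling. The tiling counts, learning rate, and the x^2 target are arbitrary illustrative choices:

```python
import random

random.seed(5)

class CMAC:
    def __init__(self, n_tilings=8, n_bins=16, lo=0.0, hi=1.0):
        self.n_tilings, self.n_bins, self.lo, self.hi = n_tilings, n_bins, lo, hi
        self.w = [[0.0] * (n_bins + 1) for _ in range(n_tilings)]

    def _cells(self, x):
        span = (self.hi - self.lo) / self.n_bins
        for t in range(self.n_tilings):
            offset = t * span / self.n_tilings      # shifted tilings overlap
            yield t, int((x - self.lo + offset) / span)

    def predict(self, x):
        return sum(self.w[t][c] for t, c in self._cells(x))

    def train(self, x, target, alpha=0.3):          # LMS update, spread over tilings
        err = target - self.predict(x)
        for t, c in self._cells(x):
            self.w[t][c] += alpha * err / self.n_tilings

net = CMAC()
for _ in range(2000):                # learn y = x^2 online from random samples
    x = random.random()
    net.train(x, x * x)
err = max(abs(net.predict(k / 100) - (k / 100) ** 2) for k in range(100))
print(err)
```

Because each update touches only a handful of weights, training is cheap enough to run on-line, which is the property that made real-time learning on 1991-era hardware feasible.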

  18. Embedding recurrent neural networks into predator-prey models.

    PubMed

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.

  19. Perceptual suppression revealed by adaptive multi-scale entropy analysis of local field potential in monkey visual cortex.

    PubMed

    Hu, Meng; Liang, Hualou

    2013-04-01

    Generalized flash suppression (GFS), in which a salient visual stimulus can be rendered invisible despite continuous retinal input, provides a rare opportunity to directly study the neural mechanism of visual perception. Previous work based on linear methods, such as spectral analysis, of the local field potential (LFP) during GFS has shown that LFP power in distinctive frequency bands is differentially modulated by perceptual suppression. Yet the linear method alone may be insufficient for a full assessment of the neural dynamics, owing to the fundamentally nonlinear nature of neural signals. In this study, we set out to analyze the LFP data collected from multiple visual areas in V1, V2 and V4 of macaque monkeys performing the GFS task using a nonlinear method, adaptive multi-scale entropy (AME), to reveal the neural dynamics of perceptual suppression. In addition, we propose a new cross-entropy measure at multiple scales, namely adaptive multi-scale cross-entropy (AMCE), to assess the nonlinear functional connectivity between two cortical areas. We show that: (1) multi-scale entropy exhibits percept-related changes in all three areas, with higher entropy observed during perceptual suppression; (2) the magnitude of the perception-related entropy changes increases systematically over successive hierarchical stages (i.e., from lower areas V1 to V2, up to higher area V4); and (3) cross-entropy between any two cortical areas reveals a higher degree of asynchrony or dissimilarity during perceptual suppression, indicating decreased functional connectivity between cortical areas. These results, taken together, suggest that perceptual suppression is related to reduced functional connectivity and increased uncertainty of neural responses, and that the modulation of perceptual suppression is more effective at higher visual cortical areas. AME is demonstrated to be a useful technique for revealing the underlying dynamics of nonlinear/nonstationary neural signals.
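Classical multi-scale entropy, of which the adaptive variant (AME) used in the paper is a refinement, coarse-grains the signal and computes sample entropy at each scale. This sketch uses the standard fixed coarse-graining, not the adaptive data-driven decomposition, and a synthetic white-noise signal:

```python
import math, random

random.seed(4)

def coarse_grain(x, scale):
    # non-overlapping averages of length `scale`
    return [sum(x[i:i + scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B): A, B = counts of template matches of length m+1 and m,
    # with tolerance r times the standard deviation of the series.
    mean = sum(x) / len(x)
    tol = r * (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
    def count(mm):
        tmpl = [x[i:i + mm] for i in range(len(x) - mm)]
        c = 0
        for i in range(len(tmpl)):
            for j in range(i + 1, len(tmpl)):
                if max(abs(a - b) for a, b in zip(tmpl[i], tmpl[j])) <= tol:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

white = [random.gauss(0, 1) for _ in range(600)]
ents = [sample_entropy(coarse_grain(white, s)) for s in (1, 2, 4)]
print([round(e, 2) for e in ents])
```

Comparing such entropy curves across scales between suppressed and visible conditions is, in outline, the analysis the abstract performs on cortical LFPs.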

  20. Dynamic functional connectivity: Promise, issues, and interpretations

    PubMed Central

    Hutchison, R. Matthew; Womelsdorf, Thilo; Allen, Elena A.; Bandettini, Peter A.; Calhoun, Vince D.; Corbetta, Maurizio; Penna, Stefania Della; Duyn, Jeff H.; Glover, Gary H.; Gonzalez-Castillo, Javier; Handwerker, Daniel A.; Keilholz, Shella; Kiviniemi, Vesa; Leopold, David A.; de Pasquale, Francesco; Sporns, Olaf; Walter, Martin; Chang, Catie

    2013-01-01

    The brain must dynamically integrate, coordinate, and respond to internal and external stimuli across multiple time scales. Non-invasive measurements of brain activity with fMRI have greatly advanced our understanding of the large-scale functional organization supporting these fundamental features of brain function. Conclusions from previous resting-state fMRI investigations were based upon static descriptions of functional connectivity (FC), and only recently have studies begun to capitalize on the wealth of information contained within the temporal features of spontaneous BOLD FC. Emerging evidence suggests that dynamic FC metrics may index changes in macroscopic neural activity patterns underlying critical aspects of cognition and behavior, though limitations with regard to analysis and interpretation remain. Here, we review recent findings, methodological considerations, neural and behavioral correlates, and future directions in the emerging field of dynamic FC investigations. PMID:23707587
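
    One of the simplest dynamic FC metrics discussed in this literature is sliding-window correlation between two regional time series. A minimal sketch (window and step sizes, in samples, are illustrative assumptions; the field uses many variants, and windowing choices are among the interpretation issues the review discusses):

```python
import numpy as np

def sliding_window_fc(ts_a, ts_b, win=30, step=5):
    """Pearson correlation between two time series within successive
    (overlapping) windows; returns one FC value per window."""
    fc = []
    for start in range(0, len(ts_a) - win + 1, step):
        a = ts_a[start:start + win]
        b = ts_b[start:start + win]
        fc.append(np.corrcoef(a, b)[0, 1])
    return np.array(fc)
```

    A time-varying coupling (e.g. two regions correlated early in a scan and anticorrelated later) shows up as a sign change across the window series, which a single static FC value would average away.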

  1. Learning to recognize objects on the fly: a neurally based dynamic field approach.

    PubMed

    Faubel, Christian; Schöner, Gregor

    2008-05-01

    Autonomous robots interacting with human users need to build and continuously update scene representations. This entails the problem of rapidly learning to recognize new objects under user guidance. Based on analogies with human visual working memory, we propose a dynamical field architecture, in which localized peaks of activation represent objects over a small number of simple feature dimensions. Learning consists of laying down memory traces of such peaks. We implement the dynamical field model on a service robot and demonstrate how it learns 30 objects from a very small number of views (about 5 per object are sufficient). We also illustrate how properties of feature binding emerge from this framework.
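
    The localized activation peaks described above arise in classical Amari-style dynamic neural fields. The following is a minimal one-dimensional sketch (kernel shape, resting level, and all constants are illustrative assumptions, not the parameters of the robot implementation): localized input drives a self-stabilizing peak through local excitation and broader inhibition.

```python
import numpy as np

def simulate_field(stimulus, steps=400, dt=0.1, tau=1.0, h=-2.0):
    """Euler integration of a 1-D Amari field on a ring:
    tau * du/dt = -u + h + s(x) + sum_j w(x - x_j) * f(u_j),
    with f a hard threshold and w = local excitation minus global inhibition."""
    n = len(stimulus)
    idx = np.arange(n)
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                      n - np.abs(idx[:, None] - idx[None, :]))
    w = 0.5 * np.exp(-dist**2 / (2 * 4.0**2)) - 0.1
    u = np.full(n, h, dtype=float)
    for _ in range(steps):
        f = (u > 0).astype(float)                 # thresholded field output
        u += dt / tau * (-u + h + stimulus + w @ f)
    return u
```

    With a localized input bump, the field settles into a single self-stabilized peak at the stimulated location; peaks of this kind are the representational units on which the memory-trace learning described above operates.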

  2. Endocytotic potential governs magnetic particle loading in dividing neural cells: studying modes of particle inheritance

    PubMed Central

    Tickle, Jacqueline A; Jenkins, Stuart I; Polyak, Boris; Pickard, Mark R; Chari, Divya M

    2016-01-01

    Aim: To achieve high and sustained magnetic particle loading in a proliferative and endocytotically active neural transplant population (astrocytes) through tailored magnetite content in polymeric iron oxide particles. Materials & methods: MPs of varying magnetite content were applied to primary-derived rat cortical astrocytes ± static/oscillating magnetic fields to assess labeling efficiency and safety. Results: Higher magnetite content particles display high but safe accumulation in astrocytes, with longer-term label retention versus lower/no magnetite content particles. Magnetic fields enhanced loading extent. Dynamic live cell imaging of dividing labeled astrocytes demonstrated that particle distribution into daughter cells is predominantly ‘asymmetric’. Conclusion: These findings could inform protocols to achieve efficient MP loading into neural transplant cells, with significant implications for post-transplantation tracking/localization. PMID:26785794

  3. On DSS Implementation in the Dynamic Model of the Digital Oil field

    NASA Astrophysics Data System (ADS)

    Korovin, Iakov S.; Khisamutdinov, Maksim V.; Kalyaev, Anatoly I.

    2018-02-01

    Decision support systems (DSS), especially based on the artificial intelligence (AI) techniques are been widely applied in different domains nowadays. In the paper we depict an approach of implementing DSS in to Digital Oil Field (DOF) dynamic model structure in order to reduce the human factor influence, considering the automation of all production processes to be the DOF model clue element. As the basic tool of data handling we propose the hybrid application on artificial neural networks and evolutional algorithms.

  4. The TensorMol-0.1 model chemistry: a neural network augmented with long-range physics.

    PubMed

    Yao, Kun; Herr, John E; Toth, David W; Mckintyre, Ryker; Parkhill, John

    2018-02-28

    Traditional force fields cannot model chemical reactivity and suffer from low generality without re-fitting. Neural network potentials promise to address these problems, offering energies and forces with near ab initio accuracy at low cost. However, a data-driven approach is naturally inefficient for long-range interatomic forces that have simple physical formulas. In this manuscript we construct a hybrid model chemistry consisting of a nearsighted neural network potential with screened long-range electrostatic and van der Waals physics. This trained potential, simply dubbed "TensorMol-0.1", is offered in an open-source Python package capable of many of the simulation types commonly used to study chemistry: geometry optimizations, harmonic spectra, open or periodic molecular dynamics, Monte Carlo, and nudged elastic band calculations. We describe the robustness and speed of the package, demonstrating its millihartree accuracy and scalability to tens of thousands of atoms on ordinary laptops. We demonstrate the performance of the model by reproducing vibrational spectra and simulating the molecular dynamics of a protein. Our comparisons with electronic structure theory and experimental data demonstrate that neural network molecular dynamics is poised to become an important tool for molecular simulation, lowering the resource barrier to simulating chemistry.

  5. How infants' reaches reveal principles of sensorimotor decision making

    NASA Astrophysics Data System (ADS)

    Dineva, Evelina; Schöner, Gregor

    2018-01-01

    In Piaget's classical A-not-B task, infants repeatedly make a sensorimotor decision to reach to one of two cued targets. Perseverative errors are induced by switching the cue from A to B, while spontaneous errors are unsolicited reaches to B when only A is cued. We argue that theoretical accounts of sensorimotor decision-making fail to address how motor decisions leave a memory trace that may impact future sensorimotor decisions. Instead, in extant neural models, perseveration is caused solely by the history of stimulation. We present a neural dynamic model of sensorimotor decision-making within the framework of Dynamic Field Theory, in which a dynamic instability amplifies fluctuations in neural activation into macroscopic, stable neural activation states that leave memory traces. The model predicts perseveration, but also a tendency to repeat spontaneous errors. To test the account, we pool data from several A-not-B experiments. A conditional-probabilities analysis quantitatively accounts for how motor decisions depend on the history of reaching. The results provide evidence for the interdependence among successive reaching decisions that is explained by the model, showing that by amplifying small differences in activation and affecting learning, decisions have consequences beyond the individual behavioural act.
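
    The pooled conditional-probabilities analysis can be sketched as simple pair counting over coded reach sequences. The 'A'/'B' coding and the helper below are illustrative assumptions, not the authors' actual pipeline:

```python
from collections import Counter

def conditional_repeat_probabilities(sequences):
    """Estimate P(reach to B at trial t | reach at trial t-1), pooled over
    reach sequences coded as strings of 'A' and 'B'."""
    counts = Counter()
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):   # successive trial pairs
            counts[(prev, cur)] += 1
    probs = {}
    for prev in 'AB':
        total = counts[(prev, 'A')] + counts[(prev, 'B')]
        if total:
            probs[prev] = counts[(prev, 'B')] / total
    return probs
```

    If P(B | previous B) exceeds P(B | previous A), reaches to B tend to repeat, which is the kind of interdependence among successive decisions the model predicts.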

  6. Force adaptation transfers to untrained workspace regions in children: evidence for developing inverse dynamic motor models.

    PubMed

    Jansen-Osmann, Petra; Richter, Stefanie; Konczak, Jürgen; Kalveram, Karl-Theodor

    2002-03-01

    When humans perform goal-directed arm movements under the influence of an external damping force, they learn to adapt to these external dynamics. After removal of the external force field, they reveal kinematic aftereffects that are indicative of a neural controller that still compensates the no longer existing force. Such behavior suggests that the adult human nervous system uses a neural representation of inverse arm dynamics to control upper-extremity motion. Central to the notion of an inverse dynamic model (IDM) is that learning generalizes. Consequently, aftereffects should be observable even in untrained workspace regions. Adults have shown such behavior, but the ontogenetic development of this process remains unclear. This study examines the adaptive behavior of children and investigates whether learning a force field in one hemifield of the right arm workspace has an effect on force adaptation in the other hemifield. Thirty children (aged 6-10 years) and ten adults performed 30 degrees elbow flexion movements under two conditions of external damping (negative and null). We found that learning to compensate an external damping force transferred to the opposite hemifield, which indicates that a model of the limb dynamics rather than an association of visited space and experienced force was acquired. Aftereffects were more pronounced in the younger children and readaptation to a null-force condition was prolonged. This finding is consistent with the view that IDMs in children are imprecise neural representations of the actual arm dynamics. It indicates that the acquisition of IDMs is a developmental achievement and that the human motor system is inherently flexible enough to adapt to any novel force within the limits of the organism's biomechanics.

  7. Neural adaptation accounts for the dynamic resizing of peripersonal space: evidence from a psychophysical-computational approach.

    PubMed

    Noel, Jean-Paul; Blanke, Olaf; Magosso, Elisa; Serino, Andrea

    2018-06-01

    Interactions between the body and the environment occur within the peripersonal space (PPS), the space immediately surrounding the body. The PPS is encoded by multisensory (audio-tactile, visual-tactile) neurons that possess receptive fields (RFs) anchored on the body and restricted in depth. The extension in depth of PPS neurons' RFs has been documented to change dynamically as a function of the velocity of incoming stimuli, but the underlying neural mechanisms are still unknown. Here, by integrating a psychophysical approach with neural network modeling, we propose a mechanistic explanation behind this inherent dynamic property of PPS. We psychophysically mapped the size of participant's peri-face and peri-trunk space as a function of the velocity of task-irrelevant approaching auditory stimuli. Findings indicated that the peri-trunk space was larger than the peri-face space, and, importantly, as for the neurophysiological delineation of RFs, both of these representations enlarged as the velocity of incoming sound increased. We propose a neural network model to mechanistically interpret these findings: the network includes reciprocal connections between unisensory areas and higher order multisensory neurons, and it implements neural adaptation to persistent stimulation as a mechanism sensitive to stimulus velocity. The network was capable of replicating the behavioral observations of PPS size remapping and relates behavioral proxies of PPS size to neurophysiological measures of multisensory neurons' RF size. We propose that a biologically plausible neural adaptation mechanism embedded within the network encoding for PPS can be responsible for the dynamic alterations in PPS size as a function of the velocity of incoming stimuli. NEW & NOTEWORTHY Interactions between body and environment occur within the peripersonal space (PPS). PPS neurons are highly dynamic, adapting online as a function of body-object interactions. 
The mechanisms underpinning PPS dynamic properties remain unexplained. We demonstrate with a psychophysical approach that the PPS enlarges as incoming stimulus velocity increases, efficiently preventing contact with faster approaching objects. We present a neurocomputational model of multisensory PPS that implements neural adaptation to persistent stimulation and propose a neurophysiological mechanism underlying this effect.

  8. Study of parameter identification using hybrid neural-genetic algorithm in electro-hydraulic servo system

    NASA Astrophysics Data System (ADS)

    Moon, Byung-Young

    2005-12-01

    The hybrid neural-genetic multi-model parameter estimation algorithm was demonstrated. This method can be applied to structured system identification of electro-hydraulic servo systems. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm: the ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces the next generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and experiments were carried out to exercise the hybrid neural-genetic multi-model parameter estimation algorithm. As a result, the dynamic characteristics were obtained, namely the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.
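
    A toy sketch of the genetic-algorithm half of such a parameter-estimation scheme: evolving (mass, damping, stiffness) of a second-order system to minimize total squared error against a measured step response. All operators and constants are illustrative assumptions, and the paper's ICRA network is replaced here by a plain fitness evaluation:

```python
import numpy as np

def simulate(params, t, dt):
    """Semi-implicit Euler step response of m*x'' + c*x' + k*x = 1."""
    m, c, k = params
    x, v = 0.0, 0.0
    out = []
    for _ in t:
        v += dt * (1.0 - c * v - k * x) / m
        x += dt * v
        out.append(x)
    return np.array(out)

def ga_estimate(target, t, dt, pop_size=40, gens=50, seed=0):
    """Evolve (m, c, k) to minimize total squared error vs. `target`."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform([0.1, 0.1, 0.5], [5.0, 5.0, 10.0], size=(pop_size, 3))
    for _ in range(gens):
        errs = np.array([np.sum((simulate(p, t, dt) - target) ** 2) for p in pop])
        elite = pop[np.argsort(errs)[:pop_size // 4]]       # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        mask = rng.random((pop_size, 3)) < 0.5              # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(0.0, 0.05, pop.shape) * pop       # mutation (5%)
        pop = np.clip(pop, 0.05, 20.0)
        pop[0] = elite[0]                                   # elitism
    errs = np.array([np.sum((simulate(p, t, dt) - target) ** 2) for p in pop])
    return pop[np.argmin(errs)]
```

    The fitness here is exactly the total squared error criterion named in the abstract; the evolved parameter vector is the identified model.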

  9. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size.

    PubMed

    Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram

    2017-04-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
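
    The flavor of finite-size population equations can be illustrated with a toy rate model in which the realized activity is a binomial spike count over N neurons, so fluctuations shrink like 1/sqrt(N). The transfer function and constants are illustrative assumptions, not the mesoscopic theory derived in the paper:

```python
import numpy as np

def simulate_population(n_neurons, steps=2000, dt=0.001, tau=0.02, seed=0):
    """A filtered input h sets a sigmoidal population rate; the realized
    activity is a binomial spike count across N neurons, giving
    finite-size fluctuations of order 1/sqrt(N)."""
    rng = np.random.default_rng(seed)
    h = 1.2
    activity = np.empty(steps)
    for t in range(steps):
        rate = 50.0 / (1.0 + np.exp(-(h - 1.0)))       # population rate (Hz)
        spikes = rng.binomial(n_neurons, min(rate * dt, 1.0))
        a = spikes / (n_neurons * dt)                  # empirical activity (Hz)
        h += dt / tau * (-h + 1.2 + 0.02 * a)          # weak recurrent feedback
        activity[t] = a
    return activity
```

    A population of 50 neurons shows much larger activity fluctuations than one of 5000 at the same mean rate, which is the finite-size effect the mesoscopic equations capture without simulating individual neurons.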

  10. Pulse-coupled neural nets: translation, rotation, scale, distortion, and intensity signal invariance for images.

    PubMed

    Johnson, J L

    1994-09-10

    The linking-field neural network model of Eckhorn et al. [Neural Comput. 2, 293-307 (1990)] was introduced to explain the experimentally observed synchronous activity among neural assemblies in the cat cortex induced by feature-dependent visual activity. The model produces synchronous bursts of pulses from neurons with similar activity, effectively grouping them by phase and pulse frequency. It gives a basic new function: grouping by similarity. The synchronous bursts are obtained in the limit of strong linking strengths. The linking-field model in the limit of moderate-to-weak linking characterized by few if any multiple bursts is investigated. In this limit dynamic, locally periodic traveling waves exist whose time signal encodes the geometrical structure of a two-dimensional input image. The signal can be made insensitive to translation, scale, rotation, distortion, and intensity. The waves transmit information beyond the physical interconnect distance. The model is implemented in an optical hybrid demonstration system. Results of the simulations and the optical system are presented.
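
    A minimal sketch of a pulse-coupled (linking-field style) network in the spirit of the model described above: feeding and linking inputs leak exponentially and receive pulses from neighbors, linking multiplies feeding, and a neuron pulses when its internal activity exceeds a dynamic threshold that jumps after each pulse. The constants and the 3x3 coupling are illustrative assumptions:

```python
import numpy as np

def neighbor_sum(y):
    """Sum of the 8 neighbors of each pixel (zero padding at the border)."""
    p = np.pad(y, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] + p[1:-1, 2:] +
            p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])

def pcnn(stimulus, steps=30, beta=0.3, v_f=0.1, v_l=0.2, v_t=50.0,
         a_f=0.1, a_l=1.0, a_t=0.3):
    """Minimal pulse-coupled neural net: U = F * (1 + beta * L); a neuron
    pulses when U exceeds its threshold, which then jumps by v_t and decays.
    Returns the per-step firing counts (the 'time signal') and the mask of
    neurons that pulsed at least once."""
    s = np.asarray(stimulus, dtype=float)
    f = np.zeros_like(s)       # feeding input
    l = np.zeros_like(s)       # linking input
    theta = np.ones_like(s)    # dynamic threshold
    y = np.zeros_like(s)       # pulse output
    fired = np.zeros_like(s)
    signature = []
    for _ in range(steps):
        f = np.exp(-a_f) * f + v_f * neighbor_sum(y) + s
        l = np.exp(-a_l) * l + v_l * neighbor_sum(y)
        u = f * (1.0 + beta * l)
        y = (u > theta).astype(float)
        theta = np.exp(-a_t) * theta + v_t * y
        fired = np.maximum(fired, y)
        signature.append(float(y.sum()))
    return np.array(signature), fired
```

    Neurons driven by similar intensities pulse in synchronized bursts, and the sequence of per-step firing counts is a one-dimensional time signal of the kind used in the paper to encode image structure.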

  11. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    PubMed

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  12. A Constructive Mean-Field Analysis of Multi-Population Neural Networks with Random Synaptic Weights and Stochastic Inputs

    PubMed Central

    Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno

    2008-01-01

    We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631

  14. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
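
    For context, standard NEF decoders are usually obtained by regularized least squares over sampled tuning curves, which is the large matrix problem that the paper's analytically determined decoders avoid. A minimal sketch (the rectified-linear tuning-curve family and regularization constant are illustrative assumptions):

```python
import numpy as np

def nef_decoders(rates, targets, reg=0.01):
    """Regularized least-squares decoders: solve (A A^T + reg*S*I) d = A f,
    where A[i, j] is neuron i's firing rate at sample point j."""
    n_samples = rates.shape[1]
    gamma = rates @ rates.T + reg * n_samples * np.eye(rates.shape[0])
    upsilon = rates @ targets
    return np.linalg.solve(gamma, upsilon)

# usage: decode x itself from random rectified-linear tuning curves
rng = np.random.default_rng(1)
n, x = 40, np.linspace(-1, 1, 200)
encoders = rng.choice([-1.0, 1.0], n)
gains = rng.uniform(0.5, 2.0, n)
biases = rng.uniform(-1.0, 1.0, n)
rates = np.maximum(0.0, gains[:, None] * encoders[:, None] * x[None, :]
                   + biases[:, None])
d = nef_decoders(rates, x)
x_hat = rates.T @ d
```

    Connection weights then follow from the NEF outer-product formula (postsynaptic encoders times presynaptic decoders); the paper's contribution is to make the decoders analytic rather than the output of this optimization.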

  16. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous work, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. Under a biologically inspired condition, the neural dynamics and the equilibrium state in this model correspond to a Bayesian computation and to statistically optimal integration of multiple information sources, respectively. These results were obtained in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA), an adaptation that gradually inhibits neural activity when a sustained stimulus is applied, with the strength of this inhibition depending on neural activity. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input on the neural dynamics; (2) SFA allows the history of the external input to affect the neural dynamics; and (3) the equilibrium state still corresponds to statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
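
    A numerical sketch of a ring network with spike-frequency adaptation in the spirit of the model described above (rectified-linear rates, a Gaussian-minus-constant kernel, and all constants are illustrative assumptions, not the paper's analytical setting):

```python
import numpy as np

def ring_with_sfa(n=64, steps=1500, dt=0.01, tau_u=0.1, tau_a=1.0,
                  g_sfa=0.5, stim_angle=np.pi / 2):
    """Ring network with spike-frequency adaptation:
        tau_u du/dt = -u + W r - a + I(theta)
        tau_a da/dt = -a + g_sfa * r,   r = max(u, 0)
    W combines local excitation with broad inhibition."""
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    diff = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
    w = (2.0 * np.exp(-diff**2 / (2 * 0.5**2)) - 0.5) / n
    stim = 2.0 * np.exp(-np.angle(np.exp(1j * (theta - stim_angle)))**2 / 0.5)
    u = np.zeros(n)
    a = np.zeros(n)      # adaptation variable
    for _ in range(steps):
        r = np.maximum(u, 0.0)
        u += dt / tau_u * (-u + w @ r - a + stim)
        a += dt / tau_a * (-a + g_sfa * r)
    return theta, u, a
```

    Under sustained stimulation, an activity bump forms at the stimulated angle while the adaptation variable builds up selectively at the active neurons, gradually counteracting their drive; this activity-dependent inhibition is the mechanism whose dynamical and equilibrium consequences the paper analyzes.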

  17. An integrative approach for analyzing hundreds of neurons in task performing mice using wide-field calcium imaging.

    PubMed

    Mohammed, Ali I; Gritton, Howard J; Tseng, Hua-an; Bucklin, Mark E; Yao, Zhaojie; Han, Xue

    2016-02-08

    Advances in neurotechnology have been integral to the investigation of neural circuit function in systems neuroscience. Recent improvements in high-performance fluorescent sensors and scientific CMOS cameras enable optical imaging of neural networks at a much larger scale. While exciting technical advances demonstrate the potential of this technique, further improvements in data acquisition and analysis, especially those that allow effective processing of increasingly larger datasets, would greatly promote the application of optical imaging in systems neuroscience. Here we demonstrate the ability of wide-field imaging to capture the concurrent dynamic activity from hundreds to thousands of neurons over millimeters of brain tissue in behaving mice. This system allows the visualization of morphological details at a higher spatial resolution than has been previously achieved using similar functional imaging modalities. To analyze the expansive data sets, we developed software to facilitate rapid downstream data processing. Using this system, we show that a large fraction of anatomically distinct hippocampal neurons respond to discrete environmental stimuli associated with classical conditioning, and that the observed temporal dynamics of transient calcium signals are sufficient for exploring certain spatiotemporal features of large neural networks.

  18. Selective population rate coding: a possible computational role of gamma oscillations in selective attention.

    PubMed

    Masuda, Naoki

    2009-12-01

    Selective attention is often accompanied by gamma oscillations in local field potentials and spike field coherence in brain areas related to visual, motor, and cognitive information processing. Gamma oscillations are implicated to play an important role in, for example, visual tasks including object search, shape perception, and speed detection. However, the mechanism by which gamma oscillations enhance cognitive and behavioral performance of attentive subjects is still elusive. Using feedforward fan-in networks composed of spiking neurons, we examine a possible role for gamma oscillations in selective attention and population rate coding of external stimuli. We implement the concept proposed by Fries (2005) that under dynamic stimuli, neural populations effectively communicate with each other only when there is a good phase relationship among associated gamma oscillations. We show that the downstream neural population selects a specific dynamic stimulus received by an upstream population and represents it by population rate coding. The encoded stimulus is the one for which gamma rhythm in the corresponding upstream population is resonant with the downstream gamma rhythm. The proposed role for gamma oscillations in stimulus selection is to enable top-down control, a neural version of time division multiple access used in communication engineering.
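
    The phase-relationship idea (communication through coherence) reduces to a toy calculation: a gamma-modulated upstream rate multiplied by a gamma-modulated downstream gain transmits most when the two rhythms are phase-aligned. The frequency and sinusoidal waveforms below are illustrative assumptions:

```python
import numpy as np

def transmitted_power(phase_offset, f_gamma=40.0, t_max=1.0, dt=1e-4):
    """Mean drive received downstream when upstream firing is gamma-modulated
    and downstream excitability oscillates at the same frequency, shifted by
    phase_offset; analytically this is 1 + 0.5*cos(phase_offset)."""
    t = np.arange(0, t_max, dt)
    upstream = 1.0 + np.cos(2 * np.pi * f_gamma * t)                  # rate
    gain = 1.0 + np.cos(2 * np.pi * f_gamma * t + phase_offset)       # gain
    return float(np.mean(upstream * gain))
```

    An upstream population whose gamma rhythm is resonant (phase-aligned) with the downstream rhythm transmits three times the mean drive of an anti-phase competitor (1.5 vs. 0.5 here), which is the selection mechanism exploited in the paper.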

  19. Extending Gurwitsch's field theory of consciousness.

    PubMed

    Yoshimi, Jeff; Vinson, David W

    2015-07-01

    Aron Gurwitsch's theory of the structure and dynamics of consciousness has much to offer contemporary theorizing about consciousness and its basis in the embodied brain. On Gurwitsch's account, as we develop it, the field of consciousness has a variable sized focus or "theme" of attention surrounded by a structured periphery of inattentional contents. As the field evolves, its contents change their status, sometimes smoothly, sometimes abruptly. Inner thoughts, a sense of one's body, and the physical environment are dominant field contents. These ideas can be linked with (and help unify) contemporary theories about the neural correlates of consciousness, inattention, the small world structure of the brain, meta-stable dynamics, embodied cognition, and predictive coding in the brain. Published by Elsevier Inc.

  20. Neurovision processor for designing intelligent sensors

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1992-03-01

    A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.

  1. Spatiotemporal properties of microsaccades: Model predictions and experimental tests

    NASA Astrophysics Data System (ADS)

    Zhou, Jian-Fang; Yuan, Wu-Jie; Zhou, Zhao

    2016-10-01

    Microsaccades are involuntary and very small eye movements during fixation. Recently, microsaccade-related neural dynamics have been extensively investigated both in experiments and by constructing neural network models. Microsaccades also exhibit many behavioral properties, and because behavior reflects the underlying neural mechanisms, these properties are determined by neural dynamics. The behavioral properties resulting from neural responses to microsaccades, however, are not yet understood and are rarely studied theoretically. Linking neural dynamics to behavior is one of the central goals of neuroscience. In this paper, we derive behavioral predictions for the spatiotemporal properties of microsaccades from microsaccade-induced neural dynamics in a cascading network model that includes both retinal adaptation and short-term depression (STD) at thalamocortical synapses, and we test these predictions experimentally in a statistical sense. Our results provide the first behavioral description of microsaccades grounded in the neural dynamics they induce, thereby linking neural dynamics to microsaccadic behavior. They strongly indicate that the cascading adaptations play an important role in microsaccade research. Our work may be useful for further investigations of microsaccadic behavioral properties and of the underlying neural dynamical mechanisms responsible for them.

  2. Simultaneous cellular-resolution optical perturbation and imaging of place cell firing fields

    PubMed Central

    Rickgauer, John Peter; Deisseroth, Karl; Tank, David W.

    2015-01-01

    Linking neural microcircuit function to emergent properties of the mammalian brain requires fine-scale manipulation and measurement of neural activity during behavior, where each neuron’s coding and dynamics can be characterized. We developed an optical method for simultaneous cellular-resolution stimulation and large-scale recording of neuronal activity in behaving mice. Dual-wavelength two-photon excitation allowed largely independent functional imaging with a green fluorescent calcium sensor (GCaMP3, λ = 920 ± 6 nm) and single-neuron photostimulation with a red-shifted optogenetic probe (C1V1, λ = 1,064 ± 6 nm) in neurons coexpressing the two proteins. We manipulated task-modulated activity in individual hippocampal CA1 place cells during spatial navigation in a virtual reality environment, mimicking natural place-field activity, or ‘biasing’, to reveal subthreshold dynamics. Notably, manipulating single place-cell activity also affected activity in small groups of other place cells that were active around the same time in the task, suggesting a functional role for local place cell interactions in shaping firing fields. PMID:25402854

  3. Dynamics of the functional link between area MT LFPs and motion detection

    PubMed Central

    Smith, Jackson E. T.; Beliveau, Vincent; Schoen, Alan; Remz, Jordana; Zhan, Chang'an A.

    2015-01-01

    The evolution of a visually guided perceptual decision results from multiple neural processes, and recent work suggests that signals with different neural origins are reflected in separate frequency bands of the cortical local field potential (LFP). Spike activity and LFPs in the middle temporal area (MT) have a functional link with the perception of motion stimuli (referred to as neural-behavioral correlation). To cast light on the different neural origins that underlie this functional link, we compared the temporal dynamics of the neural-behavioral correlations of MT spikes and LFPs. Wide-band activity was simultaneously recorded from two locations of MT from monkeys performing a threshold, two-stimuli, motion pulse detection task. Shortly after the motion pulse occurred, we found that high-gamma (100–200 Hz) LFPs had a fast, positive correlation with detection performance that was similar to that of the spike response. Beta (10–30 Hz) LFPs were negatively correlated with detection performance, but their dynamics were much slower, peaked late, and did not depend on stimulus configuration or reaction time. A late change in the correlation of all LFPs across the two recording electrodes suggests that a common input arrived at both MT locations prior to the behavioral response. Our results support a framework in which early high-gamma LFPs likely reflected fast, bottom-up, sensory processing that was causally linked to perception of the motion pulse. In comparison, late-arriving beta and high-gamma LFPs likely reflected slower, top-down, sources of neural-behavioral correlation that originated after the perception of the motion pulse. PMID:25948867

  4. Analog neural network control method proposed for use in a backup satellite control mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frigo, J.R.; Tilden, M.W.

    1998-03-01

    The authors propose to use an analog neural network controller implemented in hardware, independent of the active control system, for use in a satellite backup control mode. The controller uses coarse sun sensor inputs. The field of view of the sensors activate the neural controller, creating an analog dead band with respect to the direction of the sun on each axis. This network controls the orientation of the vehicle toward the sunlight to ensure adequate power for the system. The attitude of the spacecraft is stabilized with respect to the ambient magnetic field on orbit. This paper develops a model of the controller using real-time coarse sun sensor data and a dynamic model of a prototype system based on a satellite system. The simulation results and the feasibility of this control method for use in a satellite backup control mode are discussed.

  5. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations with respect to analytical solutions that are difficult to achieve. Algorithms based on artificial intelligence, in the form of artificial neural networks whose topology of connections between neurons is itself subject to optimization, have become an important instrument for processing and modelling such data. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity of arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.

  6. Discrete Dynamics Lab

    NASA Astrophysics Data System (ADS)

    Wuensche, Andrew

    DDLab is interactive graphics software for creating, visualizing, and analyzing many aspects of Cellular Automata, Random Boolean Networks, and Discrete Dynamical Networks in general and studying their behavior, both from the time-series perspective — space-time patterns, and from the state-space perspective — attractor basins. DDLab is relevant to research, applications, and education in the fields of complexity, self-organization, emergent phenomena, chaos, collision-based computing, neural networks, content addressable memory, genetic regulatory networks, dynamical encryption, generative art and music, and the study of the abstract mathematical/physical/dynamical phenomena in their own right.
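
The space-time versus attractor-basin distinction DDLab explores can be illustrated in miniature with an elementary cellular automaton on a ring; rule numbers follow Wolfram's convention, and the helper names below are invented for this sketch.

```python
def step(state, rule=110):
    """One synchronous update of an elementary cellular automaton on a ring,
    using Wolfram's rule-numbering convention."""
    n = len(state)
    table = [(rule >> k) & 1 for k in range(8)]
    return tuple(table[(state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]]
                 for i in range(n))

def attractor_cycle(state, rule=110):
    """Iterate until a state repeats; return (transient length, cycle length).
    Termination is guaranteed because the state space is finite."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state, rule)
    first = seen[state]
    return first, len(seen) - first

transient, cycle = attractor_cycle((0, 1, 1, 0, 1, 0, 0, 1))
```

Iterating `step` gives the space-time (time-series) perspective; `attractor_cycle` gives the state-space perspective, since every trajectory of a finite deterministic network falls onto an attractor cycle after a transient.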

  7. A self-organizing model of perisaccadic visual receptive field dynamics in primate visual and oculomotor system.

    PubMed

    Mender, Bedeho M W; Stringer, Simon M

    2015-01-01

    We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions.

  9. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size

    PubMed Central

    Gerstner, Wulfram

    2017-01-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957
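
The finite-size fluctuations central to this mesoscopic theory can be seen in a toy sketch (independent Poisson-like neurons, not the paper's generalized integrate-and-fire populations): the variance of the instantaneous population activity shrinks roughly as 1/N. All names and parameters here are illustrative.

```python
import random

def activity_variance(n, rate=10.0, dt=0.001, steps=1000, seed=0):
    """Variance of the instantaneous population activity A(t) = spikes/(n*dt)
    for n independent neurons firing with probability rate*dt per bin."""
    rng = random.Random(seed)
    acts = []
    for _ in range(steps):
        spikes = sum(1 for _ in range(n) if rng.random() < rate * dt)
        acts.append(spikes / (n * dt))
    mean = sum(acts) / steps
    return sum((a - mean) ** 2 for a in acts) / steps

var_small = activity_variance(50)    # small population: large fluctuations
var_large = activity_variance(1000)  # large population: fluctuations shrink
```

For uncoupled neurons the variance ratio should be close to the population-size ratio (here 20), which is the 1/N scaling that the stochastic mesoscopic equations retain and that deterministic neural mass models discard.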

  10. Creative-Dynamics Approach To Neural Intelligence

    NASA Technical Reports Server (NTRS)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  11. Inferring neural activity from BOLD signals through nonlinear optimization.

    PubMed

    Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E

    2007-11-01

    The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution to minimize discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method escapes the linearization of the transition system and provides a possibility to search for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.
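
The inverse problem described can be sketched in a heavily simplified form: a toy gamma-shaped hemodynamic response stands in for the full balloon model, and an accept-if-better random coordinate search stands in for the global optimization. Every function name and parameter below is an assumption made for illustration, not the authors' method.

```python
import math, random

def hrf(t, tau=1.2):
    """Toy gamma-shaped hemodynamic response (a stand-in for the balloon model)."""
    return (t / tau) ** 2 * math.exp(-t / tau) if t > 0 else 0.0

def forward(neural, dt=0.5):
    """Map a neural time course to a 'BOLD' time course by convolution."""
    n = len(neural)
    return [dt * sum(neural[j] * hrf((i - j) * dt) for j in range(i + 1))
            for i in range(n)]

def invert(bold, dt=0.5, iters=2000, seed=0):
    """Recover the neural time course by accept-if-better random coordinate
    search on the squared discrepancy (a crude stand-in for global optimization)."""
    rng = random.Random(seed)
    est = [0.0] * len(bold)
    best = sum((a - b) ** 2 for a, b in zip(forward(est, dt), bold))
    for _ in range(iters):
        i = rng.randrange(len(est))
        old = est[i]
        est[i] = max(0.0, old + rng.uniform(-0.5, 0.5))
        err = sum((a - b) ** 2 for a, b in zip(forward(est, dt), bold))
        if err < best:
            best = err
        else:
            est[i] = old
    return est, best

true_neural = [0.0] * 20
true_neural[3] = 1.0                 # one brief neural event
bold = forward(true_neural)
est, err = invert(bold)
```

The point of the sketch is the structure of the approach: a forward model maps candidate neural input to a predicted BOLD signal, and an optimizer adjusts the input to minimize the discrepancy, without linearizing the forward model as an LL filter would.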

  12. EDITORIAL: Why we need a new journal in neural engineering

    NASA Astrophysics Data System (ADS)

    Durand, Dominique M.

    2004-03-01

    The field of neural engineering crystallizes for many engineers and scientists an area of research at the interface between neuroscience and engineering. For the last 15 years or so, the discipline of neural engineering (neuroengineering) has slowly appeared at conferences as a theme or track. The first conference devoted entirely to this area was the 1st International IEEE EMBS Conference on Neural Engineering which took place in Capri, Italy in 2003. Understanding how the brain works is considered the ultimate frontier and challenge in science. The complexity of the brain is so great that understanding even the most basic functions will require that we fully exploit all the tools currently at our disposal in science and engineering and simultaneously develop new methods of analysis. While neuroscientists and engineers from varied fields such as brain anatomy, neural development and electrophysiology have made great strides in the analysis of this complex organ, there remains a great deal yet to be uncovered. The potential for applications and remedies deriving from scientific discoveries and breakthroughs is extremely high. As a result of the growing availability of micromachining technology, research into neurotechnology has grown relatively rapidly in recent years and appears to be approaching a critical mass. For example, by understanding how neuronal circuits process and store information, we could design computers with capabilities beyond current limits. By understanding how neurons develop and grow, we could develop new technologies for spinal cord repair or central nervous system repair following neurological disorders. Moreover, discoveries related to higher-level cognitive function and consciousness could have a profound influence on how humans make sense of their surroundings and interact with each other. 
The ability to successfully interface the brain with external electronics would have enormous implications for our society and facilitate a revolutionary change in the quality of life of persons with sensory and/or motor deficits. Microelectrode technology represents the initial step towards this goal and has already improved the quality of life of many patients, as is evident from the success of auditory prostheses. The cost to society of neurological disorders such as stroke, Parkinson's disease, Alzheimer's disease and epilepsy is staggering. Stroke, which is the third leading cause of death in North America, costs society $40 billion per year in treatment. Costs associated with brain disorders are estimated at $285 billion. Breakthroughs in this field will have a significant impact on the market for enabling technologies. The market for neurological medical devices totaled $2 billion in 1999 and is projected to grow at a rate of 20 to 30% in the next ten years, far outpacing the market for cardiac devices. Although we have all recognized the importance of interdisciplinary research (see the NIH Road map at http://nihroadmap.nih.gov/), the fields of neuroscience and engineering have remained compartmentalized. Collaboration is still difficult since the language of these disciplines is different. Moreover, the scientific journals in these fields are also clearly separate. Researchers involved in neural engineering have a choice of publishing their research in either neuroscience-oriented journals such as Journal of Neuroscience, Journal of Neurophysiology and Brain Research or in engineering journals such as IEEE Transactions on Biomedical Engineering, IEEE Transactions on Neural Systems and Rehabilitation and Annals of Biomedical Engineering. There is no journal currently available focusing on the interdisciplinary field of neural engineering.
In order to capitalize on the potential of neural engineering to investigate neural function and to solve problems related to neural disorders, it is necessary to break down the traditional barriers between neuroscientists and engineers not just in the laboratory but also in the publication of scientific papers. We do, therefore, need a new journal that provides a platform for this emerging interdisciplinary field of neural engineering where neuroscientists, neurobiologists and engineers can publish their work in one periodical that spans the disciplines. Journal of Neural Engineering will provide this platform. The new journal will publish full-length articles of the highest quality and importance in the field of neural engineering at the molecular, cellular and systems levels. The scope of Journal of Neural Engineering encompasses experimental, computational and theoretical aspects of neural interfacing, neuroelectronics, neuromechanical systems, neuroinformatics, neuroimaging, neural prostheses, artificial and biological neural circuits, neural control, neural tissue regeneration, neural signal processing, neural modeling and neuro-computation. The scope of the journal has both depth and breadth in areas relevant to the interface between neuroscience and engineering. There will be two Editors-in-Chief, with expertise covering both engineering and neuroscience. Experts in the areas encompassed by the journal's scope have been identified for the Editorial Board and the composition of the board will be continually updated to address the developments in this new and exciting field. 
The first issue of this new journal covers a variety of topics that combine neuroscience and engineering: mental state recognition from EEG signals, analysis of body motion in Parkinson's patients, non-linear dynamics of the respiratory system, automatic identification of saccade-related visual evoked potentials, multiple electrode stimulators, algorithms to estimate the causal relationship between brain sources, diffusion tensor imaging in the brain and phase synchronization of neural activity in vitro. This broad array of manuscripts focusing on neural imaging, neurophysiology, neural signal processing, neuroelectronics and neuro-dynamics can be found for the first time within the pages of a single journal: Journal of Neural Engineering. I am grateful to Institute of Physics Publishing and Jane Roscoe in particular for putting together this new journal to accommodate the fast-growing field of neural engineering. I am also grateful to Andrew Schwartz who has agreed to be the co-Editor-in-Chief for the journal.

  13. Dynamic Decision Making in Complex Task Environments: Principles and Neural Mechanisms

    DTIC Science & Technology

    2013-03-01

    Dynamical models of cognition. Mathematical models of mental processes. Human performance optimization. ...we have continued to develop a neurodynamic theory of decision making, using a combination of computational and experimental approaches, to address ... a long history in the field of human cognitive psychology. The theoretical foundations of this research can be traced back to signal detection

  14. Bifurcation Analysis on Phase-Amplitude Cross-Frequency Coupling in Neural Networks with Dynamic Synapses

    PubMed Central

    Sase, Takumi; Katori, Yuichi; Komuro, Motomasa; Aihara, Kazuyuki

    2017-01-01

    We investigate a discrete-time network model composed of excitatory and inhibitory neurons and dynamic synapses with the aim of revealing dynamical properties behind oscillatory phenomena possibly related to brain functions. We use a stochastic neural network model to derive the corresponding macroscopic mean field dynamics, and subsequently analyze the dynamical properties of the network. In addition to slow and fast oscillations arising from excitatory and inhibitory networks, respectively, we show that the interaction between these two networks generates phase-amplitude cross-frequency coupling (CFC), in which multiple different frequency components coexist and the amplitude of the fast oscillation is modulated by the phase of the slow oscillation. Furthermore, we clarify the detailed properties of the oscillatory phenomena by applying bifurcation analysis to the mean field model, and accordingly show that the intermittent and the continuous CFCs can be characterized by an aperiodic orbit on a closed curve and one on a torus, respectively. These two CFC modes switch depending on the coupling strength from the excitatory to inhibitory networks, via the saddle-node cycle bifurcation of a one-dimensional torus in map (MT1SNC), and may be associated with the function of multi-item representation. We believe that the present model might have potential for studying possible functional roles of phase-amplitude CFC in the cerebral cortex.
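
What phase-amplitude CFC means as a signal property can be shown with a toy sketch (a synthetic signal, not the paper's network model; all names and parameters are illustrative): the envelope of a fast oscillation is largest near the peak of the slow oscillation's phase.

```python
import math

def pac_signal(t, f_slow=1.0, f_fast=8.0, depth=0.8):
    """Fast oscillation whose amplitude is modulated by the slow phase:
    the defining signature of phase-amplitude cross-frequency coupling."""
    slow = math.sin(2 * math.pi * f_slow * t)
    return (1.0 + depth * slow) * math.sin(2 * math.pi * f_fast * t)

def fast_envelope(t0, f_fast=8.0, samples=64):
    """Peak |signal| over one fast cycle starting at t0 (a crude envelope)."""
    return max(abs(pac_signal(t0 + i / (samples * f_fast))) for i in range(samples))

env_at_peak = fast_envelope(0.25 - 1 / 16)    # fast cycle centered on the slow peak
env_at_trough = fast_envelope(0.75 - 1 / 16)  # fast cycle centered on the slow trough
```

Comparing the fast-cycle envelope at the slow peak against the slow trough makes the modulation explicit, which is what CFC analyses of the mean field dynamics quantify.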

  15. Fast fMRI can detect oscillatory neural activity in humans.

    PubMed

    Lewis, Laura D; Setsompop, Kawin; Rosen, Bruce R; Polimeni, Jonathan R

    2016-10-25

    Oscillatory neural dynamics play an important role in the coordination of large-scale brain networks. High-level cognitive processes depend on dynamics evolving over hundreds of milliseconds, so measuring neural activity in this frequency range is important for cognitive neuroscience. However, current noninvasive neuroimaging methods are not able to precisely localize oscillatory neural activity above 0.2 Hz. Electroencephalography and magnetoencephalography have limited spatial resolution, whereas fMRI has limited temporal resolution because it measures vascular responses rather than directly recording neural activity. We hypothesized that the recent development of fast fMRI techniques, combined with the extra sensitivity afforded by ultra-high-field systems, could enable precise localization of neural oscillations. We tested whether fMRI can detect neural oscillations using human visual cortex as a model system. We detected small oscillatory fMRI signals in response to stimuli oscillating at up to 0.75 Hz within single scan sessions, and these responses were an order of magnitude larger than predicted by canonical linear models. Simultaneous EEG-fMRI and simulations based on a biophysical model of the hemodynamic response to neuronal activity suggested that the blood oxygen level-dependent response becomes faster for rapidly varying stimuli, enabling the detection of higher frequencies than expected. Accounting for phase delays across voxels further improved detection, demonstrating that identifying vascular delays will be of increasing importance with higher-frequency activity. These results challenge the assumption that the hemodynamic response is slow, and demonstrate that fMRI has the potential to map neural oscillations directly throughout the brain.

  16. Death and rebirth of neural activity in sparse inhibitory networks

    NASA Astrophysics Data System (ADS)

    Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro

    2017-05-01

    Inhibition is a key aspect of neural dynamics, playing a fundamental role in the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategic for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.
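
The globally coupled "neuronal death" side of this phenomenon can be sketched with a minimal pulse-coupled network of leaky integrate-and-fire neurons (a pure-Python toy, not the paper's model, and it does not attempt to reproduce the sparse-network "rebirth"; all parameters are invented):

```python
import random

def firing_count(g, n=100, steps=2000, dt=0.01, seed=1):
    """Number of neurons that fire at least once in a globally coupled
    inhibitory network of leaky integrate-and-fire neurons with
    heterogeneous suprathreshold drive and pulse coupling of strength g."""
    rng = random.Random(seed)
    a = [1.05 + rng.uniform(0.0, 0.45) for _ in range(n)]  # all suprathreshold
    v = [rng.uniform(0.0, 1.0) for _ in range(n)]
    fired = [False] * n
    for _ in range(steps):
        spikes = 0
        for i in range(n):
            v[i] += dt * (a[i] - v[i])                     # leaky integration
            if v[i] >= 1.0:                                # threshold crossing
                spikes += 1
                v[i] = 0.0
                fired[i] = True
        kick = g * spikes / n
        if kick:
            v = [max(0.0, x - kick) for x in v]            # inhibitory pulse
    return sum(fired)

active_uncoupled = firing_count(0.0)   # every neuron eventually fires
active_inhibited = firing_count(20.0)  # strong inhibition should silence some
```

Without coupling, every neuron's drive exceeds threshold, so all fire; with strong global inhibition, each spike knocks the slowly charging neurons back toward rest, so the count of active neurons drops, which is the monotonic "death" regime the abstract contrasts with sparse-network rebirth.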

  17. The Neurodynamics of Affect in the Laboratory Predicts Persistence of Real-World Emotional Responses.

    PubMed

    Heller, Aaron S; Fox, Andrew S; Wing, Erik K; McQuisition, Kaitlyn M; Vack, Nathan J; Davidson, Richard J

    2015-07-22

    Failure to sustain positive affect over time is a hallmark of depression and other psychopathologies, but the mechanisms supporting the ability to sustain positive emotional responses are poorly understood. Here, we investigated the neural correlates associated with the persistence of positive affect in the real world by conducting two experiments in humans: an fMRI task of reward responses and an experience-sampling task measuring emotional responses to a reward obtained in the field. The magnitude of DLPFC engagement to rewards administered in the laboratory predicted reactivity of real-world positive emotion following a reward administered in the field. Sustained ventral striatum engagement in the laboratory positively predicted the duration of real-world positive emotional responses. These results suggest that common pathways are associated with the unfolding of neural processes over seconds and with the dynamics of emotions experienced over minutes. Examining such dynamics may facilitate a better understanding of the brain-behavior associations underlying emotion. Significance statement: How real-world emotion, experienced over seconds, minutes, and hours, is instantiated in the brain over the course of milliseconds and seconds is unknown. We combined a novel, real-world experience-sampling task with fMRI to examine how individual differences in real-world emotion, experienced over minutes and hours, is subserved by affective neurodynamics of brain activity over the course of seconds. When winning money in the real world, individuals sustaining positive emotion the longest were those with the most prolonged ventral striatal activity. These results suggest that common pathways are associated with the unfolding of neural processes over seconds and with the dynamics of emotions experienced over minutes. Examining such dynamics may facilitate a better understanding of the brain-behavior associations underlying emotion. 
Copyright © 2015 the authors 0270-6474/15/3510503-07$15.00/0.

  18. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    PubMed

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, physical sensors are limited by the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment, compared with a modal-model-based virtual sensor that uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
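
The four-layer architecture described (two convolutional layers, a fully connected layer, and an output) can be sketched in miniature in pure Python. The weights below are random placeholders rather than trained values, and every name is invented for illustration; a real virtual sensor would be trained on measured response data.

```python
import random

def conv1d(x, kernel):
    """Valid 1-D correlation of a signal with a kernel."""
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]

def relu(x):
    return [max(0.0, v) for v in x]

def virtual_sensor(measured, k1, k2, w_out):
    """Toy forward pass mirroring the described architecture: two
    convolutional layers with ReLU, then a fully connected readout that
    predicts one response sample at the unmeasured location."""
    h = relu(conv1d(measured, k1))
    h = relu(conv1d(h, k2))
    return sum(w * v for w, v in zip(w_out, h))

rng = random.Random(0)
measured = [rng.gauss(0.0, 1.0) for _ in range(64)]  # measured channel history
k1 = [rng.gauss(0.0, 0.5) for _ in range(5)]         # untrained placeholder kernels
k2 = [rng.gauss(0.0, 0.5) for _ in range(5)]
w_out = [rng.gauss(0.0, 0.1) for _ in range(56)]     # 64 - 4 - 4 hidden units
pred = virtual_sensor(measured, k1, k2, w_out)
```

The sketch shows the data flow only: a window of measured vibration history is convolved and pooled through nonlinearities into a single predicted response value for the faulty or missing sensor channel.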

  20. Functional identification of spike-processing neural circuits.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2014-02-01

    We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.

  1. Computational modeling of neural plasticity for self-organization of neural networks.

    PubMed

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to occur largely through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those combining findings in computational neuroscience and systems biology and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Role of local network oscillations in resting-state functional connectivity.

    PubMed

    Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo

    2011-07-01

    Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in the BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions, or functional connectivity (FC), have led to the identification of several widely distributed resting-state networks (RSNs). These slow dynamics appear to be highly structured by anatomical connectivity, but the mechanism behind them and their relationship with neural activity, particularly in the gamma frequency range, remain largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential oscillations in the gamma frequency range. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized. Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
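    The Kuramoto phase-oscillator description used in this record can be sketched in a few lines. Below is a minimal mean-field version without the conduction delays and connectome structure of the actual model; the population size, frequency spread, and coupling strength are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# N phase oscillators with natural frequencies spread around a gamma-band mean.
N = 100
omega = 2 * np.pi * (40.0 + 0.5 * rng.standard_normal(N))   # rad/s
theta = rng.uniform(0, 2 * np.pi, N)

K = 50.0          # global coupling, well above the synchronization threshold
dt = 1e-4
for _ in range(20000):
    # Mean-field form of the Kuramoto update: each phase is pulled toward the
    # population mean phase psi with effective strength K * r.
    z = np.mean(np.exp(1j * theta))          # complex order parameter
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))

r_final = np.abs(np.mean(np.exp(1j * theta)))   # near 1 when synchronized
```

    With coupling well above threshold the order parameter settles near 1; weakening K (or, in the full model, introducing delays) yields the partially synchronized regimes the study explores.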

  3. Multitask neurovision processor with extensive feedback and feedforward connections

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1991-11-01

    A multi-task neuro-vision processor which performs a variety of information processing operations associated with the early stages of biological vision is presented. The network architecture of this neuro-vision processor, called the positive-negative (PN) neural processor, is loosely based on the neural activity fields exhibited by thalamic and cortical nervous tissue layers. The computational operation performed by the processor arises from the strength of the recurrent feedback among the numerous positive and negative neural computing units. By adjusting the feedback connections it is possible to generate diverse dynamic behavior that may be used for short-term visual memory (STVM), spatio-temporal filtering (STF), and pulse frequency modulation (PFM). The information attributes that are to be processed may be regulated by modifying the feedforward connections from the signal space to the neural processor.

  4. Chimera states in a Hodgkin-Huxley model of thermally sensitive neurons

    NASA Astrophysics Data System (ADS)

    Glaze, Tera A.; Lewis, Scott; Bahar, Sonya

    2016-08-01

    Chimera states occur when identically coupled groups of nonlinear oscillators exhibit radically different dynamics, with one group exhibiting synchronized oscillations and the other desynchronized behavior. This dynamical phenomenon has recently been studied in computational models and demonstrated experimentally in mechanical, optical, and chemical systems. The theoretical basis of these states is currently under active investigation. Chimera behavior is of particular relevance in the context of neural synchronization, given the phenomenon of unihemispheric sleep and the recent observation of asymmetric sleep in human patients with sleep apnea. The similarity of neural chimera states to neural "bump" states, which have been suggested as a model for working memory and visual orientation tuning in the cortex, adds to their interest as objects of study. Chimera states have been demonstrated in the FitzHugh-Nagumo model of excitable cells and in the Hindmarsh-Rose neural model. Here, we demonstrate chimera states and chimera-like behaviors in a Hodgkin-Huxley-type model of thermally sensitive neurons both in a system with Abrams-Strogatz (mean field) coupling and in a system with Kuramoto (distance-dependent) coupling. We map the regions of parameter space for which chimera behavior occurs in each of the two coupling schemes.

  5. Clustering promotes switching dynamics in networks of noisy neurons

    NASA Astrophysics Data System (ADS)

    Franović, Igor; Klinshov, Vladimir

    2018-02-01

    Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.

  6. From homeostasis to behavior: Balanced activity in an exploration of embodied dynamic environmental-neural interaction.

    PubMed

    Hellyer, Peter John; Clopath, Claudia; Kehagia, Angie A; Turkheimer, Federico E; Leech, Robert

    2017-08-01

    In recent years, there have been many computational simulations of spontaneous neural dynamics. Here, we describe a simple model of spontaneous neural dynamics that controls an agent moving in a simple virtual environment. These dynamics generate interesting brain-environment feedback interactions that rapidly destabilize neural and behavioral dynamics, demonstrating the need for homeostatic mechanisms. We investigate roles for homeostatic plasticity both locally (local inhibition adjusting to balance excitatory input) and more globally (regional "task negative" activity that compensates for "task positive" sensory input in another region), balancing neural activity and leading to more stable behavior (trajectories through the environment). Our results suggest complementary functional roles for both local and macroscale mechanisms in maintaining neural and behavioral dynamics, and a novel functional role for macroscopic "task-negative" patterns of activity (e.g., the default mode network).
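    The local mechanism described here (inhibition adjusting to balance excitatory input) can be caricatured with a one-unit rate model. This is a deliberately minimal sketch, not the paper's model: the threshold-linear unit, the target rate, and the learning rule below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Homeostatic inhibitory plasticity (sketch): an inhibitory weight w_i adapts
# so that the unit's firing rate tracks a target rate r0, countering a
# fluctuating excitatory drive.
r0 = 5.0                # target rate
w_i = 0.0               # inhibitory weight, initially absent
eta = 0.01              # learning rate
rates = []
for step in range(5000):
    g_exc = 10.0 + 2.0 * rng.standard_normal()   # fluctuating excitation
    r = max(0.0, g_exc - w_i)                    # threshold-linear rate
    w_i += eta * (r - r0)                        # push rate toward target
    rates.append(r)

late_mean = float(np.mean(rates[-1000:]))   # settles near the target r0
```

    The rule is deliberately local: the weight sees only the unit's own rate, yet the long-run average activity is pinned to the target regardless of the (unknown) mean excitatory drive.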

  7. Dynamic interactions in neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbib, M.A.; Amari, S.

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  8. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.

    PubMed

    Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie

    2016-12-07

    A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.

  9. The brain as a dynamic physical system.

    PubMed

    McKenna, T M; McMullen, T A; Shlesinger, M F

    1994-06-01

    The brain is a dynamic system that is non-linear at multiple levels of analysis. Characterization of its non-linear dynamics is fundamental to our understanding of brain function. Identifying families of attractors in phase space analysis, an approach which has proven valuable in describing non-linear mechanical and electrical systems, can prove valuable in describing a range of behaviors and associated neural activity including sensory and motor repertoires. Additionally, transitions between attractors may serve as useful descriptors for analysing state changes in neurons and neural ensembles. Recent observations of synchronous neural activity, and the emerging capability to record the spatiotemporal dynamics of neural activity by voltage-sensitive dyes and electrode arrays, provide opportunities for observing the population dynamics of neural ensembles within a dynamic systems context. New developments in the experimental physics of complex systems, such as the control of chaotic systems, selection of attractors, attractor switching and transient states, can be a source of powerful new analytical tools and insights into the dynamics of neural systems.

  10. Learning and adaptation: neural and behavioural mechanisms behind behaviour change

    NASA Astrophysics Data System (ADS)

    Lowe, Robert; Sandamirskaya, Yulia

    2018-01-01

    This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.

  11. Propagating waves can explain irregular neural dynamics.

    PubMed

    Keane, Adam; Gong, Pulin

    2015-01-28

    Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level. Copyright © 2015 the authors 0270-6474/15/351591-15$15.00/0.

  12. Dynamical principles in neuroscience

    NASA Astrophysics Data System (ADS)

    Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.; Abarbanel, Henry D. I.

    2006-10-01

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

  13. Dynamical principles in neuroscience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

  14. Evolving neural networks with genetic algorithms to study the string landscape

    NASA Astrophysics Data System (ADS)

    Ruehle, Fabian

    2017-08-01

    We study possible applications of artificial neural networks to examine the string landscape. Since the field of application is rather versatile, we propose to dynamically evolve these networks via genetic algorithms. This means that we start from basic building blocks and combine them such that the neural network performs best for the application we are interested in. We study three areas in which neural networks can be applied: to classify models according to a fixed set of (physically) appealing features, to find a concrete realization for a computation for which the precise algorithm is known in principle but very tedious to actually implement, and to predict or approximate the outcome of some involved mathematical computation which is too inefficient to apply directly, e.g. in model scans within the string landscape. We present simple examples that arise in string phenomenology for all three types of problems and discuss how they can be addressed by neural networks evolved via genetic algorithms.
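    The idea of evolving networks with a genetic algorithm can be shown on a toy scale. The sketch below is not the paper's setup: it evolves the weights of a tiny fixed-architecture network on XOR (a stand-in for the classification task), using truncation selection with elitism and Gaussian mutation; every constant is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy task: fit XOR with a 2-4-1 network whose 17 weights form the "genome".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def forward(w, X):
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]; W2 = w[12:16]; b2 = w[16]
    hsig = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(hsig @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # negative MSE

pop = rng.standard_normal((60, 17))
for gen in range(300):
    f = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(f)[-20:]]          # truncation selection (elitist)
    children = parents[rng.integers(0, 20, 40)] \
        + 0.3 * rng.standard_normal((40, 17))   # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
```

    Because the parents are carried over unchanged, the best fitness never decreases, and the evolved network ends up well below the 0.25 mean-squared error of a constant 0.5 predictor.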

  15. Dissipativity and stability analysis of fractional-order complex-valued neural networks with time delay.

    PubMed

    Velmurugan, G; Rakkiyappan, R; Vembarasan, V; Cao, Jinde; Alsaedi, Ahmed

    2017-02-01

    As is well known, dissipativity is an important dynamical property of neural networks; accordingly, the analysis of dissipativity of neural networks with time delay has become an increasingly active research topic. In this paper, the authors establish a class of fractional-order complex-valued neural networks (FCVNNs) with time delay, and intensively study the problem of dissipativity, as well as global asymptotic stability, of the considered FCVNNs with time delay. Based on the fractional Halanay inequality and suitable Lyapunov functions, some new sufficient conditions are obtained that guarantee the dissipativity of FCVNNs with time delay. Moreover, some sufficient conditions are derived in order to ensure the global asymptotic stability of the addressed FCVNNs with time delay. Finally, two numerical simulations are presented to demonstrate the validity of our main results. Copyright © 2016 Elsevier Ltd. All rights reserved.
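    Fractional-order dynamics of the kind studied here are commonly simulated with the Grünwald-Letnikov discretization, D^q x(t_n) ≈ h^{-q} Σ_j c_j x(t_{n-j}) with recursively computed binomial weights. The single-unit sketch below (order q, decay a, and input I are illustrative, and the unit is a plain fractional leaky integrator rather than the paper's complex-valued network) shows the scheme.

```python
import numpy as np

def gl_weights(q, n):
    """Grünwald-Letnikov weights c_j = (-1)^j * binom(q, j), via recursion."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1 - (1 + q) / j)
    return c

# Toy model: D^q x = -a*x + I with x(0) = 0, a fractional leaky integrator.
q, a, I = 0.8, 1.0, 1.0
h, steps = 0.01, 2000
c = gl_weights(q, steps)
x = np.zeros(steps + 1)
for n in range(1, steps + 1):
    # GL scheme: c_0*x[n] + hist = h^q * (-a*x[n] + I); solve for x[n].
    hist = np.dot(c[1:n + 1], x[n - 1::-1])
    x[n] = (h ** q * I - hist) / (1 + h ** q * a)

# x approaches the equilibrium I/a = 1 with the slow power-law tail that is
# characteristic of fractional-order systems.
```

    The full-memory sum (`hist` runs over the entire history) is exactly what distinguishes fractional-order networks from their integer-order counterparts and drives their distinctive stability analysis.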

  16. Engaging and disengaging recurrent inhibition coincides with sensing and unsensing of a sensory stimulus

    PubMed Central

    Saha, Debajit; Sun, Wensheng; Li, Chao; Nizampatnam, Srinath; Padovano, William; Chen, Zhengdao; Chen, Alex; Altan, Ege; Lo, Ray; Barbour, Dennis L.; Raman, Baranidharan

    2017-01-01

    Even simple sensory stimuli evoke neural responses that are dynamic and complex. Are the temporally patterned neural activities important for controlling the behavioral output? Here, we investigated this issue. Our results reveal that in the insect antennal lobe, due to circuit interactions, distinct neural ensembles are activated during and immediately following the termination of every odorant. Such non-overlapping response patterns are not observed even when the stimulus intensity or identities were changed. In addition, we find that ON and OFF ensemble neural activities differ in their ability to recruit recurrent inhibition, entrain field-potential oscillations and, more importantly, in their relevance to behaviour (initiate versus reset conditioned responses). Notably, we find that a strikingly similar strategy is also used for encoding sound onsets and offsets in the marmoset auditory cortex. In sum, our results suggest a general approach where recurrent inhibition is associated with stimulus 'recognition' and 'derecognition'. PMID:28534502

  17. Multi-scale analysis of neural activity in humans: Implications for micro-scale electrocorticography.

    PubMed

    Kellis, Spencer; Sorensen, Larry; Darvas, Felix; Sayres, Conor; O'Neill, Kevin; Brown, Richard B; House, Paul; Ojemann, Jeff; Greger, Bradley

    2016-01-01

    Electrocorticography grids have been used to study and diagnose neural pathophysiology for over 50 years, and recently have been used for various neural prosthetic applications. Here we provide evidence that micro-scale electrodes are better suited for studying cortical pathology and function, and for implementing neural prostheses. This work compares dynamics in space, time, and frequency of cortical field potentials recorded by three types of electrodes: electrocorticographic (ECoG) electrodes; non-penetrating micro-ECoG (μECoG) electrodes, which use microelectrodes and have tighter interelectrode spacing; and penetrating microelectrode arrays (MEA), which penetrate the cortex to record single- or multi-unit activity (SUA or MUA) and local field potentials (LFP). While the finest spatial scales are found in LFPs recorded intracortically, we found that LFPs recorded from μECoG electrodes demonstrate scales of linear similarity (i.e., correlation, coherence, and phase) closer to the intracortical electrodes than to the clinical ECoG electrodes. We conclude that LFPs can be recorded intracortically and epicortically at finer scales than clinical ECoG electrodes are capable of capturing. Recorded with appropriately scaled electrodes and grids, field potentials expose a more detailed representation of cortical network activity, enabling advanced analyses of cortical pathology and demanding applications such as brain-computer interfaces. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Linking structure and activity in nonlinear spiking networks

    PubMed Central

    Josić, Krešimir; Shea-Brown, Eric

    2017-01-01

    Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks’ spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities—including those of different cell types—combine with connectivity to shape population activity and function. PMID:28644840

  19. Neuro-cognitive mechanisms of decision making in joint action: a human-robot interaction study.

    PubMed

    Bicho, Estela; Erlhagen, Wolfram; Louro, Luis; e Silva, Eliana Costa

    2011-10-01

    In this paper we present a model for action preparation and decision making in cooperative tasks that is inspired by recent experimental findings about the neuro-cognitive mechanisms supporting joint action in humans. It implements the coordination of actions and goals among the partners as a dynamic process that integrates contextual cues, shared task knowledge and predicted outcome of others' motor behavior. The control architecture is formalized by a system of coupled dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode task-relevant information about action means, task goals and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions. The dynamic model of joint action is evaluated in a task in which a robot and a human jointly construct a toy object. We show that the highly context sensitive mapping from action observation onto appropriate complementary actions allows coping with dynamically changing joint action situations. Copyright © 2010 Elsevier B.V. All rights reserved.
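    The dynamic neural fields at the heart of this architecture follow Amari's equation, tau * du/dt = -u + w * f(u) + S + h, in which localized input can trigger a self-sustained activation pattern that persists after the input is removed. A minimal 1-D sketch (kernel, resting level, and input are all illustrative choices, not the paper's parameters):

```python
import numpy as np

# Amari-style dynamic neural field on a 1-D feature dimension.
x = np.linspace(-10, 10, 201)
dx = x[1] - x[0]

def gauss(z, sigma):
    return np.exp(-z ** 2 / (2 * sigma ** 2))

# Lateral interaction kernel: local excitation, broader inhibition.
w = 3.0 * gauss(x, 1.0) - 1.5 * gauss(x, 3.0)

h = -1.0                           # resting level (field negative at rest)
u = np.full_like(x, h)
S = 3.0 * gauss(x - 2.0, 1.0)      # transient localized input at x = 2
f = lambda v: (v > 0).astype(float)   # Amari's step-function rate

tau, dt = 10.0, 0.5
for step in range(400):
    drive = S if step < 200 else 0.0             # input removed halfway
    conv = dx * np.convolve(f(u), w, mode="same")
    u += (dt / tau) * (-u + conv + drive + h)

# After the input is gone, a self-sustained peak remains near x = 2.
```

    Such self-sustained peaks are the "self-sustained activation patterns" the abstract refers to: they keep task-relevant values (here, the location x = 2) active after the triggering input disappears.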

  20. High-performance object tracking and fixation with an online neural estimator.

    PubMed

    Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian

    2007-02-01

    Vision-based target tracking and fixation, keeping objects that move in three dimensions in view, is important for many tasks in several fields including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. Accordingly, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take into account the physical (Lagrangian dynamics) properties of the vision system in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities and no joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate boundedness of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.

  1. RELATING ACCUMULATOR MODEL PARAMETERS AND NEURAL DYNAMICS

    PubMed Central

    Purcell, Braden A.; Palmeri, Thomas J.

    2016-01-01

    Accumulator models explain decision-making as an accumulation of evidence to a response threshold. Specific model parameters are associated with specific model mechanisms, such as the time when accumulation begins, the average rate of evidence accumulation, and the threshold. These mechanisms determine both the within-trial dynamics of evidence accumulation and the predicted behavior. Cognitive modelers usually infer what mechanisms vary during decision-making by seeing what parameters vary when a model is fitted to observed behavior. The recent identification of neural activity with evidence accumulation suggests that it may be possible to directly infer what mechanisms vary from an analysis of how neural dynamics vary. However, evidence accumulation is often noisy, and noise complicates the relationship between accumulator dynamics and the underlying mechanisms leading to those dynamics. To understand what kinds of inferences can be made about decision-making mechanisms based on measures of neural dynamics, we measured simulated accumulator model dynamics while systematically varying model parameters. In some cases, decision-making mechanisms can be directly inferred from dynamics, allowing us to distinguish between models that make identical behavioral predictions. In other cases, however, different parameterized mechanisms produce surprisingly similar dynamics, limiting the inferences that can be made based on measuring dynamics alone. Analyzing neural dynamics can provide a powerful tool to resolve model mimicry at the behavioral level, but we caution against drawing inferences based solely on neural analyses. Instead, simultaneous modeling of behavior and neural dynamics provides the most powerful approach to understand decision-making and likely other aspects of cognition and perception. PMID:28392584
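
    The mechanisms named above (accumulation onset, drift rate, threshold) can be made concrete with a minimal racing-accumulator simulation. This is a generic sketch with purely illustrative parameter values, not the authors' simulation code:

```python
import numpy as np

def race_trial(drifts, rng, threshold=1.0, onset=0.2, dt=0.001, noise=0.1):
    """One trial of a noisy racing-accumulator model: evidence for each
    alternative accumulates from `onset` until one accumulator reaches
    `threshold`; returns (chosen alternative, response time)."""
    x = np.zeros(len(drifts))
    t = 0.0
    while x.max() < threshold:
        t += dt
        if t >= onset:                       # accumulation begins at onset time
            x += np.asarray(drifts) * dt \
                 + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
            x = np.maximum(x, 0.0)           # accumulators cannot go negative
    return int(np.argmax(x)), t

rng = np.random.default_rng(0)
trials = [race_trial([1.5, 0.5], rng) for _ in range(200)]
choices, rts = zip(*trials)
accuracy = np.mean(np.array(choices) == 0)
mean_rt = float(np.mean(rts))
print(accuracy, mean_rt)
```

    Varying `onset`, `drifts`, or `threshold` while recording the trajectories `x(t)` reproduces the kind of simulated dynamics the paper analyzes.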

  2. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD) which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. Numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
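
    The direct stochastic integration against which the mean-field predictions are compared can be sketched for a small all-to-all FitzHugh-Nagumo network via an Euler-Maruyama scheme. The coupling form, noise level, and all parameter values below are illustrative, not those of the paper:

```python
import numpy as np

def simulate_fhn_network(n=50, steps=20000, dt=0.01, g=0.1, i_ext=0.5,
                         a=0.7, b=0.8, eps=0.08, sigma=0.05, seed=1):
    """Euler-Maruyama integration of n all-to-all coupled FitzHugh-Nagumo
    neurons with external drive and noisy currents:
        dv = (v - v^3/3 - w + i_ext + g*(mean(v) - v)) dt + sigma dW
        dw = eps * (v + a - b*w) dt
    Returns final states and each neuron's maximum v (a spike marker)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, 0.1, n)
    w = np.zeros(n)
    vmax = np.full(n, -np.inf)
    for _ in range(steps):
        coupling = g * (v.mean() - v)              # mean-field coupling term
        dv = (v - v**3 / 3.0 - w + i_ext + coupling) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(n)
        dw = eps * (v + a - b * w) * dt
        v, w = v + dv, w + dw
        vmax = np.maximum(vmax, v)
    return v, w, vmax

v, w, vmax = simulate_fhn_network()
print(vmax.min())   # smallest voltage excursion across neurons
```

    With this drive current the deterministic FitzHugh-Nagumo neuron sits on a limit cycle, so every neuron spikes repeatedly; population statistics computed from many such runs are what the mean-field PPD equations approximate.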

  3. Separate representations of dynamics in rhythmic and discrete movements: evidence from motor learning

    PubMed Central

    Ingram, James N.; Wolpert, Daniel M.

    2011-01-01

    Rhythmic and discrete arm movements occur ubiquitously in everyday life, and there is a debate as to whether these two classes of movements arise from the same or different underlying neural mechanisms. Here we examine interference in a motor-learning paradigm to test whether rhythmic and discrete movements employ at least partially separate neural representations. Subjects were required to make circular movements of their right hand while they were exposed to a velocity-dependent force field that perturbed the circularity of the movement path. The direction of the force-field perturbation reversed at the end of each block of 20 revolutions. When subjects made only rhythmic or only discrete circular movements, interference was observed when switching between the two opposing force fields. However, when subjects alternated between blocks of rhythmic and discrete movements, such that each was uniquely associated with one of the perturbation directions, interference was significantly reduced. Only in this case did subjects learn to corepresent the two opposing perturbations, suggesting that different neural resources were employed for the two movement types. Our results provide further evidence that rhythmic and discrete movements employ at least partially separate control mechanisms in the motor system. PMID:21273324

  4. Connectomics and graph theory analyses: Novel insights into network abnormalities in epilepsy.

    PubMed

    Gleichgerrcht, Ezequiel; Kocher, Madison; Bonilha, Leonardo

    2015-11-01

    The assessment of neural networks in epilepsy has become increasingly relevant in the context of translational research, given that localized forms of epilepsy are more likely to be related to abnormal function within specific brain networks, as opposed to isolated focal brain pathology. It is notable that variability in clinical outcomes from epilepsy treatment may be a reflection of individual patterns of network abnormalities. As such, network endophenotypes may be important biomarkers for the diagnosis and treatment of epilepsy. Despite its exceptional potential, measuring abnormal networks in translational research has been thus far constrained by methodologic limitations. Fortunately, recent advancements in neuroscience, particularly in the field of connectomics, permit a detailed assessment of network organization, dynamics, and function at an individual level. Data from the personal connectome can be assessed using principled forms of network analyses based on graph theory, which may disclose patterns of organization that are prone to abnormal dynamics and epileptogenesis. Although the field of connectomics is relatively new, there is already a rapidly growing body of evidence to suggest that it can elucidate several important and fundamental aspects of the abnormal networks underlying epilepsy. In this article, we provide a review of the emerging evidence from connectomics research regarding neural network architecture, dynamics, and function related to epilepsy. We discuss how connectomics may bring together pathophysiologic hypotheses from conceptual and basic models of epilepsy and in vivo biomarkers for clinical translational research. By providing neural network information unique to each individual, the field of connectomics may help to elucidate variability in clinical outcomes and open opportunities for personalized medicine approaches to epilepsy.
Connectomics involves complex and rich data from each subject, thus collaborative efforts to enable the systematic and rigorous evaluation of this form of "big data" are paramount to leverage the full potential of this new approach. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.

  5. Mean-field theory of a plastic network of integrate-and-fire neurons.

    PubMed

    Chen, Chun-Chung; Jasnow, David

    2010-01-01

    We consider a noise-driven network of integrate-and-fire neurons. The network evolves as a result of the activities of the neurons following spike-timing-dependent plasticity rules. We apply a self-consistent mean-field theory to the system to obtain the mean activity level for the system as a function of the mean synaptic weight, which predicts a first-order transition and hysteresis between a noise-dominated regime and a regime of persistent neural activity. Assuming Poisson firing statistics for the neurons, the plasticity dynamics of a synapse under the influence of the mean-field environment can be mapped to the dynamics of an asymmetric random walk in synaptic-weight space. Using a master equation for small steps, we predict a narrow distribution of synaptic weights that scales with the square root of the plasticity rate for the stationary state of the system, given plausible physiological parameter values describing neural transmission and plasticity. The dependence of the distribution on the synaptic weight of the mean-field environment allows us to determine the mean synaptic weight self-consistently. The effects of fluctuations in the total synaptic conductance and in the plasticity step sizes are also considered. Such fluctuations result in a smoothing of the first-order transition for low numbers of afferent synapses per neuron and a broadening of the synaptic-weight distribution, respectively.
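
    The mapping of plasticity onto an asymmetric random walk in synaptic-weight space can be illustrated with a toy version. The linear potentiation probability below is a stand-in, not the paper's derived transition rates, but it reproduces the qualitative prediction that the width of the stationary weight distribution scales roughly with the square root of the step size (i.e., of the plasticity rate):

```python
import numpy as np

def weight_walk(step, n_syn=2000, iters=5000, seed=0):
    """Asymmetric random walk in synaptic-weight space: each synapse takes a
    potentiation step (+step) with a probability that decreases linearly with
    its weight, and a depression step (-step) otherwise, giving a stable
    equilibrium at w = 0.5.  Returns the stationary weight sample."""
    rng = np.random.default_rng(seed)
    w = np.full(n_syn, 0.5)
    for _ in range(iters):
        p_pot = 1.0 - w                          # weak synapses potentiate more
        up = rng.random(n_syn) < p_pot
        w = np.clip(w + np.where(up, step, -step), 0.0, 1.0)
    return w

w_small, w_large = weight_walk(0.005), weight_walk(0.02)
# Distribution width should grow ~ sqrt(step): a 4x step gives ~2x the spread
print(w_small.std(), w_large.std())
```

    A small-step (Kramers-Moyal style) expansion of this walk gives drift proportional to `(1 - 2w) * step` and diffusion proportional to `step**2`, so the stationary variance is proportional to `step`, matching the square-root scaling quoted in the abstract.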

  6. Stable and Dynamic Coding for Working Memory in Primate Prefrontal Cortex

    PubMed Central

    Watanabe, Kei; Funahashi, Shintaro; Stokes, Mark G.

    2017-01-01

    Working memory (WM) provides the stability necessary for high-level cognition. Influential theories typically assume that WM depends on the persistence of stable neural representations, yet increasing evidence suggests that neural states are highly dynamic. Here we apply multivariate pattern analysis to explore the population dynamics in primate lateral prefrontal cortex (PFC) during three variants of the classic memory-guided saccade task (recorded in four animals). We observed the hallmark of dynamic population coding across key phases of a working memory task: sensory processing, memory encoding, and response execution. Throughout both these dynamic epochs and the memory delay period, however, the neural representational geometry remained stable. We identified two characteristics that jointly explain these dynamics: (1) time-varying changes in the subpopulation of neurons coding for task variables (i.e., dynamic subpopulations); and (2) time-varying selectivity within neurons (i.e., dynamic selectivity). These results indicate that even in a very simple memory-guided saccade task, PFC neurons display complex dynamics to support stable representations for WM. SIGNIFICANCE STATEMENT Flexible, intelligent behavior requires the maintenance and manipulation of incoming information over various time spans. For short time spans, this faculty is labeled “working memory” (WM). Dominant models propose that WM is maintained by stable, persistent patterns of neural activity in prefrontal cortex (PFC). However, recent evidence suggests that neural activity in PFC is dynamic, even while the contents of WM remain stably represented. Here, we explored the neural dynamics in PFC during a memory-guided saccade task. We found evidence for dynamic population coding in various task epochs, despite striking stability in the neural representational geometry of WM. Furthermore, we identified two distinct cellular mechanisms that contribute to dynamic population coding. PMID:28559375

  7. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    NASA Astrophysics Data System (ADS)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
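
    The decoding stage described above combines frame-wise phoneme likelihoods (from the LDA model) with bigram language-model transition probabilities. A generic log-domain Viterbi decoder of that form, as a standard sketch rather than the authors' code, looks like this:

```python
import numpy as np

def viterbi(log_lik, log_trans, log_init):
    """Viterbi decoding of a phoneme sequence from per-frame log-likelihoods.
    log_lik:   (T, K) frame-wise phoneme log-likelihoods
    log_trans: (K, K) bigram phoneme log transition probabilities
    log_init:  (K,)   initial log probabilities
    Returns the maximum-probability phoneme path as a list of indices."""
    T, K = log_lik.shape
    delta = log_init + log_lik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # [prev, next] path scores
        back[t] = scores.argmax(axis=0)          # best predecessor per state
        delta = scores.max(axis=0) + log_lik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                # trace back the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 2 phonemes, 4 frames; the language model discourages switching
ll = np.log(np.array([[0.9, 0.1], [0.6, 0.4], [0.4, 0.6], [0.1, 0.9]]))
tr = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
init = np.log(np.array([0.5, 0.5]))
print(viterbi(ll, tr, init))   # smoothed path switches once: [0, 0, 1, 1]
```

    The "sticky" transition matrix here plays the role of the n-gram phonemic language model: it smooths frame-wise likelihoods into a temporally coherent phoneme sequence.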

  8. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitable designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  9. Two-photon imaging and analysis of neural network dynamics

    NASA Astrophysics Data System (ADS)

    Lütcke, Henry; Helmchen, Fritjof

    2011-08-01

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.

  10. Decoding Spontaneous Emotional States in the Human Brain

    PubMed Central

    Kragel, Philip A.; Knodt, Annchen R.; Hariri, Ahmad R.; LaBar, Kevin S.

    2016-01-01

    Pattern classification of human brain activity provides unique insight into the neural underpinnings of diverse mental states. These multivariate tools have recently been used within the field of affective neuroscience to classify distributed patterns of brain activation evoked during emotion induction procedures. Here we assess whether neural models developed to discriminate among distinct emotion categories exhibit predictive validity in the absence of exteroceptive emotional stimulation. In two experiments, we show that spontaneous fluctuations in human resting-state brain activity can be decoded into categories of experience delineating unique emotional states that exhibit spatiotemporal coherence, covary with individual differences in mood and personality traits, and predict on-line, self-reported feelings. These findings validate objective, brain-based models of emotion and show how emotional states dynamically emerge from the activity of separable neural systems. PMID:27627738

  11. A modular, closed-loop platform for intracranial stimulation in people with neurological disorders.

    PubMed

    Sarma, Anish A; Crocker, Britni; Cash, Sydney S; Truccolo, Wilson

    2016-08-01

    Neuromodulation systems based on electrical stimulation can be used to investigate, probe, and potentially treat a range of neurological disorders. The effects of ongoing neural state and dynamics on stimulation response, and of stimulation parameters on neural state, have broad implications for the development of closed-loop neuromodulation approaches. We describe the development of a modular, low-latency platform for pre-clinical, closed-loop neuromodulation studies with human participants. We illustrate the uses of the platform in a stimulation case study with a person with epilepsy undergoing neuromonitoring prior to resective surgery. We demonstrate the efficacy of the system by tracking interictal epileptiform discharges in the local field potential to trigger intracranial electrical stimulation, and show that the response to stimulation depends on the neural state.
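
    The closed-loop trigger logic can be sketched minimally as follows. A simple amplitude threshold with a refractory period stands in for the platform's actual discharge detector; the sampling rate, threshold, and signal values are all hypothetical:

```python
import numpy as np

def detect_and_trigger(lfp, fs, threshold, refractory=0.5):
    """Threshold-crossing detector with a refractory period: returns the
    sample indices at which stimulation would be triggered (illustrative
    stand-in logic, not the authors' detector)."""
    triggers = []
    last = -np.inf
    for i, v in enumerate(lfp):
        if abs(v) > threshold and (i - last) / fs >= refractory:
            triggers.append(i)
            last = i
    return triggers

fs = 1000                                          # hypothetical sampling rate (Hz)
rng = np.random.default_rng(0)
lfp = 0.1 * rng.standard_normal(5 * fs)            # 5 s of background activity
for t in [1.0, 1.1, 3.0]:                          # synthetic discharges (s)
    lfp[int(t * fs)] += 2.0
trig = detect_and_trigger(lfp, fs, threshold=1.0)
print(trig)   # the 1.1 s discharge falls inside the refractory window
```

    In a real low-latency system the same decision runs per sample on streaming data, and the refractory period prevents stimulation from re-triggering on its own artifact.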

  12. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    PubMed Central

    Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. 
We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930

  13. Neural network control of focal position during time-lapse microscopy of cells.

    PubMed

    Wei, Ling; Roberts, Elijah

    2018-05-09

    Live-cell microscopy is quickly becoming an indispensable technique for studying the dynamics of cellular processes. Maintaining the specimen in focus during image acquisition is crucial for high-throughput applications, especially for long experiments or when a large sample is being continuously scanned. Automated focus control methods are often expensive, imperfect, or ill-adapted to a specific application and are a bottleneck for widespread adoption of high-throughput, live-cell imaging. Here, we demonstrate a neural network approach for automatically maintaining focus during bright-field microscopy. Z-stacks of yeast cells growing in a microfluidic device were collected and used to train a convolutional neural network to classify images according to their z-position. We studied the effect on prediction accuracy of the various hyperparameters of the neural network, including downsampling, batch size, and z-bin resolution. The network was able to predict the z-position of an image with ±1 μm accuracy, outperforming human annotators. Finally, we used our neural network to control microscope focus in real-time during a 24 hour growth experiment. The method robustly maintained the correct focal position, compensating for 40 μm of focal drift, and was insensitive to changes in the field of view. About 100 annotated z-stacks were required to train the network, making our method quite practical for custom autofocus applications.

  14. Neural attractor network for application in visual field data classification.

    PubMed

    Fink, Wolfgang

    2004-07-07

    The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination that may act as a 'counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data, presented here, furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, particularly with respect to non-ophthalmic clinics or communities where perimetric expertise is not readily available.
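
    The approach can be sketched with a generic Hopfield attractor network: the couplings follow directly from the stored defect templates via a Hebbian rule (advantage 2 above: no training), and the Hamming distance to the nearest stored pattern gives a confidence-style measure (advantage 3). The ±1 templates below are random stand-ins for the idealized scotoma patterns:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian couplings fixed directly by the stored +/-1 templates;
    no iterative training is needed."""
    p = np.asarray(patterns, dtype=float)
    w = p.T @ p / p.shape[1]
    np.fill_diagonal(w, 0.0)      # no self-coupling
    return w

def classify(w, x, patterns, steps=20):
    """Relax a noisy input toward an attractor, then report the nearest
    stored pattern and its Hamming distance."""
    s = np.sign(x).astype(float)
    for _ in range(steps):        # iterative relaxation
        s = np.sign(w @ s)
        s[s == 0] = 1.0
    dists = [int(np.sum(s != p)) for p in patterns]
    return int(np.argmin(dists)), min(dists)

rng = np.random.default_rng(3)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))    # 3 stand-in defect templates
w = train_hopfield(patterns)
noisy = patterns[1].copy()
flip = rng.choice(64, size=8, replace=False)
noisy[flip] *= -1.0                                 # 'noisy' early-stage field
label, dist = classify(w, noisy, patterns)
print(label, dist)
```

    A small final Hamming distance signals a confident match to a stored defect pattern; a large residual distance would flag the case for closer inspection by the physician.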

  15. Modeling and control of magnetorheological fluid dampers using neural networks

    NASA Astrophysics Data System (ADS)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper online, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  16. Dynamic decomposition of spatiotemporal neural signals

    PubMed Central

    2017-01-01

    Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals. PMID:28558039

  17. Neural Networks for Rapid Design and Analysis

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components, recurrent networks are employed, which use state feedback, with the appropriate number of time delays, as inputs to the networks. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.

  18. Efficient digital implementation of a conductance-based globus pallidus neuron and the dynamics analysis

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang

    2018-03-01

    Balance between the biological plausibility of dynamical activities and computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of the conductance-based neuron model with relevant dynamics, aiming to reproduce the biological behaviors with low-cost implementation on a digital programmable platform, which can be applied to a wide range of conductance-based neuron models. Modified globus pallidus (GP) neuron models for efficient hardware implementation are presented to reproduce reliable pallidal dynamics, which decode the information of the basal ganglia and regulate the movement-disorder-related voluntary activities. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models can reduce the resource cost significantly and reproduce the biological dynamics accurately. Besides, the biological behaviors with weak network coupling are explored on the proposed platform, and theoretical analysis is also made for the investigation of biological characteristics of the structured pallidal oscillator and network. The implementation techniques provide an essential step towards the large-scale neural network to explore the dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for the investigation of neurodegenerative diseases and real-time control of bio-inspired neuro-robotics.

  19. A Symbolic and Graphical Computer Representation of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Gould, Laurence I.

    2005-04-01

    AUTONO is a Macsyma/Maxima program, designed at the University of Hartford, for solving autonomous systems of differential equations as well as for relating Lagrangians and Hamiltonians to their associated dynamical equations. AUTONO can be used in a number of fields to decipher a variety of complex dynamical systems with ease, producing their Lagrangian and Hamiltonian equations in seconds. These equations can then be incorporated into VisSim, a modeling and simulation program, which yields graphical representations of motion in a given system through easily chosen input parameters. The program, along with the VisSim differential-equations graphical package, allows for resolution and easy understanding of complex problems in a relatively short time, thus enabling quicker and more advanced computing of dynamical systems on any number of platforms: from a network of sensors on a space probe, to the behavior of neural networks, to the effects of an electromagnetic field on components in a dynamical system. A flowchart of AUTONO, along with some simple applications and VisSim output, will be shown.

  20. Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.

    NASA Astrophysics Data System (ADS)

    Sasaki, Hironori

    This dissertation describes the analysis of the photorefractive crystal dynamics and its application for opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. An incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results proved the superiority of the incremental recording technique over scheduled recording. A novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings when the memory is accessed. Gratings are circulated through a memory feedback loop based on the incremental recording dynamics, demonstrating robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. A module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out.
The module system scalability and the learning capabilities are theoretically investigated using the photorefractive dynamics described in previous chapters of the dissertation. The implementation of the feed-forward image compression network with 900 input and 9 output neurons with 6-bit interconnection accuracy is experimentally demonstrated. Learning of the Perceptron network that determines sex based on input face images of 900 pixels is also successfully demonstrated.
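
    The incremental recording schedule admits a compact numerical illustration. Below is a toy first-order write/erase model (a sketch with illustrative constants, not taken from the dissertation): each exposure writes the addressed grating toward saturation while partially erasing all others, and cycling short, equal exposures over the gratings converges to a nearly uniform steady state of roughly A_SAT / n_gratings per grating.

```python
import numpy as np

A_SAT, TAU = 1.0, 1.0   # illustrative saturation amplitude and write/erase time constant

def expose(amps, k, dt):
    """One exposure of length dt: grating k is written, all others are erased."""
    old = amps[k]
    amps = amps * np.exp(-dt / TAU)                       # partial erasure of every grating
    amps[k] = A_SAT + (old - A_SAT) * np.exp(-dt / TAU)   # addressed grating grows toward A_SAT
    return amps

def incremental_recording(n_gratings=10, dt=0.005, n_cycles=400):
    """Cycle short, equal exposures over all gratings (incremental schedule)."""
    amps = np.zeros(n_gratings)
    for _ in range(n_cycles):
        for k in range(n_gratings):
            amps = expose(amps, k, dt)
    return amps
```

    With dt much shorter than TAU, the residual spread between gratings is set only by the intra-cycle decay, which is the uniformity property claimed for incremental over one-pass recording.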

  1. Generalizing the dynamic field theory of spatial cognition across real and developmental time scales

    PubMed Central

    Simmering, Vanessa R.; Spencer, John P.; Schutte, Anne R.

    2008-01-01

    Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective. PMID:17716632

  2. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
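
    The modeling approach can be illustrated with a minimal discrete-time sketch (our own toy construction, not the authors' code): the probability of a spike in each time bin depends on the recent spike history through a logistic link, and a likelihood ratio statistic compares the history-dependent model against a history-free one. For brevity the history model is evaluated at its generating parameters rather than at a fitted maximum-likelihood estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(beta0, hist, n, rng):
    """Discrete-time point process: P(spike in bin t) depends on recent history."""
    J, s = len(hist), np.zeros(n, dtype=int)
    for t in range(n):
        past = s[max(0, t - J):t][::-1]               # most recent bin first
        s[t] = rng.random() < sigmoid(beta0 + hist[:len(past)] @ past)
    return s

def loglik(beta0, hist, s):
    J, ll = len(hist), 0.0
    for t in range(len(s)):
        past = s[max(0, t - J):t][::-1]
        p = sigmoid(beta0 + hist[:len(past)] @ past)
        ll += np.log(p) if s[t] else np.log(1.0 - p)
    return ll

# Ground truth: brief post-spike inhibition (a refractory-like history effect).
true_hist = np.array([-3.0, -1.5, -0.5])
s = simulate(-2.0, true_hist, 5000, rng)

# Likelihood ratio of the history model (evaluated, for brevity, at the
# generating parameters instead of a fitted MLE) vs. a history-free model.
p0 = s.mean()
ll0 = len(s) * (p0 * np.log(p0) + (1 - p0) * np.log(1 - p0))
lr = 2.0 * (loglik(-2.0, true_hist, s) - ll0)   # compare to a chi-square(3) cutoff
```

    A large value of `lr` relative to the chi-square cutoff signals that spike history significantly shapes the firing structure, the kind of change the adaptive filters then track through time.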

  3. Neurodynamics With Spatial Self-Organization

    NASA Technical Reports Server (NTRS)

    Zak, Michail A.

    1993-01-01

    Report presents theoretical study of dynamics of a neural network that organizes its own response in both phase space and position space. Postulates several mathematical models of the dynamics, including spatial derivatives representing local interconnections among neurons. Shows how neural responses propagate via these interconnections and how a spatial pattern of neural responses is formed in a homogeneous biological neural network.

  4. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    PubMed Central

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  5. Brain-wave Dynamics Related to Cognitive Tasks and Neurofeedback Information Flow

    NASA Astrophysics Data System (ADS)

    Pop-Jordanova, Nada; Pop-Jordanov, Jordan; Dimitrovski, Darko; Markovska, Natasa

    2003-08-01

    Synchronization of oscillating neuronal discharges has recently been correlated with the moment of perception and the ensuing motor response, with the transition between these two cognitive acts occurring "through cellular mechanisms that remain to be established" [1]. Last year, using genetic strategies, it was found that switching off persistent electric activity in the brain blocks memory recall [2]. On the other hand, analyzing mental-neural information flow, the Nobel laureate Eccles formulated the fundamental hypothesis that mental events may change the probability of quantum vesicular emission of transmitters, analogously to probability functions in quantum mechanics [3]. Applying advanced quantum modeling to molecular rotational states exposed to electric activity in brain cells, we found that the probability of transitions does not depend on the field amplitude, suggesting the electric field frequency as the possible information-bearing physical quantity [4]. In this paper, an attempt is made to inter-correlate the above results on frequency aspects of neural transitions induced by cognitive tasks. Furthermore, by considering the consecutive steps of mental-neural information flow during biofeedback training to normalize EEG frequencies, rationales for neurofeedback efficiency are deduced.

  6. Learning of spatio-temporal codes in a coupled oscillator system.

    PubMed

    Orosz, Gábor; Ashwin, Peter; Townley, Stuart

    2009-07-01

    In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
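
    The frequency-adaptation idea can be sketched with a single teaching/learning oscillator pair, a drastic simplification of the paper's multi-oscillator cluster dynamics (all parameters here are illustrative): the learner phase-locks to the teacher and slowly pulls its natural frequency toward the teacher's.

```python
import numpy as np

def learn_frequency(w_teach=2.0, w0=0.5, k=2.0, eps=0.5, dt=0.01, steps=20000):
    """Teacher at fixed frequency; learner phase-locks and adapts its frequency."""
    th_t, th_l, w_l = 0.0, 0.0, w0
    for _ in range(steps):
        dphi = th_t - th_l                      # teacher/learner phase difference
        th_t += dt * w_teach
        th_l += dt * (w_l + k * np.sin(dphi))   # phase coupling to the teacher
        w_l += dt * eps * np.sin(dphi)          # slow frequency adaptation
    return w_l
```

    While phase-locked, sin(dphi) ≈ (w_teach − w_l)/k, so the frequency error decays exponentially at rate eps/k and the learner recovers the teacher's frequency.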

  7. Cellular Level Brain Imaging in Behaving Mammals: An Engineering Approach

    PubMed Central

    Hamel, Elizabeth J.O.; Grewe, Benjamin F.; Parker, Jones G.; Schnitzer, Mark J.

    2017-01-01

    Fluorescence imaging offers expanding capabilities for recording neural dynamics in behaving mammals, including the means to monitor hundreds of cells targeted by genetic type or connectivity, track cells over weeks, densely sample neurons within local microcircuits, study cells too inactive to isolate in extracellular electrical recordings, and visualize activity in dendrites, axons, or dendritic spines. We discuss recent progress and future directions for imaging in behaving mammals from a systems engineering perspective, which seeks holistic consideration of fluorescent indicators, optical instrumentation, and computational analyses. Today, genetically encoded indicators of neural Ca2+ dynamics are widely used, and those of trans-membrane voltage are rapidly improving. Two complementary imaging paradigms involve conventional microscopes for studying head-restrained animals and head-mounted miniature microscopes for imaging in freely behaving animals. Overall, the field has attained sufficient sophistication that increased cooperation between those designing new indicators, light sources, microscopes, and computational analyses would greatly benefit future progress. PMID:25856491

  8. Dynamic Skin Patterns in Cephalopods

    PubMed Central

    How, Martin J.; Norman, Mark D.; Finn, Julian; Chung, Wen-Sung; Marshall, N. Justin

    2017-01-01

    Cephalopods are unrivaled in the natural world in their ability to alter their visual appearance. These mollusks have evolved a complex system of dermal units under neural, hormonal, and muscular control to produce an astonishing variety of body patterns. With parallels to the pixels on a television screen, cephalopod chromatophores can be coordinated to produce dramatic, dynamic, and rhythmic displays, defined collectively here as “dynamic patterns.” This study examines the nature, context, and potential functions of dynamic patterns across diverse cephalopod taxa. Examples are presented for 21 species, including 11 previously unreported in the scientific literature. These range from simple flashing or flickering patterns, to highly complex passing wave patterns involving multiple skin fields. PMID:28674500

  9. Dissipation of ‘dark energy’ by cortex in knowledge retrieval

    NASA Astrophysics Data System (ADS)

    Capolupo, Antonio; Freeman, Walter J.; Vitiello, Giuseppe

    2013-03-01

    We have devised a thermodynamic model of cortical neurodynamics expressed at the classical level by neural networks and at the quantum level by dissipative quantum field theory. Our model is based on features in the spatial images of cortical activity newly revealed by high-density electrode arrays. We have incorporated the mechanism and necessity for so-called dark energy in knowledge retrieval. We have extended the model first using the Carnot cycle to define our measures for energy, entropy and temperature, and then using the Rankine cycle to incorporate criticality and phase transitions. We describe the dynamics of two interactive fields of neural activity that express knowledge, one at high and the other at low energy density, and the two operators that create and annihilate the fields. We postulate that the extremely high density of energy sequestered briefly in cortical activity patterns can account for the vividness, richness of associations, and emotional intensity of memories recalled by stimuli.

  10. Dissipation of 'dark energy' by cortex in knowledge retrieval.

    PubMed

    Capolupo, Antonio; Freeman, Walter J; Vitiello, Giuseppe

    2013-03-01

    We have devised a thermodynamic model of cortical neurodynamics expressed at the classical level by neural networks and at the quantum level by dissipative quantum field theory. Our model is based on features in the spatial images of cortical activity newly revealed by high-density electrode arrays. We have incorporated the mechanism and necessity for so-called dark energy in knowledge retrieval. We have extended the model first using the Carnot cycle to define our measures for energy, entropy and temperature, and then using the Rankine cycle to incorporate criticality and phase transitions. We describe the dynamics of two interactive fields of neural activity that express knowledge, one at high and the other at low energy density, and the two operators that create and annihilate the fields. We postulate that the extremely high density of energy sequestered briefly in cortical activity patterns can account for the vividness, richness of associations, and emotional intensity of memories recalled by stimuli.

  11. Different propagation speeds of recalled sequences in plastic spiking neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas and are crucial, for example, for coding episodic memory in the hippocampus and for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently recalled by a local, transient trigger. The speed of activity propagation, in coordinates of the retinotopically organized neural tissue, was constant during retrieval regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. Spike-timing dependent plasticity (STDP) is a well-known candidate mechanism for embedding temporal sequences into neural network activity; how training and retrieval speeds relate to each other, and how network and learning parameters influence retrieval speeds, has not been well described, however. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1, since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might instead exhibit recall speeds that follow changes in stimulus speed. This prediction could be tested in experiments.
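
    The multiplicative nearest-neighbor rule favored by this analysis can be sketched as follows (constants are illustrative, not fitted values from the paper): each postsynaptic spike is paired only with the nearest presynaptic spike and vice versa, potentiation scales with the remaining headroom (W_MAX − w), and depression scales with the weight itself.

```python
import numpy as np

# Illustrative constants (ms and dimensionless weight units), not fitted values.
TAU_P, TAU_M, A_P, A_M, W_MAX = 20.0, 20.0, 0.05, 0.06, 1.0

def stdp_step(w, dt):
    """Weight after one nearest-neighbour pre/post pairing; dt = t_post - t_pre (ms)."""
    if dt >= 0.0:  # pre before post: potentiation, scaled by headroom (W_MAX - w)
        return w + A_P * (W_MAX - w) * np.exp(-dt / TAU_P)
    else:          # post before pre: depression, scaled by the weight itself
        return w - A_M * w * np.exp(dt / TAU_M)
```

    The multiplicative scaling keeps weights in [0, W_MAX] and, under repeated pairings at a fixed lag, drives them toward a stable equilibrium where potentiation and depression balance.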

  12. Towards multifocal ultrasonic neural stimulation: pattern generation algorithms

    NASA Astrophysics Data System (ADS)

    Hertzberg, Yoni; Naor, Omer; Volovick, Alexander; Shoham, Shy

    2010-10-01

    Focused ultrasound (FUS) waves directed onto neural structures have been shown to dynamically modulate neural activity and excitability, opening up a range of possible systems and applications where the non-invasiveness, safety, mm-range resolution and other characteristics of FUS are advantageous. As in other neuro-stimulation and modulation modalities, the highly distributed and parallel nature of neural systems and neural information processing call for the development of appropriately patterned stimulation strategies which could simultaneously address multiple sites in flexible patterns. Here, we study the generation of sparse multi-focal ultrasonic distributions using phase-only modulation in ultrasonic phased arrays. We analyse the relative performance of an existing algorithm for generating multifocal ultrasonic distributions and new algorithms that we adapt from the field of optical digital holography, and find that generally the weighted Gerchberg-Saxton algorithm leads to overall superior efficiency and uniformity in the focal spots, without significantly increasing the computational burden. By combining phased-array FUS and magnetic-resonance thermometry we experimentally demonstrate the simultaneous generation of tightly focused multifocal distributions in a tissue phantom, a first step towards patterned FUS neuro-modulation systems and devices.
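
    The weighted Gerchberg-Saxton procedure can be sketched in a simplified far-field model in which the focal plane is the 2-D FFT of a phase-only source plane (no transducer geometry or tissue model; the grid size, focus positions, and iteration count are illustrative). Each iteration re-imposes the multifocal target in the focal plane, boosting per-focus weights for under-served foci to equalize the spots, and re-imposes the phase-only constraint in the source plane.

```python
import numpy as np

def weighted_gs(shape=(64, 64), foci=((10, 12), (30, 40), (50, 20)), iters=40):
    """Phase-only multifocal hologram via weighted Gerchberg-Saxton (far-field model)."""
    rng = np.random.default_rng(1)
    rows, cols = np.array(foci).T
    field = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, shape))  # random initial phase
    w = np.ones(len(foci))                                     # per-focus weights
    for _ in range(iters):
        focal = np.fft.fft2(field)
        amp = np.abs(focal[rows, cols])
        w *= amp.mean() / np.maximum(amp, 1e-12)               # boost under-served foci
        target = np.zeros(shape, dtype=complex)
        target[rows, cols] = w * np.exp(1j * np.angle(focal[rows, cols]))
        field = np.exp(1j * np.angle(np.fft.ifft2(target)))    # phase-only constraint
    return np.angle(field)
```

    The returned phase pattern plays the role of the phases programmed on the array elements: its 2-D FFT concentrates energy into the requested foci with near-equal intensities.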

  13. Chronic multisite brain recordings from a totally implantable bidirectional neural interface: experience in 5 patients with Parkinson's disease.

    PubMed

    Swann, Nicole C; de Hemptinne, Coralie; Miocinovic, Svjetlana; Qasim, Salman; Ostrem, Jill L; Galifianakis, Nicholas B; Luciano, Marta San; Wang, Sarah S; Ziman, Nathan; Taylor, Robin; Starr, Philip A

    2018-02-01

    OBJECTIVE: Dysfunction of distributed neural networks underlies many brain disorders. The development of neuromodulation therapies depends on a better understanding of these networks. Invasive human brain recordings have a favorable temporal and spatial resolution for the analysis of network phenomena but have generally been limited to acute intraoperative recording or short-term recording through temporarily externalized leads. Here, the authors describe their initial experience with an investigational, first-generation, totally implantable, bidirectional neural interface that allows both continuous therapeutic stimulation and recording of field potentials at multiple sites in a neural network. METHODS: Under a physician-sponsored US Food and Drug Administration investigational device exemption, 5 patients with Parkinson's disease were implanted with the Activa PC+S system (Medtronic Inc.). The device was attached to a quadripolar lead placed in the subdural space over motor cortex, for electrocorticography potential recordings, and to a quadripolar lead in the subthalamic nucleus (STN), for both therapeutic stimulation and recording of local field potentials. Recordings from the brain of each patient were performed at multiple time points over a 1-year period. RESULTS: There were no serious surgical complications or interruptions in deep brain stimulation therapy. Signals in both the cortex and the STN were relatively stable over time, despite a gradual increase in electrode impedance. Canonical movement-related changes in specific frequency bands in the motor cortex were identified in most but not all recordings. CONCLUSIONS: The acquisition of chronic multisite field potentials in humans is feasible. The device performance characteristics described here may inform the design of the next generation of totally implantable neural interfaces. This research tool provides a platform for translating discoveries in brain network dynamics to improved neurostimulation paradigms. Clinical trial registration no.: NCT01934296 (clinicaltrials.gov).

  14. Autonomy in Action: Linking the Act of Looking to Memory Formation in Infancy via Dynamic Neural Fields

    ERIC Educational Resources Information Center

    Perone, Sammy; Spencer, John P.

    2013-01-01

    Looking is a fundamental exploratory behavior by which infants acquire knowledge about the world. In theories of infant habituation, however, looking as an exploratory behavior has been deemphasized relative to the reliable nature with which looking indexes active cognitive processing. We present a new theory that connects looking to the dynamics…

  15. Neural Dynamics of Object-Based Multifocal Visual Spatial Attention and Priming: Object Cueing, Useful-Field-of-View, and Crowding

    ERIC Educational Resources Information Center

    Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio

    2012-01-01

    How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued…

  16. Vertically aligned carbon nanofiber as nano-neuron interface for monitoring neural function.

    PubMed

    Yu, Zhe; McKnight, Timothy E; Ericson, M Nance; Melechko, Anatoli V; Simpson, Michael L; Morrison, Barclay

    2012-05-01

    Neural chips, which are capable of simultaneous multisite neural recording and stimulation, have been used to detect and modulate neural activity for almost thirty years. As neural interfaces, neural chips provide dynamic functional information for neural decoding and neural control. By improving sensitivity and spatial resolution, nano-scale electrodes may revolutionize neural detection and modulation at cellular and molecular levels as nano-neuron interfaces. We developed a carbon-nanofiber neural chip with lithographically defined arrays of vertically aligned carbon nanofiber electrodes and demonstrated its capability of both stimulating and monitoring electrophysiological signals from brain tissues in vitro and monitoring dynamic information of neuroplasticity. This novel nano-neuron interface may potentially serve as a precise, informative, biocompatible, and dual-mode neural interface for monitoring of both neuroelectrical and neurochemical activity at the single-cell level and even inside the cell. The authors demonstrate the utility of a neural chip with lithographically defined arrays of vertically aligned carbon nanofiber electrodes. The new device can be used to stimulate and/or monitor signals from brain tissue in vitro and for monitoring dynamic information of neuroplasticity both intracellularly and at the single cell level including neuroelectrical and neurochemical activities.

  17. Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators.

    PubMed

    Xu, Kesheng; Maidana, Jean Paul; Castro, Samy; Orio, Patricio

    2018-05-30

    Chaotic dynamics have been observed in neurons and neural networks, both in experimental data and in numerical simulations, and theoretical studies have proposed an underlying role of chaos in neural systems. Nevertheless, whether chaotic neural oscillators make a significant contribution to network behaviour, and whether the dynamical richness of neural networks is sensitive to the dynamics of isolated neurons, remain open questions. We investigated synchronization transitions in heterogeneous neural networks of electrically coupled neurons in a small-world topology. The nodes in our model are oscillatory neurons that, when isolated, can exhibit either chaotic or non-chaotic behaviour, depending on conductance parameters. We found that heterogeneity of firing rates and firing patterns makes a greater contribution than chaos to the steepness of the synchronization transition curve. We also show that chaotic dynamics of the isolated neurons does not always make a visible difference in the transition to full synchrony. Moreover, macroscopic chaos is observed regardless of the nature of the individual neurons' dynamics. However, a Functional Connectivity Dynamics analysis shows that chaotic nodes can promote what is known as multi-stable behaviour, in which the network dynamically switches between a number of different semi-synchronized, metastable states.
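
    As a schematic companion to this setup (the study uses conductance-based neurons with electrical coupling; here, as a simplified stand-in, phase oscillators with heterogeneous natural frequencies on a Watts-Strogatz-style graph), the sketch below shows the synchronization transition as coupling strength increases. Phases start near-uniform so that the locked branch, rather than a twisted state, is reached at strong coupling. All parameters are illustrative.

```python
import numpy as np

def small_world(n, k, p, rng):
    """Ring lattice with k neighbours per side; each edge rewired with probability p."""
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k + 1):
            t = (i + j) % n
            if rng.random() < p:
                t = int(rng.integers(n))   # rewire to a random node
            if t != i:
                A[i, t] = A[t, i] = True
    return A

def order_parameter(K, A, omega, theta0, steps=4000, dt=0.01):
    """Integrate Kuramoto dynamics on graph A; return r = |mean exp(i*theta)|."""
    theta = theta0.copy()
    deg = np.maximum(A.sum(1), 1)
    for _ in range(steps):
        drive = (A * np.sin(theta[None, :] - theta[:, None])).sum(1) / deg
        theta += dt * (omega + K * drive)
    return float(np.abs(np.exp(1j * theta).mean()))

rng = np.random.default_rng(2)
n = 60
A = small_world(n, k=6, p=0.2, rng=rng)
omega = rng.normal(0.0, 0.5, n)        # heterogeneous natural frequencies
theta0 = rng.uniform(-0.5, 0.5, n)     # near-uniform start avoids twisted states
r_weak = order_parameter(0.1, A, omega, theta0)
r_strong = order_parameter(3.0, A, omega, theta0)
```

    With weak coupling the frequency heterogeneity disperses the phases (small r); with strong coupling the network locks (r near 1). Steepness of r versus K is the transition curve the study relates to node heterogeneity and chaos.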

  18. Prefrontal Cortex Networks Shift from External to Internal Modes during Learning.

    PubMed

    Brincat, Scott L; Miller, Earl K

    2016-09-14

    As we learn about items in our environment, their neural representations become increasingly enriched with our acquired knowledge. But there is little understanding of how network dynamics and neural processing related to external information changes as it becomes laden with "internal" memories. We sampled spiking and local field potential activity simultaneously from multiple sites in the lateral prefrontal cortex (PFC) and the hippocampus (HPC)-regions critical for sensory associations-of monkeys performing an object paired-associate learning task. We found that in the PFC, evoked potentials to, and neural information about, external sensory stimulation decreased while induced beta-band (∼11-27 Hz) oscillatory power and synchrony associated with "top-down" or internal processing increased. By contrast, the HPC showed little evidence of learning-related changes in either spiking activity or network dynamics. The results suggest that during associative learning, PFC networks shift their resources from external to internal processing. As we learn about items in our environment, their representations in our brain become increasingly enriched with our acquired "top-down" knowledge. We found that in the prefrontal cortex, but not the hippocampus, processing of external sensory inputs decreased while internal network dynamics related to top-down processing increased. The results suggest that during learning, prefrontal cortex networks shift their resources from external (sensory) to internal (memory) processing.

  19. Spatial-area selective retrieval of multiple object-place associations in a hierarchical cognitive map formed by theta phase coding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2009-06-01

    The human cognitive map is known to be hierarchically organized, consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, but the underlying neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that hierarchical cognitive maps have computational advantages for spatial-area-selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.

  20. Prefrontal Cortex Networks Shift from External to Internal Modes during Learning

    PubMed Central

    Brincat, Scott L.

    2016-01-01

    As we learn about items in our environment, their neural representations become increasingly enriched with our acquired knowledge. But there is little understanding of how network dynamics and neural processing related to external information changes as it becomes laden with “internal” memories. We sampled spiking and local field potential activity simultaneously from multiple sites in the lateral prefrontal cortex (PFC) and the hippocampus (HPC)—regions critical for sensory associations—of monkeys performing an object paired-associate learning task. We found that in the PFC, evoked potentials to, and neural information about, external sensory stimulation decreased while induced beta-band (∼11–27 Hz) oscillatory power and synchrony associated with “top-down” or internal processing increased. By contrast, the HPC showed little evidence of learning-related changes in either spiking activity or network dynamics. The results suggest that during associative learning, PFC networks shift their resources from external to internal processing. SIGNIFICANCE STATEMENT As we learn about items in our environment, their representations in our brain become increasingly enriched with our acquired “top-down” knowledge. We found that in the prefrontal cortex, but not the hippocampus, processing of external sensory inputs decreased while internal network dynamics related to top-down processing increased. The results suggest that during learning, prefrontal cortex networks shift their resources from external (sensory) to internal (memory) processing. PMID:27629722

  1. Application of dynamic recurrent neural networks in nonlinear system identification

    NASA Astrophysics Data System (ADS)

    Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang

    2006-11-01

    An adaptive identification method using a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. The method rests on the idea that feeding back the internal states of a dynamic network to describe the nonlinear characteristics of a system reflects its dynamics more directly. On this basis, the recursive prediction error (RPE) learning algorithm for the SRNN is derived, and the algorithm is improved by adopting a recursion-layer topology without weight values. Simulation results indicate that this kind of neural network can be used in real-time control owing to its few weight values, simpler learning algorithm, faster identification, and higher model precision. It avoids the intricate training algorithms and slow convergence caused by the complicated topologies of conventional dynamic recurrent neural networks.

  2. Cognitive Flexibility through Metastable Neural Dynamics Is Disrupted by Damage to the Structural Connectome.

    PubMed

    Hellyer, Peter J; Scott, Gregory; Shanahan, Murray; Sharp, David J; Leech, Robert

    2015-06-17

    Current theory proposes that healthy neural dynamics operate in a metastable regime, where brain regions interact to simultaneously maximize integration and segregation. Metastability may confer important behavioral properties, such as cognitive flexibility. It is increasingly recognized that neural dynamics are constrained by the underlying structural connections between brain regions. An important challenge is, therefore, to relate structural connectivity, neural dynamics, and behavior. Traumatic brain injury (TBI) is a pre-eminent structural disconnection disorder whereby traumatic axonal injury damages large-scale connectivity, producing characteristic cognitive impairments, including slowed information processing speed and reduced cognitive flexibility, that may be a result of disrupted metastable dynamics. Therefore, TBI provides an experimental and theoretical model to examine how metastable dynamics relate to structural connectivity and cognition. Here, we use complementary empirical and computational approaches to investigate how metastability arises from the healthy structural connectome and relates to cognitive performance. We found reduced metastability in large-scale neural dynamics after TBI, measured with resting-state functional MRI. This reduction in metastability was associated with damage to the connectome, measured using diffusion MRI. Furthermore, decreased metastability was associated with reduced cognitive flexibility and information processing. A computational model, defined by empirically derived connectivity data, demonstrates how behaviorally relevant changes in neural dynamics result from structural disconnection. Our findings suggest how metastable dynamics are important for normal brain function and contingent on the structure of the human connectome.

  3. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  4. Neural field model of memory-guided search.

    PubMed

    Kilpatrick, Zachary P; Poll, Daniel B

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
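
The bump-attractor mechanism in the position layer can be illustrated with a minimal one-dimensional Amari-type field on a ring; the kernel, threshold, and step sizes below are hypothetical choices for illustration, not the paper's parameters:

```python
import numpy as np

def amari_bump(n=256, steps=600, dt=0.1, theta=0.5):
    """1-D Amari neural field on a ring: du/dt = -u + w * H(u - theta).
    A transient cue leaves a self-sustained localized 'bump' of activity."""
    x = np.linspace(-np.pi, np.pi, n, endpoint=False)
    dx = 2 * np.pi / n
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 2 * np.pi - d)              # distance on the ring
    w = 3.0 * np.exp(-d**2 / (2 * 0.3**2)) - 0.5  # local excitation, broad inhibition
    u = np.exp(-x**2 / 0.1)                       # brief cue centered at x = 0
    for _ in range(steps):
        f = (u > theta).astype(float)             # Heaviside firing rate
        u += dt * (-u + (w @ f) * dx)             # Euler step; no external input
    return x, u

x, u = amari_bump()
```

A brief cue seeds a localized bump that persists once input is removed; adding a small asymmetric input term would translate the bump, playing the role of the velocity input described above.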

  5. Wide-field depth-sectioning fluorescence microscopy using projector-generated patterned illumination

    NASA Astrophysics Data System (ADS)

    Delica, Serafin; Mar Blanca, Carlo

    2007-10-01

    We present a simple and cost-effective wide-field, depth-sectioning, fluorescence microscope utilizing a commercial multimedia projector to generate excitation patterns on the sample. Highly resolved optical sections of fluorescent pollen grains at 1.9 μm axial resolution are constructed using the structured illumination technique. This requires grid excitation patterns to be scanned across the sample, which is straightforwardly implemented by creating slideshows of gratings at different phases, projecting them onto the sample, and synchronizing camera acquisition with slide transition. In addition to rapid dynamic pattern generation, the projector provides high illumination power and spectral excitation selectivity. We exploit these properties by imaging mouse neural cells in cultures multistained with Alexa 488 and Cy3. The spectral and structural neural information is effectively resolved in three dimensions. The flexibility and commercial availability of this light source is envisioned to open multidimensional imaging to a broader user base.
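
The grid-projection scheme described above is the classic three-phase structured-illumination sectioning method (Neil et al., Opt. Lett. 1997): only in-focus light is modulated by the grid, so three frames at grid phases 0, 2π/3 and 4π/3 suffice to cancel the out-of-focus background. A synthetic one-dimensional sketch, with an invented feature and background level:

```python
import numpy as np

def section(i0, i1, i2):
    """Optical section from three grid phases (0, 2pi/3, 4pi/3). For frames
    I_p = B + S*(1 + cos(k*x + p)) the RMS of pairwise differences equals
    sqrt(4.5)*S, so the unmodulated background B cancels exactly."""
    return np.sqrt((i0 - i1)**2 + (i1 - i2)**2 + (i2 - i0)**2) / np.sqrt(4.5)

x = np.linspace(0.0, 1.0, 512)
in_focus = np.exp(-(x - 0.5)**2 / 0.005)   # hypothetical in-focus feature
background = 2.0                           # out-of-focus light (not modulated)
frames = [background + in_focus * (1 + np.cos(2 * np.pi * 20 * x + p))
          for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
recovered = section(*frames)               # equals in_focus; background removed
```

In the microscope, the three phase-shifted grids are simply three slides in the projector's slideshow, with camera exposure synchronized to each slide.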

  6. Neural field theory of perceptual echo and implications for estimating brain connectivity

    NASA Astrophysics Data System (ADS)

    Robinson, P. A.; Pagès, J. C.; Gabay, N. C.; Babaie, T.; Mukta, K. N.

    2018-04-01

    Neural field theory is used to predict and analyze the phenomenon of perceptual echo in which random input stimuli at one location are correlated with electroencephalographic responses at other locations. It is shown that this echo correlation (EC) yields an estimate of the transfer function from the stimulated point to other locations. Modal analysis then explains the observed spatiotemporal structure of visually driven EC and the dominance of the alpha frequency; two eigenmodes of similar amplitude dominate the response, leading to temporal beating and a line of low correlation that runs from the crown of the head toward the ears. These effects result from mode splitting and symmetry breaking caused by interhemispheric coupling and cortical folding. It is shown how eigenmodes obtained from functional magnetic resonance imaging experiments can be combined with temporal dynamics from EC or other evoked responses to estimate the spatiotemporal transfer function between any two points and hence their effective connectivity.
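
The principle behind echo correlation, that correlating a random stimulus with the response estimates the transfer function from the stimulated point, can be sketched in discrete time with an invented filter and noise level: for a white stimulus, the stimulus-response cross-correlation at lag k recovers the impulse response h[k].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
stim = rng.standard_normal(n)             # broadband (white) random stimulus
h_true = np.array([0.0, 0.5, 0.3, -0.2])  # hypothetical impulse response
resp = np.convolve(stim, h_true)[:n] + 0.1 * rng.standard_normal(n)  # noisy response

# For white input, E[resp[t] * stim[t-k]] = h[k] * var(stim), so the
# normalized stimulus-response cross-correlation estimates h:
h_est = np.array([np.dot(resp[k:], stim[:n - k]) for k in range(4)]) / np.dot(stim, stim)
```

The frequency-domain transfer function is then just the Fourier transform of the estimated impulse response.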

  7. Neural field model of memory-guided search

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Zachary P.; Poll, Daniel B.

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.

  8. Global neural dynamic surface tracking control of strict-feedback systems with application to hypersonic flight vehicle.

    PubMed

    Xu, Bin; Yang, Chenguang; Pan, Yongping

    2015-10-01

    This paper studies both indirect and direct global neural control of strict-feedback systems in the presence of unknown dynamics, using the dynamic surface control (DSC) technique in a novel manner. A new switching mechanism is designed to combine an adaptive neural controller in the neural approximation domain with a robust controller that pulls transient states back into the neural approximation domain from outside. In comparison with conventional control techniques, which achieve only semiglobally uniformly ultimately bounded stability, the proposed control scheme guarantees that all signals in the closed-loop system are globally uniformly ultimately bounded, so that the conventional constraints on initial conditions of the neural control system can be relaxed. Simulation studies of a hypersonic flight vehicle (HFV) are performed to demonstrate the effectiveness of the proposed global neural DSC design.

  9. A biologically inspired neural network for dynamic programming.

    PubMed

    Francelin Romero, R A; Kacpryzk, J; Gomide, F

    2001-12-01

    An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's optimality principle and on the interchange of information that occurs during the synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including shortest-path and fuzzy decision-making problems.
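
Independent of its neural implementation, the recursion the network's weights encode is Bellman's optimality principle, V(i) = min over successors j of [cost(i, j) + V(j)]; a minimal serial sketch on an invented shortest-path instance:

```python
import math

def shortest_costs(edges, n, target):
    """Bellman's optimality principle: V(i) = min_j [cost(i, j) + V(j)].
    Iterating the update to a fixed point gives least costs to `target`."""
    v = [math.inf] * n
    v[target] = 0.0
    for _ in range(n - 1):           # n-1 relaxation sweeps always suffice
        for i, j, c in edges:
            v[i] = min(v[i], c + v[j])
    return v

# Hypothetical 4-node graph: the best route 0 -> 1 -> 2 -> 3 costs 4.0
edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 2.0), (1, 3, 6.0), (2, 3, 1.0)]
v = shortest_costs(edges, 4, target=3)
```

The neural formulation evaluates all the `min` relaxations of a sweep in parallel, which is the parallelism advantage the abstract refers to.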

  10. Twenty years and going strong: A dynamic systems revolution in motor and cognitive development

    PubMed Central

    Spencer, John P.; Perone, Sammy; Buss, Aaron T.

    2011-01-01

    This article reviews the major contributions of dynamic systems theory in advancing thinking about development, the empirical insights the theory has generated, and the key challenges for the theory on the horizon. The first section discusses the emergence of dynamic systems theory in developmental science, the core concepts of the theory, and the resonance it has with other approaches that adopt a systems metatheory. The second section reviews the work of Esther Thelen and colleagues, who revolutionized how researchers think about the field of motor development. It also reviews recent extensions of this work to the domain of cognitive development. Here, the focus is on dynamic field theory, a formal, neurally grounded approach that has yielded novel insights into the embodied nature of cognition. The final section proposes that the key challenge on the horizon is to formally specify how interactions among multiple levels of analysis interact across multiple time scales to create developmental change. PMID:22125575

  11. Nonlinear neural control with power systems applications

    NASA Astrophysics Data System (ADS)

    Chen, Dingguo

    1998-12-01

    Extensive studies have been undertaken on the transient stability of large interconnected power systems with flexible ac transmission systems (FACTS) devices installed. A variety of control methodologies have been proposed to stabilize the postfault system, which would otherwise eventually lose stability without proper control. Generally speaking, regular transient stability is well understood, but the mechanism of load-driven voltage instability or voltage collapse is not. The interaction of generator dynamics and load dynamics makes synthesis of stabilizing controllers even more challenging. There is currently increasing interest in research on neural networks as identifiers and controllers for dealing with dynamic time-varying nonlinear systems. This study focuses on the development of novel artificial neural network architectures for identification and control, with application to dynamic electric power systems, so that the stability of interconnected power systems, following large disturbances and/or with the inclusion of uncertain loads, can be greatly enhanced and stable operation guaranteed. The latitudinal neural network architecture is proposed for the purpose of system identification. It may be used for identification of nonlinear static/dynamic loads, which can be further used for static/dynamic voltage stability analysis. The properties associated with this architecture are investigated. A neural network methodology is proposed for dealing with load modeling and voltage stability analysis. Based on the neural network models of loads, voltage stability analysis evolves, and modal analysis is performed. Simulation results are also provided. The transient stability problem is studied with consideration of load effects, and a hierarchical neural control scheme is developed. A trajectory-following policy is used so that the hierarchical neural controller performs almost as well in non-nominal cases as in nominal cases. An adaptive hierarchical neural control scheme is also proposed to deal with the time-varying nature of loads. Further, adaptive neural control, based on on-line updating of the weights and biases of the neural networks, is studied. Simulations on faulted power systems with unknown loads suggest that the proposed adaptive hierarchical neural control schemes should be useful for practical power applications.

  12. Standard representation and unified stability analysis for dynamic artificial neural network models.

    PubMed

    Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D

    2018-02-01

    An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions.

  13. Homeostatic Scaling of Excitability in Recurrent Neural Networks

    PubMed Central

    Remme, Michiel W. H.; Wadman, Wytse J.

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604
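
A toy sketch of the mean-field setting (all parameters invented, not the paper's model): excitatory and inhibitory populations each slowly shift an excitability threshold toward a target firing rate, on a time scale much slower than the rate dynamics. With comparable adaptation time scales for the two populations, the network settles at its targets.

```python
def simulate(eps=0.05, T=2000.0, dt=0.05, drive=5.0):
    """Excitatory/inhibitory rate model (tau = 1) in which each population
    slowly shifts an excitability threshold toward a target rate (5 and 10).
    Hypothetical parameters; adaptation is 20x slower than the rate dynamics."""
    rE, rI, thE, thI = 1.0, 1.0, 0.0, 0.0
    relu = lambda s: s if s > 0.0 else 0.0
    for _ in range(int(T / dt)):
        dE = -rE + relu(2.0 * rE - 2.5 * rI + drive - thE)  # fast rate dynamics
        dI = -rI + relu(2.0 * rE - 1.0 * rI - thI)
        thE += dt * eps * (rE - 5.0)    # slow homeostatic threshold adaptation;
        thI += dt * eps * (rI - 10.0)   # very unequal time scales can destabilize it
        rE += dt * dE
        rI += dt * dI
    return rE, rI

rE, rI = simulate()   # settles near the target rates (5, 10)
```

Because the inhibitory population's steady-state rate rises with its own threshold through network feedback, letting only one population adapt (or adapting on very different time scales) can turn this loop unstable, which is the regime dependence the abstract describes.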

  14. Neural dynamics based on the recognition of neural fingerprints

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2015-01-01

    Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g., individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i) the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii) the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e., specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible, and powerful strategy. PMID:25852531

  15. In vivo optoacoustic monitoring of calcium activity in the brain (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Deán-Ben, Xose Luís.; Gottschalk, Sven; Sela, Gali; Lauri, Antonella; Kneipp, Moritz; Ntziachristos, Vasilis; Westmeyer, Gil G.; Shoham, Shy; Razansky, Daniel

    2017-03-01

    Non-invasive observation of spatio-temporal neural activity of large neural populations distributed over the entire brain of complex organisms is a longstanding goal of neuroscience [1,2]. Recently, genetically encoded calcium indicators (GECIs) have revolutionized neuroimaging by making it possible to map the activity of entire neuronal populations in vivo [3]. Visualization of these powerful sensors with fluorescence microscopy has, however, been limited to superficial regions, while deep brain areas have so far remained unreachable [4]. We have developed a volumetric multispectral optoacoustic tomography platform for imaging neural activation deep in scattering brains [5]. The developed methodology can render 100 volumetric frames per second across scalable fields of view ranging between 50 and 1000 mm3, with respective spatial resolutions of 35-150 µm. Experiments performed in immobilized and freely swimming larvae and in adult zebrafish brains expressing the genetically encoded calcium indicator GCaMP5G demonstrated, for the first time, the fundamental ability to directly track neural dynamics using optoacoustics while overcoming the depth barrier of optical imaging in scattering brains [6]. It was further possible to monitor calcium transients in the scattering brain of a living adult transgenic zebrafish expressing the GCaMP5G calcium indicator [7]. Fast changes in optoacoustic traces associated with GCaMP5G activity were detectable in the presence of other strongly absorbing endogenous chromophores, such as hemoglobin. The results indicate that the optoacoustic signal traces generally follow the GCaMP5G fluorescence dynamics and further enable overcoming the longstanding optical-diffusion penetration barrier associated with scattering in biological tissues [6]. The new functional optoacoustic neuroimaging method can visualize neural activity at penetration depths and spatio-temporal resolution scales not covered by existing neuroimaging techniques. Thus, in addition to the well-established capacity of optoacoustics to resolve vascular anatomy and multiple hemodynamic parameters deep in scattering tissues, the newly developed methodology offers unprecedented capabilities for functional whole-brain observations of fast calcium dynamics.

  16. Neural Population Dynamics during Reaching Are Better Explained by a Dynamical System than Representational Tuning

    PubMed Central

    Dann, Benjamin

    2016-01-01

    Recent models of movement generation in motor cortex have sought to explain neural activity not as a function of movement parameters, known as representational models, but as a dynamical system acting at the level of the population. Despite evidence supporting this framework, the evaluation of representational models and their integration with dynamical systems is incomplete in the literature. Using a representational velocity-tuning based simulation of center-out reaching, we show that incorporating variable latency offsets between neural activity and kinematics is sufficient to generate rotational dynamics at the level of neural populations, a phenomenon observed in motor cortex. However, we developed a covariance-matched permutation test (CMPT) that reassigns neural data between task conditions independently for each neuron while maintaining overall neuron-to-neuron relationships, revealing that rotations based on the representational model did not uniquely depend on the underlying condition structure. In contrast, rotations based on either a dynamical model or motor cortex data depend on this relationship, providing evidence that the dynamical model more readily explains motor cortex activity. Importantly, implementing a recurrent neural network we demonstrate that both representational tuning properties and rotational dynamics emerge, providing evidence that a dynamical system can reproduce previous findings of representational tuning. Finally, using motor cortex data in combination with the CMPT, we show that results based on small numbers of neurons or conditions should be interpreted cautiously, potentially informing future experimental design. Together, our findings reinforce the view that representational models lack the explanatory power to describe complex aspects of single neuron and population level activity. PMID:27814352

  17. Neural Population Dynamics during Reaching Are Better Explained by a Dynamical System than Representational Tuning.

    PubMed

    Michaels, Jonathan A; Dann, Benjamin; Scherberger, Hansjörg

    2016-11-01

    Recent models of movement generation in motor cortex have sought to explain neural activity not as a function of movement parameters, known as representational models, but as a dynamical system acting at the level of the population. Despite evidence supporting this framework, the evaluation of representational models and their integration with dynamical systems is incomplete in the literature. Using a representational velocity-tuning based simulation of center-out reaching, we show that incorporating variable latency offsets between neural activity and kinematics is sufficient to generate rotational dynamics at the level of neural populations, a phenomenon observed in motor cortex. However, we developed a covariance-matched permutation test (CMPT) that reassigns neural data between task conditions independently for each neuron while maintaining overall neuron-to-neuron relationships, revealing that rotations based on the representational model did not uniquely depend on the underlying condition structure. In contrast, rotations based on either a dynamical model or motor cortex data depend on this relationship, providing evidence that the dynamical model more readily explains motor cortex activity. Importantly, implementing a recurrent neural network we demonstrate that both representational tuning properties and rotational dynamics emerge, providing evidence that a dynamical system can reproduce previous findings of representational tuning. Finally, using motor cortex data in combination with the CMPT, we show that results based on small numbers of neurons or conditions should be interpreted cautiously, potentially informing future experimental design. Together, our findings reinforce the view that representational models lack the explanatory power to describe complex aspects of single neuron and population level activity.

  18. Neural Computations in a Dynamical System with Multiple Time Scales.

    PubMed

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain derives from such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study sheds light on how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  19. Nonlinear Dynamics, Chaotic and Complex Systems

    NASA Astrophysics Data System (ADS)

    Infeld, E.; Zelazny, R.; Galkowski, A.

    2011-04-01

    Part I. Dynamic Systems Bifurcation Theory and Chaos: 1. Chaos in random dynamical systems V. M. Gunldach; 2. Controlling chaos using embedded unstable periodic orbits: the problem of optimal periodic orbits B. R. Hunt and E. Ott; 3. Chaotic tracer dynamics in open hydrodynamical flows G. Karolyi, A. Pentek, T. Tel and Z. Toroczkai; 4. Homoclinic chaos L. P. Shilnikov; Part II. Spatially Extended Systems: 5. Hydrodynamics of relativistic probability flows I. Bialynicki-Birula; 6. Waves in ionic reaction-diffusion-migration systems P. Hasal, V. Nevoral, I. Schreiber, H. Sevcikova, D. Snita, and M. Marek; 7. Anomalous scaling in turbulence: a field theoretical approach V. Lvov and I. Procaccia; 8. Abelian sandpile cellular automata M. Markosova; 9. Transport in an incompletely chaotic magnetic field F. Spineanu; Part III. Dynamical Chaos Quantum Physics and Foundations Of Statistical Mechanics: 10. Non-equilibrium statistical mechanics and ergodic theory L. A. Bunimovich; 11. Pseudochaos in statistical physics B. Chirikov; 12. Foundations of non-equilibrium statistical mechanics J. P. Dougherty; 13. Thermomechanical particle simulations W. G. Hoover, H. A. Posch, C. H. Dellago, O. Kum, C. G. Hoover, A. J. De Groot and B. L. Holian; 14. Quantum dynamics on a Markov background and irreversibility B. Pavlov; 15. Time chaos and the laws of nature I. Prigogine and D. J. Driebe; 16. Evolutionary Q and cognitive systems: dynamic entropies and predictability of evolutionary processes W. Ebeling; 17. Spatiotemporal chaos information processing in neural networks H. Szu; 18. Phase transitions and learning in neural networks C. Van den Broeck; 19. Synthesis of chaos A. Vanecek and S. Celikovsky; 20. Computational complexity of continuous problems H. Wozniakowski; Part IV. Complex Systems As An Interface Between Natural Sciences and Environmental Social and Economic Sciences: 21. Stochastic differential geometry in finance studies V. G. Makhankov; Part V. Conference Banquet Speech: Where will the future go? M. J. Feigenbaum.

  20. Dynamic security contingency screening and ranking using neural networks.

    PubMed

    Mansour, Y; Vaahedi, E; El-Sharkawi, M A

    1997-01-01

    This paper summarizes BC Hydro's experience in applying neural networks to dynamic security contingency screening and ranking. The idea is to use information on the prevailing operating condition and directly provide contingency screening and ranking using a trained neural network. To train the two neural networks for the large-scale systems of BC Hydro and Hydro Quebec, a total of 1691 detailed transient stability simulations were conducted, 1158 for the BC Hydro system and 533 for the Hydro Quebec system. The simulation program was equipped with an energy margin calculation module (second kick) to measure the energy margin in each run. The first set of results showed poor performance of the neural networks in assessing dynamic security. However, a number of corrective measures improved the results significantly. These corrective measures addressed: 1) the effectiveness of the output; 2) the number of outputs; 3) the type of features (static versus dynamic); 4) the number of features; 5) system partitioning; and 6) the ratio of training samples to features. The final results obtained using the large-scale systems of BC Hydro and Hydro Quebec demonstrate good potential for neural networks in dynamic security assessment contingency screening and ranking.

  1. Oscillatory phase dynamics in neural entrainment underpin illusory percepts of time.

    PubMed

    Herrmann, Björn; Henry, Molly J; Grigutsch, Maren; Obleser, Jonas

    2013-10-02

    Neural oscillatory dynamics are a candidate mechanism to steer perception of time and temporal rate change. While oscillator models of time perception are strongly supported by behavioral evidence, a direct link to neural oscillations and oscillatory entrainment has not yet been provided. In addition, it has thus far remained unaddressed how context-induced illusory percepts of time are coded for in oscillator models of time perception. To investigate these questions, we used magnetoencephalography and examined the neural oscillatory dynamics that underpin pitch-induced illusory percepts of temporal rate change. Human participants listened to frequency-modulated sounds that varied over time in both modulation rate and pitch, and judged the direction of rate change (decrease vs increase). Our results demonstrate distinct neural mechanisms of rate perception: Modulation rate changes directly affected listeners' rate percept as well as the exact frequency of the neural oscillation. However, pitch-induced illusory rate changes were unrelated to the exact frequency of the neural responses. The rate change illusion was instead linked to changes in neural phase patterns, which allowed for single-trial decoding of percepts. That is, illusory underestimations or overestimations of perceived rate change were tightly coupled to increased intertrial phase coherence and changes in cerebro-acoustic phase lag. The results provide insight on how illusory percepts of time are coded for by neural oscillatory dynamics.
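
The intertrial phase coherence measure invoked above has a compact definition: the magnitude of the trial-averaged unit phasor at a given time-frequency point. A minimal sketch with synthetic phase samples:

```python
import numpy as np

def intertrial_phase_coherence(phases):
    """ITC at one time-frequency point: the magnitude of the mean unit phasor
    across trials. 1 = phases perfectly aligned; near 0 = phases random."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(1)
aligned = rng.normal(0.0, 0.2, size=1000)          # phases clustered around 0 rad
scattered = rng.uniform(-np.pi, np.pi, size=1000)  # uniformly random phases
itc_hi = intertrial_phase_coherence(aligned)       # near 1: strong entrainment
itc_lo = intertrial_phase_coherence(scattered)     # near 0: no phase locking
```

In practice the phases come from a time-frequency decomposition of each trial, and ITC is computed per sensor, frequency, and time point.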

  2. Out-of-equilibrium dynamical mean-field equations for the perceptron model

    NASA Astrophysics Data System (ADS)

    Agoritsas, Elisabeth; Biroli, Giulio; Urbani, Pierfrancesco; Zamponi, Francesco

    2018-02-01

    Perceptrons are the building blocks of many theoretical approaches to a wide range of complex systems, ranging from neural networks and deep learning machines, to constraint satisfaction problems, glasses and ecosystems. Despite their applicability and importance, a detailed study of their Langevin dynamics has never been performed. Here we derive the mean-field dynamical equations that describe the continuous random perceptron in the thermodynamic limit, in a very general setting with arbitrary noise and friction kernels, not necessarily related by equilibrium relations. We derive the equations in two ways: via a dynamical cavity method, and via a path-integral approach in its supersymmetric formulation. The end point of both approaches is the reduction of the dynamics of the system to an effective stochastic process for a representative dynamical variable. Because the perceptron is formally very close to a system of interacting particles in a high-dimensional space, the methods we develop here can be transferred to the study of liquids and glasses in high dimensions. Potentially interesting applications are thus the study of the glass transition in active matter, the study of dynamics around the jamming transition, and the calculation of rheological properties in driven systems.

  3. Coactosin accelerates cell dynamism by promoting actin polymerization.

    PubMed

    Hou, Xubin; Katahira, Tatsuya; Ohashi, Kazumasa; Mizuno, Kensaku; Sugiyama, Sayaka; Nakamura, Harukazu

    2013-07-01

    During development, cells dynamically move or extend their processes, movements that are driven by actin dynamics. In the present study, we focused on Coactosin, an actin-binding protein, and studied its role in actin dynamics. Coactosin was associated with actin and Capping protein in neural crest cells and N1E-115 neuroblastoma cells. Accumulation of Coactosin in cellular processes and its association with actin filaments prompted us to investigate the effect of Coactosin on cell migration. Coactosin overexpression induced cellular processes in cultured neural crest cells. In contrast, knock-down of Coactosin resulted in disruption of actin polymerization and of neural crest cell migration. Importantly, Coactosin was recruited to lamellipodia and filopodia in response to Rac signaling, and mutated Coactosin that cannot bind to F-actin neither reacted to Rac signaling nor supported neural crest cell migration. It was also shown that deprivation of Rac signaling from neural crest cells by dominant negative Rac1 (DN-Rac1) interfered with neural crest cell migration, and that co-transfection of DN-Rac1 and Coactosin restored neural crest cell migration. From these results we have concluded that Coactosin functions downstream of Rac signaling and that it is involved in neurite extension and neural crest cell migration by actively participating in actin polymerization. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. A Scientific Understanding of Keystroke Dynamics

    DTIC Science & Technology

    2012-01-01

    keystroke-dynamics classifiers. Obaidat and Sadoun (1997) had 16 subjects type their own and each others’ user IDs. They constructed neural networks and a...puts are assigned high anomaly scores. In the training phase, the neural network is constructed with p input nodes and p output nodes (where p is...Berlin. S. Cho, C. Han, D. H. Han, and H.-I. Kim. Web-based keystroke dynamics identity verification using neural network. Journal of Organizational
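
    The anomaly-score idea in the fragment above can be illustrated without a neural network: score a typing sample by its average feature-wise deviation from a user's training profile, so that impostor samples receive high scores. A minimal sketch with made-up hold-time features (hypothetical data and detector, not any surveyed classifier's actual implementation):

```python
def anomaly_score(train, sample):
    """Mean absolute z-score of a typing sample's timing features relative
    to a user's training samples: a simple distance-based detector in the
    spirit of keystroke-dynamics anomaly scoring."""
    n, p = len(train), len(train[0])
    means = [sum(row[j] for row in train) / n for j in range(p)]
    sds = [max(1e-9, (sum((row[j] - means[j]) ** 2 for row in train) / n) ** 0.5)
           for j in range(p)]
    return sum(abs(sample[j] - means[j]) / sds[j] for j in range(p)) / p

# Hypothetical per-key hold times (seconds) from the genuine user.
train = [[0.10, 0.12, 0.11], [0.11, 0.13, 0.10], [0.10, 0.11, 0.12]]
genuine = anomaly_score(train, [0.10, 0.12, 0.11])   # low score
impostor = anomaly_score(train, [0.25, 0.30, 0.28])  # high score
```

A threshold on this score then separates genuine users from impostors, playing the role of the anomaly scores assigned by the trained networks described in the snippet.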

  5. Vertically aligned carbon nanofiber as nano-neuron interface for monitoring neural function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Milton Nance; McKnight, Timothy E; Melechko, Anatoli Vasilievich

    2012-01-01

    Neural chips, which are capable of simultaneous, multi-site neural recording and stimulation, have been used to detect and modulate neural activity for almost 30 years. As a neural interface, neural chips provide dynamic functional information for neural decoding and neural control. By improving sensitivity and spatial resolution, nano-scale electrodes may revolutionize neural detection and modulation at cellular and molecular levels as nano-neuron interfaces. We developed a carbon-nanofiber neural chip with lithographically defined arrays of vertically aligned carbon nanofiber electrodes and demonstrated its capability of both stimulating and monitoring electrophysiological signals from brain tissues in vitro and monitoring dynamic information of neuroplasticity. This novel nano-neuron interface can potentially serve as a precise, informative, biocompatible, and dual-mode neural interface for monitoring of both neuroelectrical and neurochemical activity at the single cell level and even inside the cell.

  6. Cortical geometry as a determinant of brain activity eigenmodes: Neural field analysis

    NASA Astrophysics Data System (ADS)

    Gabay, Natasha C.; Robinson, P. A.

    2017-09-01

    Perturbation analysis of neural field theory is used to derive eigenmodes of neural activity on a cortical hemisphere, which have previously been calculated numerically and found to be close analogs of spherical harmonics, despite heavy cortical folding. The present perturbation method treats cortical folding as a first-order perturbation from a spherical geometry. The first nine spatial eigenmodes on a population-averaged cortical hemisphere are derived and compared with previous numerical solutions. These eigenmodes contribute most to brain activity patterns such as those seen in electroencephalography and functional magnetic resonance imaging. The eigenvalues of these eigenmodes are found to agree with the previous numerical solutions to within their uncertainties. Also in agreement with the previous numerics, all eigenmodes are found to closely resemble spherical harmonics. The first seven eigenmodes exhibit a one-to-one correspondence with their numerical counterparts, with overlaps that are close to unity. The next two eigenmodes overlap the corresponding pair of numerical eigenmodes, having been rotated within the subspace spanned by that pair, likely due to second-order effects. The spatial orientations of the eigenmodes are found to be fixed by gross cortical shape rather than finer-scale cortical properties, which is consistent with the observed intersubject consistency of functional connectivity patterns. However, the eigenvalues depend more sensitively on finer-scale cortical structure, implying that the eigenfrequencies and consequent dynamical properties of functional connectivity depend more strongly on details of individual cortical folding. Overall, these results imply that well-established tools from perturbation theory and spherical harmonic analysis can be used to calculate the main properties and dynamics of low-order brain eigenmodes.

  7. Neural field model to reconcile structure with function in primary visual cortex.

    PubMed

    Rankin, James; Chavane, Frédéric

    2017-10-01

    Voltage-sensitive dye imaging experiments in primary visual cortex (V1) have shown that local, oriented visual stimuli elicit stable orientation-selective activation within the stimulus retinotopic footprint. The cortical activation dynamically extends far beyond the retinotopic footprint, but the peripheral spread stays non-selective, a surprising finding given a number of anatomo-functional studies showing the orientation specificity of long-range connections. Here we use a computational model to investigate this apparent discrepancy by studying the expected population response using known published anatomical constraints. The dynamics of input-driven localized states were simulated in a planar neural field model with multiple sub-populations encoding orientation. The realistic connectivity profile has parameters controlling the clustering of long-range connections and their orientation bias. We found substantial overlap between the anatomically relevant parameter range and a steep decay in orientation selective activation that is consistent with the imaging experiments. In this way our study reconciles the reported orientation bias of long-range connections with the functional expression of orientation selective neural activity. Our results demonstrate this sharp decay is contingent on three factors: that long-range connections are sufficiently diffuse, that the orientation bias of these connections is in an intermediate range (consistent with anatomy), and that excitation is sufficiently balanced by inhibition. Conversely, our modelling results predict that, for reduced inhibition strength, spurious orientation selective activation could be generated through long-range lateral connections. Furthermore, if the orientation bias of lateral connections is very strong, or if inhibition is particularly weak, the network operates close to an instability leading to unbounded cortical activation.
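
    The input-driven localized states described above can be illustrated with a one-dimensional Amari-type neural field on a ring: local excitation and broader inhibition let a focal stimulus carve out a stable bump of activity. This is a single-population sketch with illustrative parameters, not the paper's planar model with orientation-tuned sub-populations and biased long-range connections.

```python
import math

def simulate_amari(n=64, steps=400, dt=0.05):
    """Minimal 1-D Amari neural field on a ring: du/dt = -u + W*f(u) + I,
    discretized with Euler steps. All parameter values are illustrative."""
    L = 2 * math.pi
    dx = L / n
    xs = [i * dx for i in range(n)]
    def w(d):                       # Mexican-hat lateral kernel
        d = min(d, L - d)           # distance on the ring
        return 1.5 * math.exp(-d ** 2 / 0.3) - 0.8 * math.exp(-d ** 2 / 1.5)
    W = [[w(abs(xs[i] - xs[j])) * dx for j in range(n)] for i in range(n)]
    f = lambda v: 1.0 / (1.0 + math.exp(-10.0 * (v - 0.3)))  # firing-rate sigmoid
    # Localized stimulus drives a bump at x = pi.
    I = [1.2 * math.exp(-min(abs(x - math.pi), L - abs(x - math.pi)) ** 2 / 0.2)
         for x in xs]
    u = [0.0] * n
    for _ in range(steps):
        fu = [f(ui) for ui in u]
        u = [ui + dt * (-ui + sum(W[i][j] * fu[j] for j in range(n)) + I[i])
             for i, ui in enumerate(u)]
    return u

u = simulate_amari()  # steady activity profile; peak sits at the stimulus
```

In the full planar, orientation-labeled model, the balance between this kind of recurrent excitation and inhibition is exactly what determines whether the peripheral spread stays non-selective or develops spurious selectivity.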

  8. The co-development of looking dynamics and discrimination performance

    PubMed Central

    Perone, Sammy; Spencer, John P.

    2015-01-01

    Looking dynamics and discrimination form the backbone of developmental science and are central processes in theories of infant cognition. Looking dynamics and discrimination change dramatically across the first year of life. Surprisingly, developmental changes in looking and discrimination have not been studied together. Recent simulations of a dynamic neural field (DNF) model of infant looking and memory suggest that looking and discrimination do change together over development and arise from a single neurodevelopmental mechanism. We probe this claim by measuring looking dynamics and discrimination along continuous, metrically organized dimensions in 5-, 7-, and 10-month-old infants (N = 119). The results showed that looking dynamics and discrimination changed together over development and are linked within individuals. Quantitative simulations of a DNF model provide insights into the processes that underlie developmental change in looking dynamics and discrimination. Simulation results support the view that these changes might arise from a single neurodevelopmental mechanism. PMID:23957821

  9. Electrophoretic deposition of ligand-free platinum nanoparticles on neural electrodes affects their impedance in vitro and in vivo with no negative effect on reactive gliosis.

    PubMed

    Angelov, Svilen D; Koenen, Sven; Jakobi, Jurij; Heissler, Hans E; Alam, Mesbah; Schwabe, Kerstin; Barcikowski, Stephan; Krauss, Joachim K

    2016-01-12

    Electrodes for neural stimulation and recording are used for the treatment of neurological disorders. Their features critically depend on impedance and interaction with brain tissue. The effect of surface modification on electrode impedance was examined in vitro and in vivo after intracranial implantation in rats. Electrodes coated by electrophoretic deposition with platinum nanoparticles (NP; <10 and 50 nm) as well as uncoated references were implanted into the rat's subthalamic nucleus. After postoperative recovery, rats were electrostimulated for 3 weeks. Impedance was measured before implantation, after recovery and then weekly during stimulation. Finally, local field potential was recorded and tissue-to-implant reaction was immunohistochemically studied. Coating with NPs significantly increased electrode impedance in vitro. Postoperatively, the impedance of all electrodes was temporarily further increased. This effect was lowest for the electrodes coated with particles <10 nm, which also showed the most stable impedance dynamics during stimulation for 3 weeks and the lowest total power of local field potential during neuronal activity recording. Histological analysis revealed that NP-coating did not affect glial reactions or neural cell count. Coating with NP <10 nm may improve electrode impedance stability without affecting biocompatibility. Increased impedance after NP-coating may improve neural recording due to better signal-to-noise ratio.

  10. Synchrony-induced modes of oscillation of a neural field model

    NASA Astrophysics Data System (ADS)

    Esnaola-Acebes, Jose M.; Roxin, Alex; Avitabile, Daniele; Montbrió, Ernest

    2017-11-01

    We investigate the modes of oscillation of heterogeneous ring networks of quadratic integrate-and-fire (QIF) neurons with nonlocal, space-dependent coupling. Perturbations of the equilibrium state with a particular wave number produce transient standing waves with a specific temporal frequency, analogously to those in a tense string. In the neuronal network, the equilibrium corresponds to a spatially homogeneous, asynchronous state. Perturbations of this state excite the network's oscillatory modes, which reflect the interplay of episodes of synchronous spiking with the excitatory-inhibitory spatial interactions. In the thermodynamic limit, an exact low-dimensional neural field model describing the macroscopic dynamics of the network is derived. This allows us to obtain formulas for the Turing eigenvalues of the spatially homogeneous state and hence to obtain its stability boundary. We find that the frequency of each Turing mode depends on the corresponding Fourier coefficient of the synaptic pattern of connectivity. The decay rate instead is identical for all oscillation modes as a consequence of the heterogeneity-induced desynchronization of the neurons. Finally, we numerically compute the spectrum of spatially inhomogeneous solutions branching from the Turing bifurcation, showing that similar oscillatory modes operate in neural bump states and are maintained away from onset.
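
    The dependence of each Turing mode on the corresponding Fourier coefficient of the connectivity can be checked numerically: for a ring kernel that is a low-order trigonometric polynomial, quadrature recovers the coefficients exactly. The cosine kernel and the values of J0 and J1 below are illustrative, not taken from the paper.

```python
import math, cmath

def fourier_coeff(w, k, n=1024):
    """k-th Fourier coefficient (1/2pi) * integral of w(x) e^{-ikx} dx of a
    ring kernel, by the rectangle rule (exact for trigonometric polynomials
    of degree < n/2)."""
    dx = 2 * math.pi / n
    return sum(w(j * dx) * cmath.exp(-1j * k * j * dx)
               for j in range(n)) * dx / (2 * math.pi)

# Illustrative kernel with net inhibition plus a cosine modulation:
# w(x) = J0 + J1*cos(x), whose coefficients are w_0 = J0 and w_1 = J1/2.
J0, J1 = -1.0, 2.0
w = lambda x: J0 + J1 * math.cos(x)
c0 = fourier_coeff(w, 0)
c1 = fourier_coeff(w, 1)
```

In the paper's analysis, each such coefficient w_k enters the Turing eigenvalue of the spatial mode with wave number k, which is why the oscillation frequency of each mode is tied to a single coefficient of the synaptic pattern.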

  11. The development and modeling of devices and paradigms for transcranial magnetic stimulation

    PubMed Central

    Goetz, Stefan M.; Deng, Zhi-De

    2017-01-01

    Magnetic stimulation is a noninvasive neurostimulation technique that can evoke action potentials and modulate neural circuits through induced electric fields. Biophysical models of magnetic stimulation have become a major driver for technological developments and the understanding of the mechanisms of magnetic neurostimulation and neuromodulation. Major technological developments involve stimulation coils with different spatial characteristics and pulse sources to control the pulse waveform. While early technological developments were the result of manual design and invention processes, there is a trend in both stimulation coil and pulse source design to mathematically optimize parameters with the help of computational models. To date, macroscopically highly realistic spatial models of the brain as well as peripheral targets, and user-friendly software packages enable researchers and practitioners to simulate the treatment-specific and induced electric field distribution in the brains of individual subjects and patients. Neuron models further introduce the microscopic level of neural activation to understand the influence of activation dynamics in response to different pulse shapes. A number of models that were designed for online calibration to extract otherwise covert information and biomarkers from the neural system recently form a third branch of modeling. PMID:28443696

  12. Synchrony-induced modes of oscillation of a neural field model.

    PubMed

    Esnaola-Acebes, Jose M; Roxin, Alex; Avitabile, Daniele; Montbrió, Ernest

    2017-11-01

    We investigate the modes of oscillation of heterogeneous ring networks of quadratic integrate-and-fire (QIF) neurons with nonlocal, space-dependent coupling. Perturbations of the equilibrium state with a particular wave number produce transient standing waves with a specific temporal frequency, analogously to those in a tense string. In the neuronal network, the equilibrium corresponds to a spatially homogeneous, asynchronous state. Perturbations of this state excite the network's oscillatory modes, which reflect the interplay of episodes of synchronous spiking with the excitatory-inhibitory spatial interactions. In the thermodynamic limit, an exact low-dimensional neural field model describing the macroscopic dynamics of the network is derived. This allows us to obtain formulas for the Turing eigenvalues of the spatially homogeneous state and hence to obtain its stability boundary. We find that the frequency of each Turing mode depends on the corresponding Fourier coefficient of the synaptic pattern of connectivity. The decay rate instead is identical for all oscillation modes as a consequence of the heterogeneity-induced desynchronization of the neurons. Finally, we numerically compute the spectrum of spatially inhomogeneous solutions branching from the Turing bifurcation, showing that similar oscillatory modes operate in neural bump states and are maintained away from onset.

  13. The development and modelling of devices and paradigms for transcranial magnetic stimulation.

    PubMed

    Goetz, Stefan M; Deng, Zhi-De

    2017-04-01

    Magnetic stimulation is a non-invasive neurostimulation technique that can evoke action potentials and modulate neural circuits through induced electric fields. Biophysical models of magnetic stimulation have become a major driver for technological developments and the understanding of the mechanisms of magnetic neurostimulation and neuromodulation. Major technological developments involve stimulation coils with different spatial characteristics and pulse sources to control the pulse waveform. While early technological developments were the result of manual design and invention processes, there is a trend in both stimulation coil and pulse source design to mathematically optimize parameters with the help of computational models. To date, macroscopically highly realistic spatial models of the brain, as well as peripheral targets, and user-friendly software packages enable researchers and practitioners to simulate the treatment-specific and induced electric field distribution in the brains of individual subjects and patients. Neuron models further introduce the microscopic level of neural activation to understand the influence of activation dynamics in response to different pulse shapes. A number of models that were designed for online calibration to extract otherwise covert information and biomarkers from the neural system recently form a third branch of modelling.

  14. Limb Dominance Results from Asymmetries in Predictive and Impedance Control Mechanisms

    PubMed Central

    Yadav, Vivek; Sainburg, Robert L.

    2014-01-01

    Handedness is a pronounced feature of human motor behavior, yet the underlying neural mechanisms remain unclear. We hypothesize that motor lateralization results from asymmetries in predictive control of task dynamics and in control of limb impedance. To test this hypothesis, we present an experiment with two different force field environments, a field with a predictable magnitude that varies with the square of velocity, and a field with a less predictable magnitude that varies linearly with velocity. These fields were designed to be compatible with controllers that are specialized in predicting limb and task dynamics, and modulating position and velocity dependent impedance, respectively. Because the velocity square field does not change the form of the equations of motion for the reaching arm, we reasoned that a forward dynamic-type controller should perform well in this field, while control of linear damping and stiffness terms should be less effective. In contrast, the unpredictable linear field should be most compatible with impedance control, but incompatible with predictive dynamics control. We measured steady state final position accuracy and 3 trajectory features during exposure to these fields: Mean squared jerk, Straightness, and Movement time. Our results confirmed that each arm made straighter, smoother, and quicker movements in its compatible field. Both arms showed similar final position accuracies, which were achieved using more extensive corrective sub-movements when either arm performed in its incompatible field. Finally, each arm showed limited adaptation to its incompatible field. Analysis of the dependence of trajectory errors on field magnitude suggested that dominant arm adaptation occurred by prediction of the mean field, thus exploiting predictive mechanisms for adaptation to the unpredictable field. 
Overall, our results support the hypothesis that motor lateralization reflects asymmetries in specific motor control mechanisms associated with predictive control of limb and task dynamics, and modulation of limb impedance. PMID:24695543

  15. The experimental identification of magnetorheological dampers and evaluation of their controllers

    NASA Astrophysics Data System (ADS)

    Metered, H.; Bonello, P.; Oyadiji, S. O.

    2010-05-01

    Magnetorheological (MR) fluid dampers are semi-active control devices that have been applied over a wide range of practical vibration control applications. This paper concerns the experimental identification of the dynamic behaviour of an MR damper and the use of the identified parameters in the control of such a damper. Feed-forward and recurrent neural networks are used to model both the direct and inverse dynamics of the damper. Training and validation of the proposed neural networks are achieved by using the data generated through dynamic tests with the damper mounted on a tensile testing machine. The validation test results clearly show that the proposed neural networks can reliably represent both the direct and inverse dynamic behaviours of an MR damper. The effect of the cylinder's surface temperature on both the direct and inverse dynamics of the damper is studied, and the neural network model is shown to be reasonably robust against significant temperature variation. The inverse recurrent neural network model is introduced as a damper controller and experimentally evaluated against alternative controllers proposed in the literature. The results reveal that the neural-based damper controller offers superior damper control. This observation and the added advantages of low-power requirement, extended service life of the damper and the minimal use of sensors, indicate that a neural-based damper controller potentially offers the most cost-effective vibration control solution among the controllers investigated.

  16. A dynamical systems approach for estimating phase interactions between rhythms of different frequencies from experimental data.

    PubMed

    Onojima, Takayuki; Goto, Takahiro; Mizuhara, Hiroaki; Aoyagi, Toshio

    2018-01-01

    Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. Neural oscillations are rhythmic neural activities that can be readily observed by noninvasive electroencephalography (EEG), and they show same-frequency and cross-frequency synchronization during various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results.
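
    In the weakly coupled regime assumed above, a coupling function can be recovered from data by regressing phase increments onto a Fourier basis of the phase difference. The least-squares sketch below uses a single 1:1 ansatz with synthetic data (the paper's method generalizes to n:m cross-frequency terms; all parameter values here are made up):

```python
import math

def estimate_phase_coupling(theta, phi, dt):
    """Least-squares fit of dtheta/dt ~ omega + a*sin(phi-theta) + b*cos(phi-theta)
    from sampled phase time series, via the 3x3 normal equations."""
    rows, ys = [], []
    for k in range(len(theta) - 1):
        d = phi[k] - theta[k]
        rows.append([1.0, math.sin(d), math.cos(d)])
        ys.append((theta[k + 1] - theta[k]) / dt)
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    for i in range(3):                      # Gaussian elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(ata[r][i]))
        ata[i], ata[p] = ata[p], ata[i]
        aty[i], aty[p] = aty[p], aty[i]
        for r in range(i + 1, 3):
            f = ata[r][i] / ata[i][i]
            for c in range(i, 3):
                ata[r][c] -= f * ata[i][c]
            aty[r] -= f * aty[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        x[i] = (aty[i] - sum(ata[i][c] * x[c] for c in range(i + 1, 3))) / ata[i][i]
    return x  # [omega, a, b]

# Synthetic phases generated with known omega=1.0, a=0.3, b=0.1.
dt, n = 0.001, 20000
theta, phi = [0.0], [0.5]
for k in range(n):
    d = phi[k] - theta[k]
    theta.append(theta[k] + dt * (1.0 + 0.3 * math.sin(d) + 0.1 * math.cos(d)))
    phi.append(phi[k] + dt * 1.7)           # driving rhythm at another frequency
omega, a, b = estimate_phase_coupling(theta, phi, dt)
```

Because the synthetic data lie exactly on the fitted model, the regression recovers the generating parameters, which is the consistency check the paper performs with simulated and electronic-circuit data.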

  17. Multi-Temporal Land Cover Classification with Long Short-Term Memory Neural Networks

    NASA Astrophysics Data System (ADS)

    Rußwurm, M.; Körner, M.

    2017-05-01

    Land cover classification (LCC) is a central and wide field of research in earth observation and has already put forth a variety of classification techniques. Many approaches are based on classification techniques considering observation at certain points in time. However, some land cover classes, such as crops, change their spectral characteristics due to environmental influences and can thus not be monitored effectively with classical mono-temporal approaches. Nevertheless, these temporal observations should be utilized to benefit the classification process. After extensive research has been conducted on modeling temporal dynamics by spectro-temporal profiles using vegetation indices, we propose a deep learning approach to utilize these temporal characteristics for classification tasks. In this work, we show how long short-term memory (LSTM) neural networks can be employed for crop identification purposes with SENTINEL 2A observations from large study areas and label information provided by local authorities. We compare these temporal neural network models, i.e., LSTM and recurrent neural network (RNN), with a classical non-temporal convolutional neural network (CNN) model and an additional support vector machine (SVM) baseline. With our rather straightforward LSTM variant, we exceeded state-of-the-art classification performance, thus opening promising potential for further research.
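
    The gating mechanism that lets an LSTM accumulate evidence across a spectro-temporal sequence can be sketched in a few lines. Below is a single-unit cell with scalar states and toy weights, shown only to make the recurrence explicit; it bears no relation to the trained SENTINEL 2A models.

```python
import math

def lstm_step(x, h, c, W):
    """One step of a single-unit LSTM cell (scalar input and states).
    W maps each gate name to (input weight, hidden weight, bias); the
    values used below are toy numbers, not trained crop-model weights."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sig(W['i'][0] * x + W['i'][1] * h + W['i'][2])    # input gate
    f = sig(W['f'][0] * x + W['f'][1] * h + W['f'][2])    # forget gate
    o = sig(W['o'][0] * x + W['o'][1] * h + W['o'][2])    # output gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])  # candidate
    c = f * c + i * g          # cell state carries long-term memory
    h = o * math.tanh(c)       # hidden state is the per-step output
    return h, c

W = {k: (0.5, 0.5, 0.0) for k in 'ifog'}
h = c = 0.0
for x in [0.2, 0.4, 0.6, 0.8]:   # a short, rising spectro-temporal profile
    h, c = lstm_step(x, h, c, W)
```

The forget gate is what allows such a model to retain phenological context across acquisitions, which mono-temporal CNN or SVM classifiers cannot exploit.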

  18. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  19. Phase synchronization motion and neural coding in dynamic transmission of neural information.

    PubMed

    Wang, Rubin; Zhang, Zhikang; Qu, Jingyi; Cao, Jianting

    2011-07-01

    In order to explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, a model of neural network consisting of three neuronal populations is proposed in this paper using the theory of stochastic phase dynamics. Based on the model established, the neural phase synchronization motion and neural coding under spontaneous activity and stimulation are examined, for the case of varying network structure. Our analysis shows that, under the condition of spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participating in neural firing within the neuronal populations. The result of numerical simulation supports the existence of sparse coding within the brain, and verifies the crucial importance of the magnitudes of the coupling coefficients in neural information processing as well as the completely different information processing capability of neural information transmission in both serial and parallel couplings. The result also testifies that under external stimulation, the bigger the number of neurons in a neuronal population, the more the stimulation influences the phase synchronization motion and neural coding evolution in other neuronal populations. We verify numerically the experimental result in neurobiology that the reduction of the coupling coefficient between neuronal populations implies the enhancement of lateral inhibition function in neural networks, with the enhancement equivalent to lowering the neuronal excitability threshold. Thus, the neuronal populations tend to have a stronger reaction under the same stimulation, and more neurons get excited, leading to more neurons participating in neural coding and phase synchronization motion.
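
    The dependence of phase synchronization on coupling strength, central to the analysis above, is conventionally illustrated with the Kuramoto model: pushing the global coupling K past a critical value drives the order parameter r from near zero toward one. This sketch uses a single all-to-all population with illustrative parameters, not the paper's three-population network.

```python
import math, random

def kuramoto_order(K, n=200, dt=0.01, steps=2000, seed=1):
    """Simulate n Kuramoto phase oscillators with global coupling K and
    return the final order parameter r = |mean exp(i*theta)|, using the
    mean-field form dtheta_j/dt = omega_j + K*r*sin(psi - theta_j)."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        sx = sum(math.cos(t) for t in theta) / n
        sy = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(sx, sy), math.atan2(sy, sx)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    sx = sum(math.cos(t) for t in theta) / n
    sy = sum(math.sin(t) for t in theta) / n
    return math.hypot(sx, sy)

r_weak, r_strong = kuramoto_order(0.1), kuramoto_order(4.0)
```

Weak coupling leaves only finite-size fluctuations of r, while strong coupling locks most oscillators to the mean field, the same qualitative transition the coupling coefficients control between the paper's populations.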

  20. Dynamic neural network-based methods for compensation of nonlinear effects in multimode communication lines

    NASA Astrophysics Data System (ADS)

    Sidelnikov, O. S.; Redyuk, A. A.; Sygletos, S.

    2017-12-01

    We consider neural network-based schemes of digital signal processing. It is shown that the use of a dynamic neural network-based scheme of signal processing ensures an increase in the optical signal transmission quality in comparison with that provided by other methods for nonlinear distortion compensation.

  1. Neural mechanisms of movement planning: motor cortex and beyond.

    PubMed

    Svoboda, Karel; Li, Nuo

    2018-04-01

    Neurons in motor cortex and connected brain regions fire in anticipation of specific movements, long before movement occurs. This neural activity reflects internal processes by which the brain plans and executes volitional movements. The study of motor planning offers an opportunity to understand how the structure and dynamics of neural circuits support persistent internal states and how these states influence behavior. Recent advances in large-scale neural recordings are beginning to decipher the relationship of the dynamics of populations of neurons during motor planning and movements. New behavioral tasks in rodents, together with quantified perturbations, link dynamics in specific nodes of neural circuits to behavior. These studies reveal a neural network distributed across multiple brain regions that collectively supports motor planning. We review recent advances and highlight areas where further work is needed to achieve a deeper understanding of the mechanisms underlying motor planning and related cognitive processes. Copyright © 2017. Published by Elsevier Ltd.

  2. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.

    PubMed

    Sokoloski, Sacha

    2017-09-01

    In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
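
    The recursion a Bayes filter implements can be stated exactly for a discrete-state stimulus: push the belief through the transition model, then apply Bayes' rule to each observation likelihood. The note's contribution is training a neural circuit to approximate this recursion when the transition model is unknown; the code below is only the exact textbook recursion with toy numbers.

```python
def bayes_filter(prior, trans, likelihoods):
    """Discrete-state Bayes filter. trans[i][j] = P(x_t = j | x_{t-1} = i);
    likelihoods is one P(response | state) vector per time step."""
    belief = list(prior)
    for lik in likelihoods:
        # Predict: propagate belief through the stimulus dynamics.
        predicted = [sum(trans[i][j] * belief[i] for i in range(len(belief)))
                     for j in range(len(belief))]
        # Update: multiply by the observation likelihood, renormalize (Bayes' rule).
        posterior = [p * l for p, l in zip(predicted, lik)]
        z = sum(posterior)
        belief = [p / z for p in posterior]
    return belief

# Two-state stimulus with sticky dynamics; three observations, each
# twice as likely under state 0 (toy values).
trans = [[0.9, 0.1], [0.1, 0.9]]
obs = [[0.8, 0.4]] * 3
belief = bayes_filter([0.5, 0.5], trans, obs)
```

Repeated evidence for state 0 concentrates the belief there; in the note, a linear probabilistic population code carries this belief and a trained network supplies the prediction step.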

  3. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics.

    PubMed

    Madi, Mahmoud K; Karameh, Fadi N

    2017-01-01

    Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. Recent advents of the Cubature Kalman filter (CKF) have extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems where by the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. 
Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating models of a variety of biological phenomena where the underlying process dynamics unfold at time scales faster than those seen in collected measurements.
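The cubature step at the heart of the CKF can be sketched compactly. The following Python sketch is illustrative only (not the authors' code, and it covers the discrete-time CKF rather than the continuous-discrete variant): it propagates 2n equally weighted cubature points through the process and measurement models.

```python
import numpy as np

def cubature_points(mean, cov):
    """2n equally weighted cubature points of an n-dimensional Gaussian."""
    n = len(mean)
    S = np.linalg.cholesky(cov)
    pts = np.hstack([np.sqrt(n) * S, -np.sqrt(n) * S])   # n x 2n
    return mean[:, None] + pts

def ckf_step(m, P, z, f, h, Q, R):
    """One predict/update cycle of the (discrete-time) cubature Kalman
    filter. m, P: prior mean/covariance; z: measurement; f, h: process and
    measurement maps; Q, R: process and measurement noise covariances."""
    n = len(m)
    w = 1.0 / (2 * n)                                    # equal weights
    # predict: push cubature points through the process model
    X = np.apply_along_axis(f, 0, cubature_points(m, P))
    m_pred = w * X.sum(axis=1)
    Xc = X - m_pred[:, None]
    P_pred = w * Xc @ Xc.T + Q
    # update: push fresh points through the measurement model
    X2 = cubature_points(m_pred, P_pred)
    Z = np.apply_along_axis(h, 0, X2)
    z_pred = w * Z.sum(axis=1)
    Zc = Z - z_pred[:, None]
    Pzz = w * Zc @ Zc.T + R
    Pxz = w * (X2 - m_pred[:, None]) @ Zc.T
    K = Pxz @ np.linalg.inv(Pzz)
    return m_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T
```

Because the cubature rule is exact for polynomials up to third degree, a linear model recovers the classical Kalman update exactly, which gives a convenient correctness check.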

  4. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics

    PubMed Central

    2017-01-01

Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy showed only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. 
Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating models of a variety of biological phenomena where the underlying process dynamics unfold at time scales faster than those seen in collected measurements. PMID:28727850

  5. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    PubMed

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids, one can efficiently reconstruct the current source density (CSD) using the inverse Current Source Density (iCSD) method. It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). Using test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points, we show that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
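The simplest CSD estimator in this family of methods, the traditional approximation rather than the full forward-model inversion used in iCSD, is the negative second spatial difference of the potential along an electrode axis; a minimal sketch:

```python
import numpy as np

def csd_estimate(phi, h=1.0, sigma=1.0):
    """Traditional CSD estimate along one electrode axis: the negative
    second spatial difference of the potential,
        CSD_i ~ -sigma * (phi[i-1] - 2*phi[i] + phi[i+1]) / h**2,
    for potentials phi at equally spaced contacts (spacing h,
    extracellular conductivity sigma)."""
    return -sigma * (phi[:-2] - 2.0 * phi[1:-1] + phi[2:]) / h ** 2
```

A quadratic potential profile has constant curvature and therefore yields a constant CSD, which makes a quick sanity check.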

  6. Design of cognitive engine for cognitive radio based on the rough sets and radial basis function neural network

    NASA Astrophysics Data System (ADS)

    Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli

    2013-03-01

Cognitive radio (CR) is an intelligent wireless communication system which can dynamically adjust its parameters to improve system performance in response to environmental changes and quality-of-service demands. The core technology for CR is the design of the cognitive engine, which introduces reasoning and learning methods from the field of artificial intelligence to achieve perception, adaptation, and learning capabilities. Considering the dynamic wireless environment and its demands, this paper proposes a design of a cognitive engine based on rough sets (RS) and a radial basis function neural network (RBF_NN). The method uses prior knowledge and environment information processed by the RS module to train the RBF_NN, and the learned model is then used to reconfigure communication parameters so as to allocate resources rationally and improve system performance. After training, the learning model's performance is evaluated on two benchmark functions. The simulation results demonstrate the effectiveness of the model, and the proposed cognitive engine can effectively achieve the goal of learning and reconfiguration in cognitive radio.
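The RBF_NN component can be sketched independently of the rough-set preprocessing; the following is a minimal Gaussian-RBF network with output weights fit by regularized least squares (illustrative centers and width, not the paper's configuration):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial-basis activations for inputs X (m x d)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_fit(X, y, centers, width, reg=1e-8):
    """Fit RBF output weights by regularized least squares."""
    Phi = rbf_design(X, centers, width)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]),
                           Phi.T @ y)

def rbf_predict(X, centers, width, W):
    """Network output: weighted sum of the Gaussian basis responses."""
    return rbf_design(X, centers, width) @ W
```

Fitting a smooth benchmark function such as sin(x) illustrates the training/evaluation loop the paper describes.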

  7. Regional estimation of groundwater arsenic concentrations through systematical dynamic-neural modeling

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min

    2013-08-01

    Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, and over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) approach for effectively estimating regional As-contaminated water quality by using easily measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network and four statistical techniques: the Nonlinear Autoregressive with eXogenous input (NARX) network, the Gamma test, cross-validation, the Bayesian regularization method, and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. 
The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the estimation of missing, hazardous or costly data to facilitate water resources management.
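A NARX model predicts y(t) from lagged outputs and lagged exogenous inputs. As a minimal sketch of the regressor construction (with a linear readout standing in for the paper's neural network, and hypothetical lag orders):

```python
import numpy as np

def narx_features(y, u, ny, nu):
    """Regressors [y(t-1..t-ny), u(t-1..t-nu)] and targets y(t) for
    one-step-ahead NARX prediction."""
    lag = max(ny, nu)
    rows, tgt = [], []
    for t in range(lag, len(y)):
        rows.append(np.concatenate([y[t - ny:t][::-1], u[t - nu:t][::-1]]))
        tgt.append(y[t])
    return np.array(rows), np.array(tgt)
```

With a linear readout fit by least squares, the regressors recover a known noise-free ARX system exactly, which verifies the lag bookkeeping.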

  8. M-type potassium conductance controls the emergence of neural phase codes: a combined experimental and neuron modelling study

    PubMed Central

    Kwag, Jeehyun; Jang, Hyun Jae; Kim, Mincheol; Lee, Sujeong

    2014-01-01

    Rate and phase codes are believed to be important in neural information processing. Hippocampal place cells provide a good example where both coding schemes coexist during spatial information processing. Spike rate increases in the place field, whereas spike phase precesses relative to the ongoing theta oscillation. However, the intrinsic mechanism that allows a single neuron to generate spike output patterns containing both neural codes is unknown. Using dynamic clamp, we applied in vivo-like subthreshold dynamics of place cells to in vitro CA1 pyramidal neurons to establish an in vitro model of spike phase precession. Using this in vitro model, we show that membrane potential oscillation (MPO) dynamics is important in the emergence of spike phase codes: blocking the slowly activating, non-inactivating K+ current (IM), which is known to control subthreshold MPO, disrupts MPO and abolishes spike phase precession. We verify the importance of adaptive IM in the generation of phase codes using both an adaptive integrate-and-fire and a Hodgkin–Huxley (HH) neuron model. In particular, using the HH model, we further show that it is the perisomatically located IM with slow activation kinetics that is crucial for the generation of phase codes. These results suggest an important functional role of IM in single neuron computation, where IM serves as an intrinsic mechanism allowing for dual rate and phase coding in single neurons. PMID:25100320
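The adaptive integrate-and-fire mechanism invoked above can be sketched with a spike-triggered adaptation current standing in for the IM conductance (illustrative constants, not fitted to CA1 data):

```python
import numpy as np

def adaptive_if(I=1.5, dw=0.15, tau_w=200.0, T=1000.0, dt=0.1):
    """Leaky integrate-and-fire neuron with a slow spike-triggered
    adaptation current w, a crude stand-in for the M-type K+ conductance:
    each spike increments w, which decays with time constant tau_w and
    subtracts from the drive, stretching successive interspike intervals."""
    v, w, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt * (-v + I - w)       # membrane integration
        w += dt * (-w / tau_w)       # slow decay of adaptation
        if v >= 1.0:                 # threshold crossing
            spikes.append(step * dt)
            v = 0.0                  # reset
            w += dw                  # spike-triggered adaptation increment
    return np.array(spikes)
```

Because w accumulates across spikes, successive interspike intervals lengthen, the spike-frequency adaptation signature attributed to IM.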

  9. Quasi-periodic patterns (QPP): large-scale dynamics in resting state fMRI that correlate with local infraslow electrical activity.

    PubMed

    Thompson, Garth John; Pan, Wen-Ju; Magnuson, Matthew Evan; Jaeger, Dieter; Keilholz, Shella Dawn

    2014-01-01

    Functional connectivity measurements from resting state blood-oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) are proving a powerful tool to probe both normal brain function and neuropsychiatric disorders. However, the neural mechanisms that coordinate these large networks are poorly understood, particularly in the context of the growing interest in network dynamics. Recent work in anesthetized rats has shown that the spontaneous BOLD fluctuations are tightly linked to infraslow local field potentials (LFPs) that are seldom recorded but comparable in frequency to the slow BOLD fluctuations. These findings support the hypothesis that long-range coordination involves low frequency neural oscillations and establishes infraslow LFPs as an excellent candidate for probing the neural underpinnings of the BOLD spatiotemporal patterns observed in both rats and humans. To further examine the link between large-scale network dynamics and infraslow LFPs, simultaneous fMRI and microelectrode recording were performed in anesthetized rats. Using an optimized filter to isolate shared components of the signals, we found that time-lagged correlation between infraslow LFPs and BOLD is comparable in spatial extent and timing to a quasi-periodic pattern (QPP) found from BOLD alone, suggesting that fMRI-measured QPPs and the infraslow LFPs share a common mechanism. As fMRI allows spatial resolution and whole brain coverage not available with electroencephalography, QPPs can be used to better understand the role of infraslow oscillations in normal brain function and neurological or psychiatric disorders.
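Time-lagged correlation between two signals, the core quantity in the analysis above, reduces to correlating progressively shifted copies; a minimal sketch (not the authors' optimized-filter pipeline):

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation of x and y at integer lags -max_lag..max_lag.
    A peak at positive lag L means y trails x by L samples."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for L in lags:
        if L >= 0:
            a, b = x[:len(x) - L], y[L:]     # shift y later
        else:
            a, b = x[-L:], y[:L]             # shift y earlier
        r.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(r)
```

The peak of the lag profile identifies the delay between the two signals, the same logic used to compare infraslow LFP and BOLD timing.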

  10. Quasi-periodic patterns (QPP): large-scale dynamics in resting state fMRI that correlate with local infraslow electrical activity

    PubMed Central

    Thompson, Garth John; Pan, Wen-Ju; Magnuson, Matthew Evan; Jaeger, Dieter; Keilholz, Shella Dawn

    2013-01-01

    Functional connectivity measurements from resting state blood-oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) are proving a powerful tool to probe both normal brain function and neuropsychiatric disorders. However, the neural mechanisms that coordinate these large networks are poorly understood, particularly in the context of the growing interest in network dynamics. Recent work in anesthetized rats has shown that the spontaneous BOLD fluctuations are tightly linked to infraslow local field potentials (LFPs) that are seldom recorded but comparable in frequency to the slow BOLD fluctuations. These findings support the hypothesis that long-range coordination involves low frequency neural oscillations and establishes infraslow LFPs as an excellent candidate for probing the neural underpinnings of the BOLD spatiotemporal patterns observed in both rats and humans. To further examine the link between large-scale network dynamics and infraslow LFPs, simultaneous fMRI and microelectrode recording were performed in anesthetized rats. Using an optimized filter to isolate shared components of the signals, we found that time-lagged correlation between infraslow LFPs and BOLD is comparable in spatial extent and timing to a quasi-periodic pattern (QPP) found from BOLD alone, suggesting that fMRI-measured QPPs and the infraslow LFPs share a common mechanism. As fMRI allows spatial resolution and whole brain coverage not available with electroencephalography, QPPs can be used to better understand the role of infraslow oscillations in normal brain function and neurological or psychiatric disorders. PMID:24071524

  11. Neural networks for self-learning control systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Derrick H.; Widrow, Bernard

    1990-01-01

    It is shown how a neural network can learn of its own accord to control a nonlinear dynamic system. An emulator, a multilayered neural network, learns to identify the system's dynamic characteristics. The controller, another multilayered neural network, next learns to control the emulator. The self-trained controller is then used to control the actual dynamic system. The learning process continues as the emulator and controller improve and track the physical process. An example is given to illustrate these ideas. The 'truck backer-upper,' a neural network controller that steers a trailer truck while the truck is backing up to a loading dock, is demonstrated. The controller is able to guide the truck to the dock from almost any initial position. The technique explored should be applicable to a wide variety of nonlinear control problems.
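The emulator-then-controller idea can be illustrated in miniature: identify a model of the plant from observed transitions, then invert the learned model to choose controls. The sketch below uses a scalar linear plant and least squares in place of the paper's multilayered networks (purely illustrative):

```python
import numpy as np

def learn_emulator(xs, us, xs_next):
    """Least-squares 'emulator' of the plant: x[t+1] ~ a*x[t] + b*u[t]."""
    coef, *_ = np.linalg.lstsq(np.column_stack([xs, us]), xs_next,
                               rcond=None)
    return coef  # (a, b)

def control_from_emulator(x, target, a, b):
    """Invert the learned model: choose u so the predicted next state
    hits the target, u = (target - a*x) / b."""
    return (target - a * x) / b
```

On a noise-free plant the emulator is recovered exactly, and the inverted model drives the true system to the target in one step.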

  12. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks

    PubMed Central

    Miconi, Thomas

    2017-01-01

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
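A minimal illustration of learning from a delayed, phasic, scalar reward is weight perturbation with a running baseline, a simpler relative of the paper's rule (which uses Hebbian eligibility traces in recurrent networks); input pattern and target here are hypothetical:

```python
import numpy as np

def reward_modulated_learning(steps=3000, lr=0.5, sigma=0.1, seed=0):
    """Weight perturbation guided only by a delayed scalar reward:
    perturb, observe the end-of-trial reward, and move the weights along
    the perturbation scaled by (reward - running baseline)."""
    rng = np.random.default_rng(seed)
    x = np.array([1.0, -0.5, 0.3])           # hypothetical input pattern
    target = 0.7                              # hypothetical desired output
    w = np.zeros(3)
    baseline = 0.0
    for _ in range(steps):
        xi = sigma * rng.standard_normal(3)   # exploratory perturbation
        reward = -((w + xi) @ x - target) ** 2    # delayed scalar reward
        w += lr * (reward - baseline) * xi        # reward-modulated update
        baseline += 0.2 * (reward - baseline)     # running-average baseline
    return w, x, target
```

In expectation the update follows the reward gradient, so the network output converges toward the target even though no moment-to-moment error signal is available.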

  13. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    PubMed

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.

  14. Automatic Target Recognition Using Nonlinear Autoregressive Neural Networks

    DTIC Science & Technology

    2014-03-27

    Lee and Chang (2009) employed a NARXNet for studying the thermodynamics in a pulsating heat pipe (PHP), a type of cooling device which contains...in Thermal Dynamics Identification of a Pulsating Heat Pipe. Energy Conversion and Management, 1069-1078. Lisboa, P. J. (2002). A review of...function by adjusting the values of the connections between elements. This flexibility allows ANNs to perform complex functions in fields to include

  15. Mittag-Leffler stability of fractional-order neural networks in the presence of generalized piecewise constant arguments.

    PubMed

    Wu, Ailong; Liu, Ling; Huang, Tingwen; Zeng, Zhigang

    2017-01-01

    Neurodynamic systems are an emerging research field. Understanding the essential representations of neural activity makes neurodynamics an important question in cognitive system research. This paper investigates Mittag-Leffler stability of a class of fractional-order neural networks in the presence of generalized piecewise constant arguments. To identify the computational principles at work in mathematical and computational analysis, the existence and uniqueness of the solution of the neurodynamic system is the first prerequisite. We prove that existence and uniqueness of the solution of the network hold when certain conditions are satisfied. In addition, a self-active neurodynamic system demands stable internal dynamical states (equilibria). The main emphasis is therefore on several sufficient conditions guaranteeing a unique equilibrium point. Furthermore, to provide deeper explanations of the neurodynamic process, Mittag-Leffler stability is studied in detail. The established results are based on the theories of fractional differential equations and differential equations with generalized piecewise constant arguments. The derived criteria improve and extend the existing related results.
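The Mittag-Leffler function E_{α,β}(z) = Σ_k z^k / Γ(αk + β), which sets the decay envelope in Mittag-Leffler stability, can be evaluated by truncating its series; a sketch adequate for moderate |z| (illustrative, not a production evaluator):

```python
import math

def mittag_leffler(z, alpha, beta=1.0, n_terms=60):
    """Truncated series for E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k
    + beta). alpha = beta = 1 recovers exp(z); alpha = 2, beta = 1 gives
    cosh(sqrt(z))."""
    return sum(z ** k / math.gamma(alpha * k + beta)
               for k in range(n_terms))
```

The two classical special cases provide exact checks on the truncation.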

  16. Carbon nanotube multilayered nanocomposites as multifunctional substrates for actuating neuronal differentiation and functions of neural stem cells.

    PubMed

    Shao, Han; Li, Tingting; Zhu, Rong; Xu, Xiaoting; Yu, Jiandong; Chen, Shengfeng; Song, Li; Ramakrishna, Seeram; Lei, Zhigang; Ruan, Yiwen; He, Liumin

    2018-08-01

    Carbon nanotubes (CNTs) have shown potential applications in neuroscience as growth substrates owing to their numerous unique properties. However, a key concern in the fabrication of homogeneous composites is the serious aggregation of CNTs during incorporation into the biomaterial matrix. Moreover, the regulation mechanism of CNT-based substrates on neural differentiation remains unclear. Here, a novel strategy was introduced for the construction of CNT nanocomposites via layer-by-layer assembly of negatively charged multi-walled CNTs and positively charged poly(dimethyldiallylammonium chloride). Results demonstrated that the CNT-multilayered nanocomposites provided a potent regulatory signal over neural stem cells (NSCs), including cell adhesion, viability, differentiation, neurite outgrowth, and electrophysiological maturation of NSC-derived neurons. Importantly, the dynamic molecular mechanisms in the NSC differentiation involved the integrin-mediated interactions between NSCs and CNT multilayers, thereby activating focal adhesion kinase, subsequently triggering downstream signaling events to regulate neuronal differentiation and synapse formation. This study provided insights for future applications of CNT-multilayered nanomaterials in neural fields as potent modulators of stem cell behavior.

  17. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.
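The setting for coherence resonance can be sketched with a subthreshold leaky integrate-and-fire neuron that fires only by virtue of noise; sweeping the noise strength and computing the coefficient of variation of the interspike intervals is how the resonance is probed (illustrative constants, not the cultured-network model):

```python
import numpy as np

def lif_isis(noise_sigma, I=0.8, T=500.0, dt=0.1, seed=1):
    """Leaky integrate-and-fire neuron driven below threshold (I < 1), so
    spikes occur only when noise carries the voltage over threshold.
    Returns the interspike intervals."""
    rng = np.random.default_rng(seed)
    v, last, isis, t = 0.0, None, [], 0.0
    for _ in range(int(T / dt)):
        v += dt * (I - v) + noise_sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if v >= 1.0:                 # noise-induced threshold crossing
            v = 0.0
            if last is not None:
                isis.append(t - last)
            last = t
    return np.array(isis)
```

Plotting the coefficient of variation, isis.std() / isis.mean(), against noise_sigma exhibits a minimum at intermediate noise, the signature of coherence resonance.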

  18. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.

    PubMed

    Yu, T; Sejnowski, T J; Cauwenberghs, G

    2011-10-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.
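The Morris-Lecar model emulated in hardware here can also be integrated directly; the sketch below uses a standard Hopf-regime parameter set (textbook values, not the chip's programmed parameters) and exhibits tonic spiking:

```python
import numpy as np

def morris_lecar(I=100.0, T=1000.0, dt=0.05):
    """Euler integration of the two-variable Morris-Lecar neuron with a
    standard Hopf-regime parameter set; at I = 100 the rest state is
    unstable and the model fires tonically. Returns the voltage trace."""
    C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0
    VCa, VK, VL = 120.0, -84.0, -60.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    V, n = -60.0, 0.0
    trace = np.empty(int(T / dt))
    for i in range(trace.size):
        m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))   # fast Ca activation
        n_inf = 0.5 * (1 + np.tanh((V - V3) / V4))   # K activation target
        tau_n = 1.0 / (phi * np.cosh((V - V3) / (2 * V4)))
        dV = (I - gCa * m_inf * (V - VCa) - gK * n * (V - VK)
              - gL * (V - VL)) / C
        V += dt * dV
        n += dt * (n_inf - n) / tau_n
        trace[i] = V
    return trace
```

Varying the calcium and potassium parameters, as the chip does electronically, moves the model between quiescent, tonic, and bursting regimes.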

  19. Containment control of networked autonomous underwater vehicles: A predictor-based neural DSC design.

    PubMed

    Peng, Zhouhua; Wang, Dan; Wang, Wei; Liu, Lu

    2015-11-01

    This paper investigates the containment control problem of networked autonomous underwater vehicles in the presence of model uncertainty and unknown ocean disturbances. A predictor-based neural dynamic surface control design method is presented to develop the distributed adaptive containment controllers, under which the trajectories of follower vehicles nearly converge to the dynamic convex hull spanned by multiple reference trajectories over a directed network. Prediction errors, rather than tracking errors, are used to update the neural adaptation laws, which are independent of the tracking error dynamics, resulting in two time-scales to govern the entire system. The stability property of the closed-loop network is established via Lyapunov analysis, and transient property is quantified in terms of L2 norms of the derivatives of neural weights, which are shown to be smaller than the classical neural dynamic surface control approach. Comparative studies are given to show the substantial improvements of the proposed new method.

  20. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system.

    PubMed

    Janson, Natalia B; Marsden, Christopher J

    2017-12-05

    It is well known that architecturally the brain is a neural network, i.e. a collection of many relatively simple units coupled flexibly. However, it has been unclear how the possession of this architecture enables higher-level cognitive functions, which are unique to the brain. Here, we consider the brain from the viewpoint of dynamical systems theory and hypothesize that the unique feature of the brain, the self-organized plasticity of its architecture, could represent the means of enabling the self-organized plasticity of its velocity vector field. We propose that, conceptually, the principle of cognition could amount to the existence of appropriate rules governing self-organization of the velocity field of a dynamical system with an appropriate account of stimuli. To support this hypothesis, we propose a simple non-neuromorphic mathematical model with a plastic self-organized velocity field, which has no prototype in the physical world. This system is shown to be capable of basic cognition, which is illustrated numerically and with musical data. Our conceptual model could provide an additional insight into the working principles of the brain. Moreover, hardware implementations of plastic velocity fields self-organizing according to various rules could pave the way to creating artificial intelligence of a novel type.

  1. The Importance of Encoding-Related Neural Dynamics in the Prediction of Inter-Individual Differences in Verbal Working Memory Performance

    PubMed Central

    Majerus, Steve; Salmon, Eric; Attout, Lucie

    2013-01-01

    Studies of brain-behaviour interactions in the field of working memory (WM) have associated WM success with activation of a fronto-parietal network during the maintenance stage, mainly for visuo-spatial WM. Using an inter-individual differences approach, we demonstrate here the equal importance of neural dynamics during the encoding stage, in the context of verbal WM tasks, which are characterized by encoding phases of long duration and sustained attentional demands. Participants encoded and maintained 5-word lists, half of them containing an unexpected word intended to disturb WM encoding and the associated task-related attention processes. We observed that inter-individual differences in WM performance for lists containing disturbing stimuli were related to activation levels in a region previously associated with task-related attentional processing, the left intraparietal sulcus (IPS), during stimulus encoding but not maintenance; functional connectivity strength between the left IPS and lateral prefrontal cortex (PFC) further predicted WM performance. This study highlights the critical role, during WM encoding, of neural substrates involved in task-related attentional processes for predicting inter-individual differences in verbal WM performance, and, more generally, provides support for attention-based models of WM. PMID:23874935

  2. An information entropy model on clinical assessment of patients based on the holographic field of meridian

    NASA Astrophysics Data System (ADS)

    Wu, Jingjing; Wu, Xinming; Li, Pengfei; Li, Nan; Mao, Xiaomei; Chai, Lihe

    2017-04-01

    The meridian system is not only the basis of traditional Chinese medicine (TCM) methods (e.g. acupuncture, massage), but also the core of TCM's basic theory. This paper introduces a new informational perspective to understand the reality and the holographic field of the meridian. Based on the maximum information entropy principle (MIEP), a dynamic equation for the holographic field has been deduced, which reflects the evolutionary characteristics of the meridian. Using a self-organizing artificial neural network as the algorithm, the evolutionary dynamic equation of the holographic field can be resolved to assess properties of meridians and clinically diagnose the health characteristics of patients. Finally, through some cases from clinical patients (e.g. a 30-year-old male patient, an apoplectic patient, an epilepsy patient), we use this model to assess the evolutionary properties of meridians. It is proposed that this model not only has significant implications in revealing the essence of the meridian in TCM, but also may play a guiding role in clinical assessment of patients based on the holographic field of meridians.

  3. Integration of Gravitational Torques in Cerebellar Pathways Allows for the Dynamic Inverse Computation of Vertical Pointing Movements of a Robot Arm

    PubMed Central

    Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian

    2009-01-01

    Background Several authors suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor predictions of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model) whereas others suggested that it computes sensorimotor predictions (internal forward model). Methodology/Principal Findings This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions without calculating an explicit inverse computation. By using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonistic McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movements to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. Conclusions/Significance This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420
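The gravitational torque whose internal representation is at issue is elementary for a single link; a sketch with hypothetical arm parameters (not the robot's):

```python
import numpy as np

def gravity_torque(theta, m=2.0, l=0.3, g=9.81):
    """Gravitational torque about the joint for a single link of mass m
    (kg) with centre of mass at distance l (m), elevated theta rad above
    the horizontal: tau = m*g*l*cos(theta)."""
    return m * g * l * np.cos(theta)

def inverse_dynamics(theta, theta_ddot, J=0.18, m=2.0, l=0.3, g=9.81):
    """Torque required to produce angular acceleration theta_ddot against
    gravity for the same link (inertia J, no friction):
    tau = J*theta_ddot + m*g*l*cos(theta)."""
    return J * theta_ddot + gravity_torque(theta, m, l, g)
```

The cosine dependence is why identical-looking upward and downward movements require different commands, the asymmetry the cerebellar model must learn.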

  4. System Identification for Nonlinear Control Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Linse, Dennis J.

    1990-01-01

    An approach to incorporating artificial neural networks in nonlinear, adaptive control systems is described. The controller contains three principal elements: a nonlinear inverse dynamic control law whose coefficients depend on a comprehensive model of the plant, a neural network that models system dynamics, and a state estimator whose outputs drive the control law and train the neural network. Attention is focused on the system identification task, which combines an extended Kalman filter with generalized spline function approximation. Continual learning is possible during normal operation, without taking the system off line for specialized training. Nonlinear inverse dynamic control requires smooth derivatives as well as function estimates, imposing stringent goals on the approximating technique.
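
    The identification step can be sketched with a scalar example: augment the state with the unknown parameter and run an extended Kalman filter, so the parameter is learned online from noisy measurements. This stand-in omits the paper's spline function approximation; all values below are illustrative.

```python
import numpy as np

# Identify the unknown decay parameter `a` of x[k+1] = a*x[k] + u[k]
# by augmenting the EKF state to s = (x, a).
rng = np.random.default_rng(1)
a_true = 0.9
x = 0.0
z_hist, u_hist = [], []
for k in range(200):
    u = np.sin(0.1 * k)                      # persistently exciting input
    x = a_true * x + u
    z_hist.append(x + rng.normal(0.0, 0.05)) # noisy measurement of x
    u_hist.append(u)

s = np.array([0.0, 0.5])                     # initial estimate of (x, a)
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-6])                    # parameter drifts very slowly
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])
for z, uin in zip(z_hist, u_hist):
    # predict: x' = a*x + u, a' = a; F is the Jacobian of this transition
    F = np.array([[s[1], s[0]], [0.0, 1.0]])
    s = np.array([s[1] * s[0] + uin, s[1]])
    P = F @ P @ F.T + Q
    # update with the scalar measurement z
    y = z - s[0]
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    s = s + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P

a_est = s[1]
```

    The filter runs during "normal operation" of the system, mirroring the continual-learning point of the abstract.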

  5. The Dynamics of the Stapedial Acoustic Reflex.

    NASA Astrophysics Data System (ADS)

    Moss, Sherrin Mary

    Available from UMI in association with The British Library. This thesis aims to separate the neural and muscular components of the stapedial acoustic reflex, both anatomically and physiologically. It aims to present a hypothesis to account for the so-far unexplained differences between ipsilateral and contralateral reflex characteristics, and to achieve a greater understanding of the mechanisms underlying the reflex dynamics. A technique enabling faithful reproduction of the time course of the reflex is used throughout the experimental work. The technique measures tympanic membrane displacement resulting from reflex stapedius muscle contraction, and the recorded response can be directly related to the mechanics of the middle ear and stapedius muscle contraction. Some development of the technique is undertaken by the author. A model of the reflex neural arc and stapedius muscle dynamics, based upon a second-order system, is developed. The model is unique in that it includes a latency in the ipsilateral negative feedback loop; the oscillations commonly observed in reflex responses are shown to be produced by this latency. The model demonstrates and explains the complex relationships between neural and muscle dynamic parameters observed in the experimental work. This more comprehensive understanding of the interaction between the stapedius dynamics and the neural arc of the reflex would not normally have been possible with human subjects and a non-invasive measurement technique. Evidence from the experimental work revealed the ipsilateral reflex to have, on average, a 5 dB lower threshold than the contralateral reflex. The oscillatory characteristics and steady-state response of the contralateral reflex are also seen to be significantly different from those of the ipsilateral reflex. A hypothesis to account for the experimental observations is proposed. 
It is propounded that chemical neurotransmitters, and their effect upon the contralateral reflex arc from the site of the superior olivary complex to the motoneurones innervating the stapedius, account for the difference between the contralateral and ipsilateral reflex thresholds and dynamic characteristics. In the past two years the measurement technique used for the experimental work has developed from an audiological to a neurological diagnostic tool. This has enabled the results from the study to be applied in the field for valuable biomechanical and neurological explanations of the reflex response. (Abstract shortened by UMI.).
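
    The model's key mechanism, ringing produced by a latency in a negative feedback loop around a second-order plant, can be illustrated with a toy simulation. The gain, natural frequency, and delay below are arbitrary illustrative values, not the thesis's fitted reflex parameters.

```python
import numpy as np

# Second-order "muscle" plant driven by a step command, with delayed
# negative feedback of its own output. All parameters are illustrative.
def simulate_reflex(latency_steps, dt=1e-3, gain=3.0, wn=40.0, zeta=1.0,
                    t_end=0.5):
    n = int(t_end / dt)
    x = np.zeros(n)      # plant output (e.g. displacement)
    v = 0.0              # its rate of change
    for k in range(1, n):
        j = k - 1 - latency_steps
        delayed = x[j] if j >= 0 else 0.0
        u = 1.0 - gain * delayed                       # step minus feedback
        acc = wn ** 2 * (u - x[k - 1]) - 2.0 * zeta * wn * v
        v += acc * dt
        x[k] = x[k - 1] + v * dt
    return x

resp_delay = simulate_reflex(latency_steps=40)    # 40 ms loop latency
resp_nodelay = simulate_reflex(latency_steps=0)

def ringing(x):
    """Peak-to-peak amplitude over the second half of the record."""
    return float(np.ptp(x[len(x) // 2:]))
```

    Even though the plant itself is critically damped (zeta = 1), the 40 ms latency turns the loop oscillatory, which is the qualitative point of the model.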

  6. A dynamical systems approach for estimating phase interactions between rhythms of different frequencies from experimental data

    PubMed Central

    Goto, Takahiro; Aoyagi, Toshio

    2018-01-01

    Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. Neural oscillation is a rhythmic neural activity that can be easily observed by noninvasive electroencephalography (EEG). Neural oscillations show same-frequency and cross-frequency synchronization for various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results. PMID:29337999
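
    The estimation idea can be sketched with a toy phase model: simulate a phase difference obeying d(psi)/dt = delta + a*sin(psi), then recover the coupling coefficients by regressing the numerical derivative onto a Fourier basis. The model form and values are illustrative; the paper fits richer coupling functions to experimental data.

```python
import numpy as np

# Generate a drifting phase difference with known dynamics.
delta_true, a_true = 0.5, -0.3
dt, n = 0.001, 50_000
psi = np.empty(n)
psi[0] = 0.0
for k in range(n - 1):
    psi[k + 1] = psi[k] + dt * (delta_true + a_true * np.sin(psi[k]))

# Regress the finite-difference derivative on {1, sin(psi), cos(psi)}
# to estimate the coupling function from the "data" alone.
dpsi = np.gradient(psi, dt)
X = np.column_stack([np.ones(n), np.sin(psi), np.cos(psi)])
coef, *_ = np.linalg.lstsq(X, dpsi, rcond=None)
delta_hat, a_hat, b_hat = coef
```

    The same regression applies when psi is a generalized phase difference n*theta1 - m*theta2 between rhythms of different frequencies.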

  7. A theory of neural dimensionality, dynamics, and measurement

    NASA Astrophysics Data System (ADS)

    Ganguli, Surya

    In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing millions of behaviorally relevant neurons. Dimensionality reduction has often shown that such datasets are strikingly simple; they can be described using a much smaller number of dimensions than the number of recorded neurons, and the resulting projections onto these dimensions yield a remarkably insightful dynamical portrait of circuit computation. This ubiquitous simplicity raises several profound and timely conceptual questions. What is the origin of this simplicity and its implications for the complexity of brain dynamics? Would neuronal datasets become more complex if we recorded more neurons? How and when can we trust dynamical portraits obtained from only hundreds of neurons in circuits containing millions of neurons? We present a theory that answers these questions, and test it using neural data recorded from reaching monkeys. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover dynamical portraits in such under-sampled measurement regimes, and quantitative guidelines for the design of future experiments. Moreover, it reveals the existence of phase transition boundaries in our ability to successfully decode cognition and behavior as a function of the number of recorded neurons, the complexity of the task, and the smoothness of neural dynamics.
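
    The random projection picture can be sketched directly: a 3-dimensional latent trajectory embedded in a 1000-neuron "circuit" remains 3-dimensional in a random 100-neuron subsample. The sizes and the synthetic trajectory are assumptions for illustration only.

```python
import numpy as np

# A smooth low-dimensional latent trajectory...
rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0 * np.pi, 500)
latent = np.column_stack([np.sin(t), np.cos(t), np.sin(2.0 * t)])  # T x 3

# ...embedded into a large population of "neurons".
embed = rng.normal(size=(3, 1000))
population = latent @ embed                # T x 1000 firing rates

# Recording 100 random neurons acts as a random projection.
recorded = population[:, rng.choice(1000, 100, replace=False)]
centered = recorded - recorded.mean(axis=0)
sv = np.linalg.svd(centered, compute_uv=False)
var_top3 = float((sv[:3] ** 2).sum() / (sv ** 2).sum())
```

    The top three principal components of the subsample capture essentially all of the variance, so the dynamical portrait survives severe undersampling when the underlying dynamics are genuinely low dimensional.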

  8. Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Williams-Hayes, Peggy; Kaneshige, John T.; Stachowiak, Susan J.

    2006-01-01

    Description of the performance of a simplified dynamic inversion controller with neural network augmentation follows. Simulation studies focus on the results with and without neural network adaptation through the use of an F-15 aircraft simulator that has been modified to include canards. Simulated control law performance with a surface failure, in addition to an aerodynamic failure, is presented. The aircraft, with adaptation, attempts to minimize the inertial cross-coupling effect of the failure (a control derivative anomaly associated with a jammed control surface). The dynamic inversion controller calculates necessary surface commands to achieve desired rates. The dynamic inversion controller uses approximate short period and roll axis dynamics. The yaw axis controller is a sideslip rate command system. Methods are described to reduce the cross-coupling effect and maintain adequate tracking errors for control surface failures. The aerodynamic failure destabilizes the pitching moment due to angle of attack. The results show that control of the aircraft with the neural networks is easier (more damped) than without the neural networks. Simulation results show neural network augmentation of the controller improves performance with aerodynamic and control surface failures in terms of tracking error and cross-coupling reduction.
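
    The core of dynamic inversion can be sketched for a single axis: specify desired error dynamics, then solve the plant model for the surface command. The stability and control derivatives below are made-up scalars, not F-15 values.

```python
import numpy as np

# Illustrative pitch-rate plant: q_dot = a*q + b*delta
a, b = -0.7, 2.0
K = 5.0                    # desired first-order error dynamics
dt, n = 0.001, 3000
q, q_cmd = 0.0, 1.0
for _ in range(n):
    q_dot_des = K * (q_cmd - q)         # desired rate of the rate
    delta = (q_dot_des - a * q) / b     # invert the model for the surface
    q += dt * (a * q + b * delta)       # plant response
final_q = q
```

    With a perfect model the closed loop reduces to q_dot = K*(q_cmd - q); the neural network augmentation in the paper exists precisely because the real model is imperfect (e.g. after a surface failure).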

  9. Adaptive Control Using Neural Network Augmentation for a Modified F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Williams-Hayes, Peggy; Karneshige, J. T.; Stachowiak, Susan J.

    2006-01-01

    Description of the performance of a simplified dynamic inversion controller with neural network augmentation follows. Simulation studies focus on the results with and without neural network adaptation through the use of an F-15 aircraft simulator that has been modified to include canards. Simulated control law performance with a surface failure, in addition to an aerodynamic failure, is presented. The aircraft, with adaptation, attempts to minimize the inertial cross-coupling effect of the failure (a control derivative anomaly associated with a jammed control surface). The dynamic inversion controller calculates necessary surface commands to achieve desired rates. The dynamic inversion controller uses approximate short period and roll axis dynamics. The yaw axis controller is a sideslip rate command system. Methods are described to reduce the cross-coupling effect and maintain adequate tracking errors for control surface failures. The aerodynamic failure destabilizes the pitching moment due to angle of attack. The results show that control of the aircraft with the neural networks is easier (more damped) than without the neural networks. Simulation results show neural network augmentation of the controller improves performance with aerodynamic and control surface failures in terms of tracking error and cross-coupling reduction.

  10. On the origin of reproducible sequential activity in neural circuits

    NASA Astrophysics Data System (ADS)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses are essential features of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. The SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, which describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise, in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
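
    A minimal sketch of this kind of winnerless competition: a three-unit generalized Lotka-Volterra network with an asymmetric inhibition matrix visits its saddle points in a reproducible sequence when a small noise floor is added. The inhibition matrix is a standard textbook choice, not one taken from the paper.

```python
import numpy as np

# Asymmetric inhibition: each unit suppresses its successor weakly and its
# predecessor strongly, producing the heteroclinic cycle 0 -> 1 -> 2 -> 0.
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])
rng = np.random.default_rng(3)
dt, n = 0.01, 30_000
a = np.array([0.9, 0.05, 0.02])       # firing rates of the three units
winners = []
for _ in range(n):
    # GLV dynamics plus a tiny positive noise floor that bounds dwell times
    a = a + dt * a * (1.0 - rho @ a) + 1e-5 * rng.random(3)
    a = np.clip(a, 1e-9, None)
    winners.append(int(np.argmax(a)))

visited = set(winners)
switches = sum(w1 != w0 for w0, w1 in zip(winners, winners[1:]))
```

    Each unit takes its turn as the most active one, and the order of the sequence is set by the matrix, not by the noise, which is the reproducibility property the paper formalizes.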

  11. On the origin of reproducible sequential activity in neural circuits.

    PubMed

    Afraimovich, V S; Zhigulin, V P; Rabinovich, M I

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses are essential features of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. The SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, which describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise, in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.

  12. Techniques for extracting single-trial activity patterns from large-scale neural recordings

    PubMed Central

    Churchland, Mark M; Yu, Byron M; Sahani, Maneesh; Shenoy, Krishna V

    2008-01-01

    Summary Large, chronically implanted arrays of microelectrodes are an increasingly common tool for recording from primate cortex, and can provide extracellular recordings from many (order of 100) neurons. While the desire for cortically based motor prostheses has helped drive their development, such arrays also offer great potential to advance basic neuroscience research. Here we discuss the utility of array recording for the study of neural dynamics. Neural activity often has dynamics beyond those driven directly by the stimulus. While governed by those dynamics, neural responses may nevertheless unfold differently for nominally identical trials, rendering many traditional analysis methods ineffective. We review recent studies – some employing simultaneous recording, some not – indicating that such variability is indeed present both during movement generation, and during the preceding premotor computations. In such cases, large-scale simultaneous recordings have the potential to provide an unprecedented view of neural dynamics at the level of single trials. However, this enterprise will depend not only on techniques for simultaneous recording, but also on the use and further development of analysis techniques that can appropriately reduce the dimensionality of the data, and allow visualization of single-trial neural behavior. PMID:18093826

  13. Artificial neural networks for control of a grid-connected rectifier/inverter under disturbance, dynamic and power converter switching conditions.

    PubMed

    Li, Shuhui; Fairbank, Michael; Johnson, Cameron; Wunsch, Donald C; Alonso, Eduardo; Proaño, Julio L

    2014-04-01

    Three-phase grid-connected converters are widely used in renewable and electric power system applications. Traditionally, grid-connected converters are controlled with standard decoupled d-q vector control mechanisms. However, recent studies indicate that such mechanisms show limitations in their applicability to dynamic systems. This paper investigates how to mitigate such restrictions using a neural network to control a grid-connected rectifier/inverter. The neural network implements a dynamic programming algorithm and is trained by using back-propagation through time. To enhance performance and stability under disturbance, additional strategies are adopted, including the use of integrals of error signals to the network inputs and the introduction of grid disturbance voltage to the outputs of a well-trained network. The performance of the neural-network controller is studied under typical vector control conditions and compared against conventional vector control methods, which demonstrates that the neural vector control strategy proposed in this paper is effective. Even in dynamic and power converter switching environments, the neural vector controller shows strong ability to trace rapidly changing reference commands, tolerate system disturbances, and satisfy control requirements for a faulted power system.

  14. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There are many studies of gas turbine engine identification using dynamic neural network models; the identification process should minimize errors between the model and the real object. Questions about how the training data sets of such neural networks are constructed are usually overlooked. This article presents a study of the influence of data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signals were used for creating training and testing data sets of dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created based on these types of training data sets, and each neural network was tested with the four types of test data sets. As a result, 16 transition processes from the four neural networks and four test data sets were compared with the corresponding solutions of the thermodynamic model. The error comparison was made across all neural network errors in each test data set, yielding the error value range of each test data set. It is shown that the error value ranges are small; therefore, the influence of data set type on identification accuracy is low.
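
    The experiment's logic can be sketched with a toy "engine" and a linear-in-features one-step predictor standing in for the neural network: identify on a step training signal, then evaluate on a mixed test signal. All dynamics and signal shapes below are illustrative assumptions.

```python
import numpy as np

# Toy first-order nonlinear map from fuel input u to rotor speed y.
def engine(y, u):
    return 0.85 * y + 0.2 * u - 0.05 * y * y

def run(u_seq):
    y, ys = 0.0, []
    for u in u_seq:
        y = engine(y, u)
        ys.append(y)
    return np.array(ys)

# STEP training signal vs MIXED test signal.
u_step = np.concatenate([np.zeros(50), np.ones(150)])
rng = np.random.default_rng(4)
u_mixed = np.clip(np.cumsum(rng.normal(0.0, 0.1, 200)), 0.0, 1.0)

# Fit a one-step predictor y[k+1] ~ f(y[k], u[k+1]) on the step data.
y_step = run(u_step)
feats = lambda y, u: np.column_stack([y, u, y * y])
X = feats(y_step[:-1], u_step[1:])
w, *_ = np.linalg.lstsq(X, y_step[1:], rcond=None)

# Evaluate one-step predictions on the unseen mixed data set.
y_mix = run(u_mixed)
pred = feats(y_mix[:-1], u_mixed[1:]) @ w
rmse = float(np.sqrt(np.mean((pred - y_mix[1:]) ** 2)))
```

    Here the feature set happens to span the true dynamics, so the step-trained model generalizes to the mixed signal; the paper's finding is the empirical analogue for trained neural networks.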

  15. A neural-network-based model for the dynamic simulation of the tire/suspension system while traversing road irregularities.

    PubMed

    Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano

    2008-09-01

    This paper deals with the simulation of the tire/suspension dynamics by using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between the output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict elastic bushings and tire dynamics behavior. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, harshness (NVH) performance due to its good computational efficiency and accuracy.

  16. Bio-inspired spiking neural network for nonlinear systems control.

    PubMed

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNNs) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs use temporal spike trains as inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which controllers developed by conventional techniques perform unsatisfactorily or are difficult to implement. SNN-based controllers exploit their capacity for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way in which SNNs encode information facilitates their hardware implementation compared to analog neurons. As in biological neural networks, they often require fewer neurons than controllers based on conventional artificial neural networks. In this work, these neuronal systems are imitated to perform the control of nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to train the controller. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed SNN-based control exhibits superior performance compared to other approaches based on neural networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.
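
    The building block of such controllers, a leaky integrate-and-fire (LIF) neuron, can be sketched in a few lines: a stronger input current is encoded as a higher spike rate. Parameters below are generic illustrative values, not those of the paper's controller.

```python
import numpy as np

# Minimal LIF neuron: leaky integration of a constant input current,
# with a hard threshold and reset. Returns the spike count over t_sim.
def lif_spike_count(i_in, t_sim=1.0, dt=1e-4, tau=0.02, v_th=1.0):
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (-v / tau + i_in / tau)   # membrane leak + drive
        if v >= v_th:                        # threshold crossing -> spike
            spikes += 1
            v = 0.0                          # reset
    return spikes

low = lif_spike_count(1.2)
high = lif_spike_count(3.0)
```

    This current-to-rate encoding is what lets a spiking layer carry an analog control signal in a binary, hardware-friendly form.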

  17. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    PubMed

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to t + 1 according to a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to simple motions of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to the dynamical structure.

  18. Effects of Aging on Cortical Neural Dynamics and Local Sleep Homeostasis in Mice

    PubMed Central

    Fisher, Simon P.; Cui, Nanyi; Peirson, Stuart N.; Foster, Russell G.

    2018-01-01

    Healthy aging is associated with marked effects on sleep, including its daily amount and architecture, as well as the specific EEG oscillations. Neither the neurophysiological underpinnings nor the biological significance of these changes are understood, and crucially the question remains whether aging is associated with reduced sleep need or a diminished capacity to generate sufficient sleep. Here we tested the hypothesis that aging may affect local cortical networks, disrupting the capacity to generate and sustain sleep oscillations, and with it the local homeostatic response to sleep loss. We performed chronic recordings of cortical neural activity and local field potentials from the motor cortex in young and older male C57BL/6J mice, during spontaneous waking and sleep, as well as during sleep after sleep deprivation. In older animals, we observed an increase in the incidence of non-rapid eye movement sleep local field potential slow waves and their associated neuronal silent (OFF) periods, whereas the overall pattern of state-dependent cortical neuronal firing was generally similar between ages. Furthermore, we observed that the response to sleep deprivation at the level of local cortical network activity was not affected by aging. Our data thus suggest that the local cortical neural dynamics and local sleep homeostatic mechanisms, at least in the motor cortex, are not impaired during healthy senescence in mice. This indicates that powerful protective or compensatory mechanisms may exist to maintain neuronal function stable across the life span, counteracting global changes in sleep amount and architecture. SIGNIFICANCE STATEMENT The biological significance of age-dependent changes in sleep is unknown but may reflect either a diminished sleep need or a reduced capacity to generate deep sleep stages. 
As aging has been linked to profound disruptions in cortical sleep oscillations and because sleep need is reflected in specific patterns of cortical activity, we performed chronic electrophysiological recordings of cortical neural activity during waking, sleep, and after sleep deprivation from young and older mice. We found that all main hallmarks of cortical activity during spontaneous sleep and recovery sleep after sleep deprivation were largely intact in older mice, suggesting that the well-described age-related changes in global sleep are unlikely to arise from a disruption of local network dynamics within the neocortex. PMID:29581380

  19. Effects of Aging on Cortical Neural Dynamics and Local Sleep Homeostasis in Mice.

    PubMed

    McKillop, Laura E; Fisher, Simon P; Cui, Nanyi; Peirson, Stuart N; Foster, Russell G; Wafford, Keith A; Vyazovskiy, Vladyslav V

    2018-04-18

    Healthy aging is associated with marked effects on sleep, including its daily amount and architecture, as well as the specific EEG oscillations. Neither the neurophysiological underpinnings nor the biological significance of these changes are understood, and crucially the question remains whether aging is associated with reduced sleep need or a diminished capacity to generate sufficient sleep. Here we tested the hypothesis that aging may affect local cortical networks, disrupting the capacity to generate and sustain sleep oscillations, and with it the local homeostatic response to sleep loss. We performed chronic recordings of cortical neural activity and local field potentials from the motor cortex in young and older male C57BL/6J mice, during spontaneous waking and sleep, as well as during sleep after sleep deprivation. In older animals, we observed an increase in the incidence of non-rapid eye movement sleep local field potential slow waves and their associated neuronal silent (OFF) periods, whereas the overall pattern of state-dependent cortical neuronal firing was generally similar between ages. Furthermore, we observed that the response to sleep deprivation at the level of local cortical network activity was not affected by aging. Our data thus suggest that the local cortical neural dynamics and local sleep homeostatic mechanisms, at least in the motor cortex, are not impaired during healthy senescence in mice. This indicates that powerful protective or compensatory mechanisms may exist to maintain neuronal function stable across the life span, counteracting global changes in sleep amount and architecture. SIGNIFICANCE STATEMENT The biological significance of age-dependent changes in sleep is unknown but may reflect either a diminished sleep need or a reduced capacity to generate deep sleep stages. 
As aging has been linked to profound disruptions in cortical sleep oscillations and because sleep need is reflected in specific patterns of cortical activity, we performed chronic electrophysiological recordings of cortical neural activity during waking, sleep, and after sleep deprivation from young and older mice. We found that all main hallmarks of cortical activity during spontaneous sleep and recovery sleep after sleep deprivation were largely intact in older mice, suggesting that the well-described age-related changes in global sleep are unlikely to arise from a disruption of local network dynamics within the neocortex. Copyright © 2018 McKillop et al.

  20. A connectionist-geostatistical approach for classification of deformation types in ice surfaces

    NASA Astrophysics Data System (ADS)

    Goetz-Weiss, L. R.; Herzfeld, U. C.; Hale, R. G.; Hunke, E. C.; Bobeck, J.

    2014-12-01

    Deformation is a class of highly non-linear geophysical processes from which one can infer other geophysical variables in a dynamical system. For example, in an ice-dynamic model, deformation is related to velocity, basal sliding, surface elevation changes, and the stress field at the surface as well as internal to a glacier. While many of these variables cannot be observed, deformation state can be an observable variable, because deformation in glaciers (once a viscosity threshold is exceeded) manifests itself in crevasses. Given the amount of information that can be inferred from observing surface deformation, an automated method for classifying surface imagery becomes increasingly desirable. In this paper a Neural Network is used to recognize classes of crevasse types over the Bering Bagley Glacier System (BBGS) during a surge (2011-2013-?). A surge is a spatially and temporally highly variable and rapid acceleration of the glacier. Therefore, many different crevasse types occur in a short time frame and in close proximity, and these crevasse fields hold information on the geophysical processes of the surge. The connectionist-geostatistical approach uses directional experimental (discrete) variograms to parameterize images into a form that the Neural Network can recognize. Recognizing that each surge wave results in different crevasse types and that environmental conditions affect the appearance in imagery, we have developed semi-automated pre-training software to adapt the Neural Network to changing conditions. The method is applied to airborne and satellite imagery to classify surge crevasses from the BBGS surge. This method works well for classifying spatially repetitive images such as the crevasses over Bering Glacier. We expand the network for less repetitive images in order to analyze imagery collected over the Arctic sea ice, to assess the percentage of deformed ice for model calibration.
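
    The parameterization step, an experimental (discrete) variogram, is simple to state: gamma(h) is half the mean squared difference between values a lag h apart. The sketch below uses a 1-D synthetic transect in place of image data; the directional 2-D case applies the same estimator along chosen directions.

```python
import numpy as np

# Experimental variogram at integer lag h for a 1-D transect z.
def exp_variogram(z, h):
    d = z[h:] - z[:-h]
    return 0.5 * float(np.mean(d * d))

# Brownian-type synthetic profile: a correlated surface whose variogram
# grows with lag (illustrative stand-in for an image transect).
rng = np.random.default_rng(5)
z = np.cumsum(rng.normal(0.0, 1.0, 2000))

g1, g5, g20 = (exp_variogram(z, h) for h in (1, 5, 20))
```

    The vector of gamma values over a set of lags and directions is exactly the kind of fixed-length feature that can be fed to a neural network classifier.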

  1. Optical Probes for Neurobiological Sensing and Imaging.

    PubMed

    Kim, Eric H; Chin, Gregory; Rong, Guoxin; Poskanzer, Kira E; Clark, Heather A

    2018-05-15

    Fluorescent nanosensors and molecular probes are next-generation tools for imaging chemical signaling inside and between cells. Electrophysiology has long been considered the gold standard in elucidating neural dynamics with high temporal resolution and precision, particularly on the single-cell level. However, electrode-based techniques face challenges in illuminating the specific chemicals involved in neural cell activation with adequate spatial information. Measuring chemical dynamics is of fundamental importance to better understand synergistic interactions between neurons as well as interactions between neurons and non-neuronal cells. Over the past decade, significant technological advances in optical probes and imaging methods have enabled entirely new possibilities for studying neural cells and circuits at the chemical level. These optical imaging modalities have shown promise for combining chemical, temporal, and spatial information. This potential makes them ideal candidates to unravel the complex neural interactions at multiple scales in the brain, which could be complemented by traditional electrophysiological methods to obtain a full spatiotemporal picture of neurochemical dynamics. Despite the potential, only a handful of probe candidates have been utilized to provide detailed chemical information in the brain. To date, most live imaging and chemical mapping studies rely on fluorescent molecular indicators to report intracellular calcium (Ca2+) dynamics, which correlate with neuronal activity. Methodological advances for monitoring a full array of chemicals in the brain with improved spatial, temporal, and chemical resolution will thus enable mapping of neurochemical circuits with finer precision. On the basis of numerous studies in this exciting field, we review the current efforts to develop and apply a palette of optical probes and nanosensors for chemical sensing in the brain. 
There is a strong impetus to further develop technologies capable of probing entire neurobiological units with high spatiotemporal resolution. Thus, we introduce selected applications for ion and neurotransmitter detection to investigate both neurons and non-neuronal brain cells. We focus on families of optical probes because of their ability to sense a wide array of molecules and convey spatial information with minimal damage to tissue. We start with a discussion of currently available molecular probes, highlight recent advances in genetically modified fluorescent probes for ions and small molecules, and end with the latest research in nanosensors for biological imaging. Customizable, nanoscale optical sensors that accurately and dynamically monitor the local environment with high spatiotemporal resolution could lead to not only new insights into the function of all cell types but also a broader understanding of how diverse neural signaling systems act in conjunction with neighboring cells in a spatially relevant manner.

  2. Dynamic control of magnetic nanowires by light-induced domain-wall kickoffs

    NASA Astrophysics Data System (ADS)

    Heintze, Eric; El Hallak, Fadi; Clauß, Conrad; Rettori, Angelo; Pini, Maria Gloria; Totti, Federico; Dressel, Martin; Bogani, Lapo

    2013-03-01

    Controlling the speed at which systems evolve is a challenge shared by all disciplines, and otherwise unrelated areas use common theoretical frameworks towards this goal. A particularly widespread model is Glauber dynamics, which describes the time evolution of the Ising model and can be applied to any binary system. Here we show, using molecular nanowires under irradiation, that Glauber dynamics can be controlled by a novel domain-wall kickoff mechanism. In contrast to known processes, the kickoff has unambiguous fingerprints, slowing down the spin-flip attempt rate by several orders of magnitude, and following a scaling law. The required irradiance is very low, a substantial improvement over present methods of magneto-optical switching. These results provide a new way to control and study stochastic dynamic processes. Being general for Glauber dynamics, they can be extended to different kinds of magnetic nanowires and to numerous fields, ranging from social evolution to neural networks and chemical reactivity.
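Glauber dynamics, the model the abstract builds on, has a compact standard form: pick a random spin and flip it with the heat-bath probability 1/(1 + exp(ΔE/T)). A minimal sketch for a 1-D Ising chain (the light-induced kickoff itself is not modeled; all parameters are illustrative):

```python
import math
import random

def glauber_step(spins, J, T, rng):
    """One Glauber update: pick a random site, flip it with heat-bath probability."""
    n = len(spins)
    i = rng.randrange(n)
    # Local field from nearest neighbours on an open 1-D chain.
    h = J * (spins[i - 1] if i > 0 else 0) + J * (spins[i + 1] if i < n - 1 else 0)
    dE = 2 * spins[i] * h                      # energy cost of flipping spin i
    p_flip = 1.0 / (1.0 + math.exp(dE / T))    # heat-bath acceptance probability
    if rng.random() < p_flip:
        spins[i] = -spins[i]
    return spins

rng = random.Random(0)
spins = [rng.choice((-1, 1)) for _ in range(50)]
for _ in range(5000):
    glauber_step(spins, J=1.0, T=0.5, rng=rng)
magnetization = sum(spins) / len(spins)
```

Because any binary system can be cast in this form, the same update rule transfers to the other domains the authors mention, with J and T reinterpreted as coupling and noise parameters.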

  3. Evolvable rough-block-based neural network and its biomedical application to hypoglycemia detection system.

    PubMed

    San, Phyo Phyo; Ling, Sai Ho; Nuryani; Nguyen, Hung

    2014-08-01

    This paper focuses on the hybridization technology using rough sets concepts and neural computing for decision and classification purposes. Based on the rough set properties, the lower region and boundary region are defined to partition the input signal to a consistent (predictable) part and an inconsistent (random) part. In this way, the neural network is designed to deal only with the boundary region, which mainly consists of an inconsistent part of applied input signal causing inaccurate modeling of the data set. Owing to different characteristics of neural network (NN) applications, the same structure of conventional NN might not give the optimal solution. Based on the knowledge of application in this paper, a block-based neural network (BBNN) is selected as a suitable classifier due to its ability to evolve internal structures and adaptability in dynamic environments. This architecture will systematically incorporate the characteristics of application to the structure of hybrid rough-block-based neural network (R-BBNN). A global training algorithm, hybrid particle swarm optimization with wavelet mutation is introduced for parameter optimization of proposed R-BBNN. The performance of the proposed R-BBNN algorithm was evaluated by an application to the field of medical diagnosis using real hypoglycemia episodes in patients with Type 1 diabetes mellitus. The performance of the proposed hybrid system has been compared with some of the existing neural networks. The comparison results indicated that the proposed method has improved classification performance and results in early convergence of the network.

  4. Machine Vision Within The Framework Of Collective Neural Assemblies

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1990-03-01

The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions is involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model for a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity and edge enhancement will be discussed for a one-dimensional tissue model.
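The abstract describes each population as a nonlinear second-order system. As a rough illustration (not the authors' model), a single such unit can be written as a damped second-order system driven through a sigmoidal nonlinearity; every name and parameter value below is an assumption for the sketch:

```python
import math

def simulate_population(steps=2000, dt=0.001, omega=50.0, zeta=0.7, gain=1.0, drive=1.0):
    """Euler integration of one population unit modelled as
        x'' + 2*zeta*omega*x' + omega^2 * x = omega^2 * S(gain * drive)
    where S is a sigmoidal activation. Illustrative parameters only."""
    S = lambda u: 1.0 / (1.0 + math.exp(-u))   # sigmoidal activation
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = omega ** 2 * (S(gain * drive) - x) - 2 * zeta * omega * v
        x, v = x + dt * v, v + dt * a
    return x

x_ss = simulate_population()   # settles at the sigmoid of the net drive
```

Coupling several such units through their drives, with programmable interconnective gains, is the structure the paper's tissue-sheet model generalizes to two dimensions.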

  5. Short-term synaptic plasticity and heterogeneity in neural systems

    NASA Astrophysics Data System (ADS)

    Mejias, J. F.; Kappen, H. J.; Longtin, A.; Torres, J. J.

    2013-01-01

    We review some recent results on neural dynamics and information processing which arise when considering several biophysical factors of interest, in particular, short-term synaptic plasticity and neural heterogeneity. The inclusion of short-term synaptic plasticity leads to enhanced long-term memory capacities, a higher robustness of memory to noise, and irregularity in the duration of the so-called up cortical states. On the other hand, considering some level of neural heterogeneity in neuron models allows neural systems to optimize information transmission in rate coding and temporal coding, two strategies commonly used by neurons to codify information in many brain areas. In all these studies, analytical approximations can be made to explain the underlying dynamics of these neural systems.

  6. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    PubMed

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
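The largest Lyapunov exponent mentioned above can be estimated numerically by propagating a tangent vector through the Jacobian of the network map and averaging the log growth rate. A minimal sketch for a discrete-time network x(t+1) = tanh(W x(t)), with a hypothetical weakly coupled random weight matrix (not the paper's networks):

```python
import math
import random

def largest_lyapunov(W, x0, steps=500):
    """Estimate the largest Lyapunov exponent of x_{t+1} = tanh(W x_t)
    by iterating a tangent vector through the Jacobian with renormalization."""
    n = len(W)
    x = x0[:]
    v = [1.0 / math.sqrt(n)] * n               # unit tangent vector
    acc = 0.0
    for _ in range(steps):
        pre = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [math.tanh(p) for p in pre]
        # Jacobian of the map at x_t: diag(1 - tanh(pre)^2) @ W
        Jv = [(1 - math.tanh(pre[i]) ** 2) * sum(W[i][j] * v[j] for j in range(n))
              for i in range(n)]
        norm = math.sqrt(sum(c * c for c in Jv)) or 1e-300
        acc += math.log(norm)
        v = [c / norm for c in Jv]
    return acc / steps

rng = random.Random(1)
n, g = 20, 0.5                                 # weak coupling: contracting dynamics
W = [[rng.gauss(0, g / math.sqrt(n)) for _ in range(n)] for _ in range(n)]
lam = largest_lyapunov(W, [rng.uniform(-1, 1) for _ in range(n)])
```

With coupling gain g below 1 the exponent comes out negative (a steady state); as Hebbian learning reshapes W, tracking this quantity reveals the chaos-to-fixed-point transition the abstract describes.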

  7. LANL*V2.0: global modeling and validation

    NASA Astrophysics Data System (ADS)

    Koller, J.; Zaharia, S.

    2011-08-01

We describe in this paper the new version of LANL*, an artificial neural network (ANN) for calculating the magnetic drift invariant L*. This quantity is used for modeling radiation belt dynamics and for space weather applications. We have implemented the following enhancements in the new version: (1) We have removed the limitation to geosynchronous orbit, and the model can now be used for a much larger region. (2) The new version is based on the improved magnetic field model by Tsyganenko and Sitnov (2005) (TS05) instead of the older model by Tsyganenko et al. (2003). We have validated the model and compared our results to L* calculations with the TS05 model based on ephemerides for CRRES, Polar, GPS, a LANL geosynchronous satellite, and a virtual RBSP type orbit. We find that the neural network performs very well for all these orbits, with an error of typically ΔL* < 0.2, which corresponds to an error of 3 % at geosynchronous orbit. This new LANL* V2.0 artificial neural network is orders of magnitude faster than traditional numerical field line integration techniques with the TS05 model. It has applications to real-time radiation belt forecasting, analysis of data sets involving decades of satellite observations, and other problems in space weather.

  8. Network complexity as a measure of information processing across resting-state networks: evidence from the Human Connectome Project

    PubMed Central

    McDonough, Ian M.; Nashiro, Kaoru

    2014-01-01

An emerging field of research focused on fluctuations in brain signals has provided evidence that the complexity of those signals, as measured by entropy, conveys important information about network dynamics (e.g., local and distributed processing). While much research has focused on how neural complexity differs in populations with different age groups or clinical disorders, substantially less research has focused on the basic understanding of neural complexity in populations with young and healthy brain states. The present study used resting-state fMRI data from the Human Connectome Project (Van Essen et al., 2013) to test the extent that neural complexity in the BOLD signal, as measured by multiscale entropy, (1) would differ from random noise, (2) would differ between four major resting-state networks previously associated with higher-order cognition, and (3) would be associated with the strength and extent of functional connectivity—a complementary method of estimating information processing. We found that the BOLD signal exhibited patterns of complexity different from those of white, pink, and red noise and that neural complexity was differentially expressed between resting-state networks, including the default mode, cingulo-opercular, left and right frontoparietal networks. Lastly, neural complexity across all networks was negatively associated with functional connectivity at fine scales, but was positively associated with functional connectivity at coarse scales. The present study is the first to characterize neural complexity in BOLD signals at a high temporal resolution and across different networks and might help clarify the inconsistencies between neural complexity and functional connectivity, thus informing the mechanisms underlying neural complexity. PMID:24959130
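Multiscale entropy, the measure used above, is computed by coarse-graining a series at successive scales and evaluating sample entropy at each scale. A simplified sketch (a slight variant of the published algorithm: template counts at lengths m and m+1 are not equalized; all parameters illustrative):

```python
import math
import random

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn: negative log of the conditional probability that sequences
    matching for m points (Chebyshev distance <= r) also match for m+1."""
    n = len(x)
    sd = (sum((v - sum(x) / n) ** 2 for v in x) / n) ** 0.5
    r = r_frac * sd
    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float('inf')

def multiscale_entropy(x, scales=(1, 2, 3, 4)):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale."""
    out = []
    for s in scales:
        cg = [sum(x[i:i + s]) / s for i in range(0, len(x) - s + 1, s)]
        out.append(sample_entropy(cg))
    return out

rng = random.Random(0)
white = [rng.gauss(0, 1) for _ in range(400)]
mse = multiscale_entropy(white)
```

For white noise the entropy falls with scale, while correlated (pink-like) signals hold their entropy across scales; that scale profile is what distinguishes BOLD complexity from the noise baselines in the study.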

  9. Graded, Dynamically Routable Information Processing with Synfire-Gated Synfire Chains.

    PubMed

    Wang, Zhuo; Sornborger, Andrew T; Tao, Louis

    2016-06-01

Coherent neural spiking and local field potentials are believed to be signatures of the binding and transfer of information in the brain. Coherent activity has now been measured experimentally in many regions of mammalian cortex. Recently, experimental evidence has been presented suggesting that neural information is encoded and transferred in packets, i.e., in stereotypical, correlated spiking patterns of neural activity. Due to their relevance to coherent spiking, synfire chains are one of the main theoretical constructs invoked to describe coherent spiking and information transfer phenomena. However, for some time, it has been known that synchronous activity in feedforward networks asymptotically either approaches an attractor with fixed waveform and amplitude, or fails to propagate. This has limited the classical synfire chain's ability to explain graded neuronal responses. Recently, we have shown that pulse-gated synfire chains are capable of propagating graded information coded in mean population current or firing rate amplitudes. In particular, we showed that it is possible to use one synfire chain to provide gating pulses and a second, pulse-gated synfire chain to propagate graded information. We called these circuits synfire-gated synfire chains (SGSCs). Here, we present SGSCs in which graded information can rapidly cascade through a neural circuit, and show a correspondence between this type of transfer and a mean-field model in which gating pulses overlap in time. We show that SGSCs are robust in the presence of variability in population size, pulse timing and synaptic strength. Finally, we demonstrate the computational capabilities of SGSC-based information coding by implementing a self-contained, spike-based, modular neural circuit that is triggered by streaming input, processes the input, then makes a decision based on the processed information and shuts itself down.
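The pulse-gating idea has a simple firing-rate caricature: each layer passes its graded amplitude onward only while a gating pulse lifts it above threshold, and with balanced weights the amplitude is copied layer to layer. This sketch is our toy reduction, not the paper's spiking model; all parameters are illustrative:

```python
def propagate_sgsc(amplitude, layers=10, gate_on=1.0, w=1.0, threshold=0.5):
    """Firing-rate caricature of a pulse-gated chain: the gating pulse gate_on
    pushes the layer over threshold; subtracting it back out leaves the graded
    amplitude, which is therefore transmitted unchanged."""
    history = [amplitude]
    a = amplitude
    for _ in range(layers):
        drive = w * a + gate_on                 # feedforward input plus gating pulse
        a = max(0.0, drive - gate_on) if drive > threshold else 0.0
        history.append(a)
    return history

h = propagate_sgsc(0.3)          # gated: 0.3 survives all 10 layers
blocked = propagate_sgsc(0.3, gate_on=0.0)   # no gate: activity dies out
```

Routing then amounts to choosing which chain receives gating pulses, which is how the gating chain controls where graded information flows.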

  10. Low-Dimensional Models of "Neuro-Glio-Vascular Unit" for Describing Neural Dynamics under Normal and Energy-Starved Conditions.

    PubMed

    Chhabria, Karishma; Chakravarthy, V Srinivasa

    2016-01-01

The motivation of developing simple minimal models for the neuro-glio-vascular (NGV) system arises from a recent modeling study elucidating the bidirectional information flow within the NGV system having 89 dynamic equations (1). While this was one of the first attempts at formulating a comprehensive model for the neuro-glio-vascular system, it poses severe restrictions in scaling up to network levels. On the contrary, low-dimensional models are convenient devices in simulating large networks that also provide an intuitive understanding of the complex interactions occurring within the NGV system. The key idea underlying the proposed models is to describe the glio-vascular system as a lumped system, which takes neural firing rate as input and returns an "energy" variable (analogous to ATP) as output. To this end, we present two models: biophysical neuro-energy (Model 1 with five variables), comprising KATP channel activity governed by neuronal ATP dynamics, and the dynamic threshold (Model 2 with three variables), depicting the dependence of neural firing threshold on the ATP dynamics. Both the models show different firing regimes, such as continuous spiking, phasic, and tonic bursting depending on the ATP production coefficient, ɛp, and external current. We then demonstrate that in a network comprising such energy-dependent neuron units, ɛp could modulate the local field potential (LFP) frequency and amplitude. Interestingly, low-frequency LFP dominates under low ɛp conditions, which is thought to be reminiscent of seizure-like activity observed in epilepsy. The proposed "neuron-energy" unit may be implemented in building models of NGV networks to simulate data obtained from multimodal neuroimaging systems, such as functional near infrared spectroscopy coupled to electroencephalogram and functional magnetic resonance imaging coupled to electroencephalogram.
Such models could also provide a theoretical basis for devising optimal neurorehabilitation strategies, such as non-invasive brain stimulation for stroke patients.
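The "dynamic threshold" idea (Model 2) can be caricatured with a leaky membrane, an ATP-like energy variable replenished at a production rate analogous to ɛp and spent per spike, and a firing threshold that rises as energy falls. This is our illustrative reduction, not the paper's equations; every parameter is an assumption:

```python
def simulate_energy_neuron(I_ext=1.5, eps_p=0.05, steps=5000, dt=0.1):
    """Toy dynamic-threshold unit: membrane v integrates input, energy e
    recovers toward 1 at rate eps_p and drops by a fixed cost per spike,
    and the threshold theta grows as energy is depleted."""
    v, e = 0.0, 1.0
    spikes = 0
    for _ in range(steps):
        theta = 1.0 + (1.0 - e)          # low energy -> higher threshold
        v += dt * (-v + I_ext)           # leaky integration of external drive
        e += dt * eps_p * (1.0 - e)      # ATP production toward resting level
        if v >= theta:
            spikes += 1
            v = 0.0                      # reset after spike
            e = max(0.0, e - 0.1)        # metabolic cost of a spike
    return spikes

low = simulate_energy_neuron(eps_p=0.01)    # energy-starved: sparse firing
high = simulate_energy_neuron(eps_p=0.5)    # ample energy: sustained firing
```

Even this toy unit reproduces the qualitative dependence on the production coefficient: low production throttles firing into slow, burst-like episodes, while high production sustains regular spiking.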

  11. Space time neural networks for tether operations in space

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles

    1993-01-01

A space shuttle flight scheduled for 1992 will attempt to prove the feasibility of operating tethered payloads in Earth orbit. Due to the interaction between the Earth's magnetic field and current pulsing through the tether, the tethered system may exhibit a circular transverse oscillation referred to as the 'skiprope' phenomenon. Effective damping of skiprope motion depends on rapid and accurate detection of skiprope magnitude and phase. Because of non-linear dynamic coupling, the satellite attitude behavior has characteristic oscillations during the skiprope motion. Since the satellite attitude motion has many other perturbations, the relationship between the skiprope parameters and attitude time history is very involved and non-linear. We propose a Space-Time Neural Network implementation for filtering satellite rate gyro data to rapidly detect and predict skiprope magnitude and phase. Training and testing of the skiprope detection system will be performed using a validated Orbital Operations Simulator and Space-Time Neural Network software developed in the Software Technology Branch at NASA's Lyndon B. Johnson Space Center.

  12. Evaluation and prediction of solar radiation for energy management based on neural networks

    NASA Astrophysics Data System (ADS)

    Aldoshina, O. V.; Van Tai, Dinh

    2017-08-01

Currently, renewable energy sources and distributed generation based on intelligent networks are spreading rapidly; meteorological forecasts are therefore particularly useful for planning and managing the energy system in order to increase its overall efficiency and productivity. This article presents the application of artificial neural networks (ANN) in the field of photovoltaic energy. Two recurrent dynamic ANNs, a time-delay neural network (CTDNN) and a nonlinear autoregressive network with exogenous inputs (NAEI), are used to develop a model for estimating and forecasting daily solar radiation. The ANNs show good performance, yielding reliable and accurate models of daily solar radiation, which makes it possible to predict the photovoltaic output power of the installation. The potential of the proposed method for energy management of the electrical network is demonstrated by applying the NAEI network to electric load prediction.
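An autoregressive model with exogenous inputs predicts the next output from lagged outputs and lagged external drivers. As a stand-in for the paper's neural NARX-style network, here is a linear version fitted by per-sample gradient descent on a synthetic irradiance-like series (all data and parameters are fabricated for illustration):

```python
import math

def fit_narx_linear(y, u, ny=2, nu=2, lr=0.05, epochs=400):
    """Gradient-descent least-squares fit of
        y[t] ~ sum_i a_i*y[t-i] + sum_j b_j*u[t-j]
    a linear stand-in for a neural NARX model."""
    lag = max(ny, nu)
    X = [[y[t - i] for i in range(1, ny + 1)] +
         [u[t - j] for j in range(1, nu + 1)] for t in range(lag, len(y))]
    T = [y[t] for t in range(lag, len(y))]
    w = [0.0] * (ny + nu)
    for _ in range(epochs):
        for x, target in zip(X, T):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - target
            for k in range(len(w)):
                w[k] -= lr * err * x[k]
    return w

# Synthetic daily-irradiance-like exogenous input u driving the output y.
u = [math.sin(2 * math.pi * t / 24) + 1.0 for t in range(200)]
y = [0.0, 0.0]
for t in range(2, 200):
    y.append(0.5 * y[t - 1] + 0.4 * u[t - 1])

w = fit_narx_linear(y, u)
pred = [sum(wi * xi for wi, xi in zip(w, [y[t - 1], y[t - 2], u[t - 1], u[t - 2]]))
        for t in range(2, 200)]
mse = sum((p - yt) ** 2 for p, yt in zip(pred, y[2:])) / len(pred)
```

Replacing the linear map with a small feedforward network, and feeding predictions back as future lagged outputs, gives the multi-step forecasting configuration the article describes.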

  13. Density-based clustering: A 'landscape view' of multi-channel neural data for inference and dynamic complexity analysis.

    PubMed

    Baglietto, Gabriel; Gigante, Guido; Del Giudice, Paolo

    2017-01-01

Two, partially interwoven, hot topics in the analysis and statistical modeling of neural data, are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics, that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated with memories embedded in the synaptic matrix. In this context, we show that the neural states identified as clusters' centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape.
Finally, we show that the knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such reduction, we define and analyze a measure of complexity of the neural time series.
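Mean-shift, the clustering workhorse above, moves each point to the mean of its neighbours within a bandwidth until points settle on density modes. A minimal flat-kernel sketch on synthetic 2-D "states" (the paper uses a variant of the algorithm on high-dimensional neural states; this toy is only the basic mechanism):

```python
import math
import random

def mean_shift(points, bandwidth=1.0, iters=50):
    """Flat-kernel mean-shift: each point drifts to the mean of its neighbours
    within `bandwidth`; converged points nearer than bandwidth/2 are merged."""
    modes = [list(p) for p in points]
    for _ in range(iters):
        new = []
        for m in modes:
            neigh = [p for p in points if math.dist(m, p) <= bandwidth]
            new.append([sum(c[d] for c in neigh) / len(neigh)
                        for d in range(len(m))])
        modes = new
    centroids, labels = [], []
    for m in modes:
        for k, c in enumerate(centroids):
            if math.dist(m, c) < bandwidth / 2:
                labels.append(k)
                break
        else:
            centroids.append(m)
            labels.append(len(centroids) - 1)
    return centroids, labels

rng = random.Random(0)
pts = ([[rng.gauss(0, 0.1), rng.gauss(0, 0.1)] for _ in range(30)] +
       [[rng.gauss(3, 0.1), rng.gauss(3, 0.1)] for _ in range(30)])
centroids, labels = mean_shift(pts)
```

Relabelling each recorded state by its nearest centroid yields exactly the symbolic dynamics of cluster transitions on which the complexity measure is defined.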

  14. Aeroelasticity of morphing wings using neural networks

    NASA Astrophysics Data System (ADS)

    Natarajan, Anand

    In this dissertation, neural networks are designed to effectively model static non-linear aeroelastic problems in adaptive structures and linear dynamic aeroelastic systems with time varying stiffness. The use of adaptive materials in aircraft wings allows for the change of the contour or the configuration of a wing (morphing) in flight. The use of smart materials, to accomplish these deformations, can imply that the stiffness of the wing with a morphing contour changes as the contour changes. For a rapidly oscillating body in a fluid field, continuously adapting structural parameters may render the wing to behave as a time variant system. Even the internal spars/ribs of the aircraft wing which define the wing stiffness can be made adaptive, that is, their stiffness can be made to vary with time. The immediate effect on the structural dynamics of the wing, is that, the wing motion is governed by a differential equation with time varying coefficients. The study of this concept of a time varying torsional stiffness, made possible by the use of active materials and adaptive spars, in the dynamic aeroelastic behavior of an adaptable airfoil is performed here. Another type of aeroelastic problem of an adaptive structure that is investigated here, is the shape control of an adaptive bump situated on the leading edge of an airfoil. Such a bump is useful in achieving flow separation control for lateral directional maneuverability of the aircraft. Since actuators are being used to create this bump on the wing surface, the energy required to do so needs to be minimized. The adverse pressure drag as a result of this bump needs to be controlled so that the loss in lift over the wing is made minimal. The design of such a "spoiler bump" on the surface of the airfoil is an optimization problem of maximizing pressure drag due to flow separation while minimizing the loss in lift and energy required to deform the bump. 
One neural network is trained using the CFD code FLUENT to represent the aerodynamic loading over the bump. A second neural network is trained for calculating the actuator loads, bump displacement and lift and drag forces over the airfoil using the finite element solver ANSYS and the previously trained neural network. This non-linear aeroelastic model of the deforming bump on an airfoil surface using neural networks can serve as a forerunner for other non-linear aeroelastic problems.

  15. Geometric properties-dependent neural synchrony modulated by extracellular subthreshold electric field

    NASA Astrophysics Data System (ADS)

    Wei, Xile; Si, Kaili; Yi, Guosheng; Wang, Jiang; Lu, Meili

    2016-07-01

In this paper, we use a reduced two-compartment neuron model to investigate the interaction between an extracellular subthreshold electric field and synchrony in small-world networks. We observe that network synchronization is closely related to the strength of the electric field and to the geometric properties of the two-compartment model. Specifically, increasing the electric field induces a gradual improvement in network synchrony, while increasing the geometric factor results in an abrupt decrease in network synchronization. In addition, for a fixed value of the geometric parameter, increasing the electric field can drive the network from asynchrony to synchrony. Furthermore, it is demonstrated that network synchrony is also affected by the firing frequency and the dynamical bifurcation features of the single neuron. These results highlight the effect of a weak field on network synchrony from the viewpoint of a biophysical model, which may contribute to further understanding of the effect of electric fields on network activity.

  16. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

    NASA Astrophysics Data System (ADS)

    Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

    2018-03-01

Neural network methods have recently been applied in the development of information systems and software for predicting dynamics series. They are more flexible than existing analogues and can capture the nonlinearities of a series. In this paper, we propose a modified algorithm for predicting dynamics series, which includes a method for training neural networks and an approach to describing and presenting input data, based on prediction by the multilayer perceptron method. To construct a neural network, the values of a dynamics series at its extremum points, together with the corresponding time values, formed by the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamics series, or serve as one component of a forecasting system. The efficiency of predicting the evolution of a dynamics series for a short-term one-step and a long-term multi-step forecast is compared between the classical multilayer perceptron method and the modified algorithm, using synthetic and real data. The modification minimizes the iterative error that accumulates when previously predicted values are fed back as inputs to the neural network, and increases the accuracy of the network's iterative forecasts.
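The input encoding described above (extremum values plus their times, gathered by a sliding window) is easy to make concrete. The sketch below covers only the data-preparation stage; the multilayer perceptron that consumes these samples is omitted, and the test series is synthetic:

```python
import math

def extrema(series):
    """Indices and values of local extrema (sign change of the discrete slope)."""
    pts = []
    for t in range(1, len(series) - 1):
        if (series[t] - series[t - 1]) * (series[t + 1] - series[t]) < 0:
            pts.append((t, series[t]))
    return pts

def sliding_windows(points, width=4):
    """Each sample: `width` consecutive (time, value) extrema flattened as the
    input vector, with the next extremum as the prediction target."""
    samples = []
    for i in range(len(points) - width):
        x = [c for p in points[i:i + width] for c in p]   # flatten (t, v) pairs
        samples.append((x, points[i + width]))
    return samples

series = [math.sin(0.3 * t) + 0.1 * math.sin(1.7 * t) for t in range(120)]
pts = extrema(series)
data = sliding_windows(pts)    # (input vector, target extremum) pairs
```

Because targets are themselves extrema, multi-step forecasts chain predicted extrema back into the window, which is where the iterative error discussed above arises.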

  17. Neural dynamics underlying emotional transmissions between individuals

    PubMed Central

    Levit-Binnun, Nava; Hendler, Talma; Lerner, Yulia

    2017-01-01

    Abstract Emotional experiences are frequently shaped by the emotional responses of co-present others. Research has shown that people constantly monitor and adapt to the incoming social–emotional signals, even without face-to-face interaction. And yet, the neural processes underlying such emotional transmissions have not been directly studied. Here, we investigated how the human brain processes emotional cues which arrive from another, co-attending individual. We presented continuous emotional feedback to participants who viewed a movie in the scanner. Participants in the social group (but not in the control group) believed that the feedback was coming from another person who was co-viewing the same movie. We found that social–emotional feedback significantly affected the neural dynamics both in the core affect and in the medial pre-frontal regions. Specifically, the response time-courses in those regions exhibited increased similarity across recipients and increased neural alignment with the timeline of the feedback in the social compared with control group. Taken in conjunction with previous research, this study suggests that emotional cues from others shape the neural dynamics across the whole neural continuum of emotional processing in the brain. Moreover, it demonstrates that interpersonal neural alignment can serve as a neural mechanism through which affective information is conveyed between individuals. PMID:28575520

  18. A Combined Molecular Dynamics and Experimental Study of Doped Polypyrrole.

    PubMed

    Fonner, John M; Schmidt, Christine E; Ren, Pengyu

    2010-10-01

    Polypyrrole (PPy) is a biocompatible, electrically conductive polymer that has great potential for battery, sensor, and neural implant applications. Its amorphous structure and insolubility, however, limit the experimental techniques available to study its structure and properties at the atomic level. Previous theoretical studies of PPy in bulk are also scarce. Using ab initio calculations, we have constructed a molecular mechanics force field of chloride-doped PPy (PPyCl) and undoped PPy. This model has been designed to integrate into the OPLS force field, and parameters are available for the Gromacs and TINKER software packages. Molecular dynamics (MD) simulations of bulk PPy and PPyCl have been performed using this force field, and the effects of chain packing and electrostatic scaling on the bulk polymer density have been investigated. The density of flotation of PPyCl films has been measured experimentally. Amorphous X-ray diffraction of PPyCl was obtained and correlated with atomic structures sampled from MD simulations. The force field reported here is foundational for bridging the gap between experimental measurements and theoretical calculations for PPy based materials.

  19. Effect of chitosan conduit under a dynamic culture on the proliferation and neural differentiation of human exfoliated deciduous teeth stem cells.

    PubMed

    Su, Wen-Ta; Shih, Yi-An; Ko, Chih-Sheng

    2016-06-01

Ex vivo engineering of artificial nerve conduit is a suitable alternative clinical treatment for nerve injuries. Stem cells from human exfoliated deciduous teeth (SHEDs) have been considered as alternative sources of adult stem cells because of their potential to differentiate into multiple cell lineages. These cells, when cultured in six-well plates, exhibited a spindle fibroblastic morphology, whereas those under a dynamic culture aggregated into neurosphere-like clusters in the chitosan conduit. In this study, we confirmed that SHEDs efficiently express the neural stem cell marker nestin, the early neural cell marker β-III-tubulin, the late neural marker neuron-specific enolase and the glial cell markers glial fibrillary acidic protein (GFAP) and 2',3'-cyclic nucleotide-3'-phosphodiesterase (CNPase). The three-dimensional chitosan conduit and dynamic culture system generated fluid shear stress and enhanced nutrient transfer, promoting the differentiation of SHEDs to neural cells. In particular, the gene expressions of GFAP and CNPase increased by 28- and 53-fold, respectively. This study provides evidence for the dynamic culture of SHEDs during ex vivo neural differentiation and demonstrates its potential for cell therapy in neurological diseases. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Dynamical information encoding in neural adaptation.

    PubMed

    Luozheng Li; Wenhao Zhang; Yuanyuan Mi; Dahui Wang; Xiaohan Lin; Si Wu

    2016-08-01

    Adaptation refers to the general phenomenon that a neural system dynamically adjusts its response property according to the statistics of external inputs. In response to a prolonged constant stimulation, neuronal firing rates always first increase dramatically at the onset of the stimulation; and afterwards, they decrease rapidly to a low level close to background activity. This attenuation of neural activity seems to be contradictory to our experience that we can still sense the stimulus after the neural system is adapted. Thus, it prompts a question: where is the stimulus information encoded during the adaptation? Here, we investigate a computational model in which the neural system employs a dynamical encoding strategy during the neural adaptation: at the early stage of the adaptation, the stimulus information is mainly encoded in the strong independent firings; and as time goes on, the information is shifted into the weak but concerted responses of neurons. We find that short-term plasticity, a general feature of synapses, provides a natural mechanism to achieve this goal. Furthermore, we demonstrate that with balanced excitatory and inhibitory inputs, this correlation-based information can be read out efficiently. The implications of this study on our understanding of neural information encoding are discussed.
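Short-term plasticity, the mechanism the authors invoke, is commonly described by the Tsodyks-Markram synapse model: available resources x deplete with use (depression) while release probability u transiently grows (facilitation). A rate-based sketch showing the strong-onset, weak-adapted profile described above (standard model form, but parameter values are illustrative, not the paper's):

```python
def tsodyks_markram(rate, T=2.0, dt=0.001, U=0.2, tau_d=0.5, tau_f=1.0):
    """Rate-based Tsodyks-Markram synapse under constant presynaptic rate:
    x = available resources (depression), u = release probability
    (facilitation); the effective drive is u * x * rate."""
    x, u = 1.0, U
    drive = []
    for _ in range(int(T / dt)):
        dx = (1 - x) / tau_d - u * x * rate          # depletion vs recovery
        du = (U - u) / tau_f + U * (1 - u) * rate    # facilitation vs decay
        x += dt * dx
        u += dt * du
        drive.append(u * x * rate)
    return drive

d = tsodyks_markram(rate=30.0)   # strong onset, then adapted low-level drive
```

The early transient carries the stimulus in strong independent drive, while the adapted tail leaves it encoded in weaker, correlation-dependent transmission, matching the dynamical encoding strategy the abstract proposes.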

  1. Pattern Storage, Bifurcations, and Groupwise Correlation Structure of an Exactly Solvable Asymmetric Neural Network Model.

    PubMed

    Fasoli, Diego; Cattani, Anna; Panzeri, Stefano

    2018-05-01

    Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks with arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete time evolution equations and binary firing rates, and it could be driven by noise with any distribution. We found analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. By manipulating the conditional probability distribution of the firing rates, we extend to stochastic networks the associative learning rule previously introduced by Personnaz and coworkers. The new learning rule allowed the safe storage, in the presence of noise, of point and cyclic attractors, with useful implications for content-addressable memories. Furthermore, we studied the bifurcation structure of the network dynamics in the zero-noise limit. We analytically derived examples of the codimension-1 and codimension-2 bifurcation diagrams of the network, which describe how the neuronal dynamics change with the external stimuli. This showed that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry breaking. We also calculated analytically the groupwise correlations of neural activity in the network in the stationary regime. This revealed neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks with any number of neurons, although our equations can be realistically solved only for small networks. For completeness, we also derived the network equations in the thermodynamic limit of infinite network size and analytically studied their local bifurcations. All the analytical results were extensively validated by numerical simulations.
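
The model class solved here (discrete time, binary firing rates, asymmetric weights) is easy to state concretely. The sketch below, with invented weights and a zero-noise run, also shows why every deterministic trajectory must end on a point or cyclic attractor: the state space is finite.

```python
import numpy as np

# Sketch of the model class studied here: discrete-time dynamics with
# binary firing rates and (generally) asymmetric weights, shown in the
# zero-noise limit. Weights, threshold and size are arbitrary
# illustrative choices, not taken from the paper.

rng = np.random.default_rng(0)
N = 4
W = rng.normal(size=(N, N))            # asymmetric in general: W != W.T
theta = 0.0                            # firing threshold

def step(s):
    V = W @ s                          # membrane potentials
    return (V > theta).astype(float)   # binary firing rates

# Because the state space is finite (2**N states), every deterministic
# trajectory must revisit a state; the gap between visits is the
# attractor period (1 for a point attractor, >1 for a cycle).
s = np.ones(N)
seen = {}
t = 0
while tuple(s) not in seen:
    seen[tuple(s)] = t
    s = step(s)
    t += 1
period = t - seen[tuple(s)]
```
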

  2. Biophysical Model of Cortical Network Activity and the Influence of Electrical Stimulation

    DTIC Science & Technology

    2015-11-13

    Reported publications include an article on the dynamics of seizure propagation across micro-domains in the vicinity of the seizure onset zone (Journal of Neural Engineering, August 2015, 46016) and a presentation on the local field potential (LFP) in a large-scale simulation of cerebral cortex (Society for Neuroscience, 19 October 2015).

  3. Seeking Global Minima

    NASA Astrophysics Data System (ADS)

    Tajuddin, Wan Ahmad

    1994-02-01

    Ease in finding the configuration at the global energy minimum in a symmetric neural network is important for combinatorial optimization problems. We carry out a comprehensive survey of available strategies for seeking global minima by comparing their performances in the binary representation problem. We recall our previous comparison of steepest descent with analog dynamics, genetic hill-climbing, simulated diffusion, simulated annealing, threshold accepting and simulated tunneling. To this, we add comparisons to other strategies including taboo search and one with field-ordered updating.
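
For a symmetric network, the object being minimized is the quadratic energy E(s) = -1/2 s^T W s over binary states. The sketch below applies one of the surveyed strategies, simulated annealing with a Metropolis acceptance rule; the network size, weights, and cooling schedule are illustrative assumptions, not the paper's benchmark problem.

```python
import numpy as np

# Simulated annealing on the energy E(s) = -1/2 s^T W s of a small
# symmetric network with binary states s_i in {-1, +1}. Size, weights
# and cooling schedule are illustrative assumptions.

rng = np.random.default_rng(1)
N = 8
A = rng.normal(size=(N, N))
W = (A + A.T) / 2.0                    # symmetric weights
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

def anneal(T0=2.0, cooling=0.999, steps=5000):
    s = rng.choice([-1.0, 1.0], size=N)
    T = T0
    for _ in range(steps):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (W[i] @ s)   # energy change if s[i] is flipped
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]               # downhill moves always accepted,
        T *= cooling                   # uphill with Boltzmann probability
    return s, energy(s)

s_min, e_min = anneal()
```
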

  4. Morphogenesis of the mouse neural plate depends on distinct roles of cofilin 1 in apical and basal epithelial domains

    PubMed Central

    Grego-Bessa, Joaquim; Hildebrand, Jeffrey; Anderson, Kathryn V.

    2015-01-01

    The genetic control of mammalian epithelial polarity and dynamics can be studied in vivo at cellular resolution during morphogenesis of the mouse neural tube. The mouse neural plate is a simple epithelium that is transformed into a columnar pseudostratified tube over the course of ∼24 h. Apical F-actin is known to be important for neural tube closure, but the precise roles of actin dynamics in the neural epithelium are not known. To determine how the organization of the neural epithelium and neural tube closure are affected when actin dynamics are blocked, we examined the cellular basis of the neural tube closure defect in mouse mutants that lack the actin-severing protein cofilin 1 (CFL1). Although apical localization of the adherens junctions, the Par complex, the Crumbs complex and SHROOM3 is normal in the mutants, CFL1 has at least two distinct functions in the apical and basal domains of the neural plate. Apically, in the absence of CFL1 myosin light chain does not become phosphorylated, indicating that CFL1 is required for the activation of apical actomyosin required for neural tube closure. On the basal side of the neural plate, loss of CFL1 has the opposite effect on myosin: excess F-actin and myosin accumulate and the ectopic myosin light chain is phosphorylated. The basal accumulation of F-actin is associated with the assembly of ectopic basal tight junctions and focal disruptions of the basement membrane, which eventually lead to a breakdown of epithelial organization. PMID:25742799

  5. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people's actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on the dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatiotemporal Gabor filters is used to model the dynamic properties of the classical receptive fields of V1 simple cells tuned to different speeds and orientations, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field, caused by lateral connections of spiking neuron networks in V1, we propose a surround suppression operator to further process the spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on analysis of the spike trains generated by spiking neurons. The experimental evaluation on publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
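
The front end of such a model, a receptive field tuned jointly to orientation and drift speed, can be sketched as a three-dimensional (x, y, t) Gabor kernel. The construction below is a generic spatiotemporal Gabor; all parameter values are assumptions rather than those of the proposed model.

```python
import numpy as np

# Generic three-dimensional (x, y, t) spatiotemporal Gabor kernel of
# the kind used to model V1 simple-cell receptive fields tuned to an
# orientation and a drift speed. All parameter values are illustrative
# assumptions, not those of the proposed model.

def gabor3d(size=9, frames=5, sf=0.25, theta=0.0, speed=1.0,
            sigma=2.0, tau=1.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the grating
    kernel = np.empty((frames, size, size))
    for t in range(frames):
        phase = 2 * np.pi * sf * (xr - speed * t)           # drifting carrier
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # spatial Gaussian
        envelope = envelope * np.exp(-t / tau)              # temporal decay
        kernel[t] = envelope * np.cos(phase)
    return kernel

k = gabor3d()   # shape (frames, size, size); convolve with video to filter
```
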

  6. Neural Networks Technique for Filling Gaps in Satellite Measurements: Application to Ocean Color Observations.

    PubMed

    Krasnopolsky, Vladimir; Nadiga, Sudhir; Mehra, Avichal; Bayler, Eric; Behringer, David

    2016-01-01

    A neural network (NN) technique to fill gaps in satellite data is introduced, linking satellite-derived fields of interest with other satellite and in situ physical observations. Satellite-derived "ocean color" (OC) data are used in this study because OC variability is primarily driven by biological processes related and correlated in complex, nonlinear relationships with the physical processes of the upper ocean. Specifically, ocean color chlorophyll-a fields from NOAA's operational Visible Infrared Imaging Radiometer Suite (VIIRS) are used, together with NOAA and NASA ocean surface and upper-ocean observations that serve as signatures of upper-ocean dynamics. An NN transfer function is trained using global data for two years (2012 and 2013) and tested on independent data for 2014. To reduce the impact of noise in the data and to calculate a stable NN Jacobian for sensitivity studies, an ensemble of NNs with different weights is constructed and compared with a single NN. The impact of the NN training period on the NN's generalization ability is evaluated. The NN technique provides an accurate and computationally cheap method for filling gaps in satellite ocean color observation fields and time series.
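
The ensemble step can be illustrated in isolation: several regressors trained on bootstrap resamples are averaged, and the averaged prediction can never have worse mean-squared error than the members do on average (Jensen's inequality). In the sketch below, polynomial fits stand in for the paper's neural networks; all data are synthetic.

```python
import numpy as np

# The ensemble step in isolation: regressors trained on bootstrap
# resamples are averaged, which cannot increase mean-squared error
# relative to the members' average error (Jensen's inequality).
# Polynomial fits stand in for the paper's neural networks; the
# data are synthetic.

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=x.size)  # noisy signal

def fit_member(seed):
    r = np.random.default_rng(seed)
    idx = r.integers(0, x.size, x.size)      # bootstrap resample
    return np.polyfit(x[idx], y[idx], deg=7)

members = [fit_member(s) for s in range(20)]
preds = np.array([np.polyval(c, x) for c in members])
ensemble = preds.mean(axis=0)                # averaged (ensemble) prediction
```
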

  8. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
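
The flavor of such a network can be seen on a convex special case: projection-type recurrent dynamics for a box-constrained quadratic. This is only a sketch of the general idea; the paper's one-layer network handles genuinely pseudoconvex objectives with linear equality constraints, which this toy does not, and the target and bounds below are invented.

```python
import numpy as np

# Projection-type recurrent network (Euler-discretized) on a convex
# special case: minimize ||x - c||^2 subject to box bounds. Only an
# illustration of the dynamics; the paper's network covers general
# pseudoconvex objectives with equality constraints as well.

c = np.array([2.0, -3.0, 0.5])
lo, hi = -1.0, 1.0

def grad(x):
    return 2.0 * (x - c)

def project(x):
    return np.clip(x, lo, hi)          # projection onto the feasible box

def settle(x0, alpha=0.1, dt=0.1, steps=500):
    x = x0.copy()
    for _ in range(steps):
        # state equation: dx/dt = P(x - alpha * grad f(x)) - x
        x += dt * (project(x - alpha * grad(x)) - x)
    return x

x_star = settle(np.zeros(3))           # feasible targets are reached exactly;
                                       # infeasible ones saturate at the bounds
```
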

  9. Neural network approach to prediction of temperatures around groundwater heat pump systems

    NASA Astrophysics Data System (ADS)

    Lo Russo, Stefano; Taddia, Glenda; Gnavi, Loretta; Verda, Vittorio

    2014-01-01

    A fundamental aspect in groundwater heat pump (GWHP) plant design is the correct evaluation of the thermally affected zone that develops around the injection well. This is particularly important to avoid interference with previously existing groundwater uses (wells) and underground structures. Temperature anomalies are detected through numerical methods. Computational fluid dynamic (CFD) models are widely used in this field because they offer the opportunity to calculate the time evolution of the thermal plume produced by a heat pump. The use of neural networks is proposed to determine the time evolution of the groundwater temperature downstream of an installation as a function of the possible utilization profiles of the heat pump. The main advantage of neural network modeling is the possibility of evaluating a large number of scenarios in a very short time, which is very useful for the preliminary analysis of future multiple installations. The neural network is trained using the results from a CFD model (FEFLOW) applied to the installation at Politecnico di Torino (Italy) under several operating conditions. The final results appeared to be reliable and the temperature anomalies around the injection well appeared to be well predicted.

  10. Understanding Physiological and Degenerative Natural Vision Mechanisms to Define Contrast and Contour Operators

    PubMed Central

    Demongeot, Jacques; Fouquet, Yannick; Tayyab, Muhammad; Vuillerme, Nicolas

    2009-01-01

    Background Dynamical systems such as neural networks based on lateral inhibition have a large field of applications in image processing, robotics and morphogenesis modeling. In this paper, we propose some examples of dynamical flows used in image contrasting and contouring. Methodology First we present the physiological basis of retinal function, showing the role of lateral inhibition in the generation of optical illusions and pathologic processes. Then, based on these biological considerations about real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach or its continuous version, i.e. a non-isotropic reaction-diffusion partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm and an attentional focus operator. Then, we introduce the new notion of mixed potential Hamiltonian flows, compare it with the watershed method, and use it for contouring. Conclusions We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery. PMID:19547712
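
The lateral-inhibition principle behind these contrasting operators can be sketched in a few lines: each unit receives its input minus a fraction of its neighbours' activity, which sharpens edges with a Mach-band-like overshoot. The discrete, circular one-dimensional version below is an illustrative simplification, with an invented stimulus and inhibition strength.

```python
import numpy as np

# Discrete lateral-inhibition operator: each unit receives its input
# minus a fraction of its neighbours' activity, iterated to a fixed
# point. One-dimensional and circular for simplicity; the inhibition
# strength k and the stimulus are illustrative assumptions.

def lateral_inhibition(image, k=0.2, iterations=20):
    act = image.astype(float).copy()
    for _ in range(iterations):
        neighbours = (np.roll(act, 1) + np.roll(act, -1)) / 2.0
        act = np.maximum(image - k * neighbours, 0.0)   # rectified response
    return act

bar = np.array([0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0.])
enhanced = lateral_inhibition(bar)
# Units at the edges of the bright bar end up more active than interior
# ones: the Mach-band-like overshoot that enhances contrast.
```
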

  11. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been steadily increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity remains an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degree of neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to better-reflected competition among different neurons in the developed SNN model, as well as more effectively encoded and processed relevant dynamic information through its learning and self-organizing mechanism. This result offers insight into the optimization of computational models of spiking neural networks with neural plasticity.
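
The two rules combined here have simple canonical forms. The sketch below gives a pair-based STDP window and a homeostatic intrinsic-plasticity threshold update; the constants and exact functional forms are illustrative assumptions, not those of the paper's LSM implementation.

```python
import numpy as np

# Canonical forms of the two learning rules combined in the paper:
# pair-based STDP for synapses and a homeostatic intrinsic-plasticity
# (IP) update for neuronal excitability. Constants and exact
# functional forms are illustrative assumptions.

A_plus, A_minus, tau = 0.01, 0.012, 20.0      # amplitudes; tau in ms

def stdp_dw(dt_spike):
    """Weight change for spike-time difference dt_spike = t_post - t_pre (ms)."""
    if dt_spike > 0:                          # pre before post: potentiate
        return A_plus * np.exp(-dt_spike / tau)
    return -A_minus * np.exp(dt_spike / tau)  # post before pre: depress

def ip_update(threshold, rate, target=5.0, eta=0.001):
    """Nudge the firing threshold so the neuron's rate tracks a target:
    too active -> raise threshold, too quiet -> lower it."""
    return threshold + eta * (rate - target)
```
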

  12. Beyond Point Charges: Dynamic Polarization from Neural Net Predicted Multipole Moments.

    PubMed

    Darley, Michael G; Handley, Chris M; Popelier, Paul L A

    2008-09-09

    Intramolecular polarization is the change to the electron density of a given atom upon variation in the positions of the neighboring atoms. We express the electron density in terms of multipole moments. Using glycine and N-methylacetamide (NMA) as pilot systems, we show that neural networks can capture the change in electron density due to polarization. After training, modestly sized neural networks successfully predict the atomic multipole moments from the nuclear positions of all atoms in the molecule. Accurate electrostatic energies between two atoms can be then obtained via a multipole expansion, inclusive of polarization effects. As a result polarization is successfully modeled at short-range and without an explicit polarizability tensor. This approach puts charge transfer and multipolar polarization on a common footing. The polarization procedure is formulated within the context of quantum chemical topology (QCT). Nonbonded atom-atom interactions in glycine cover an energy range of 948 kJ mol(-1), with an average energy difference between true and predicted energy of 0.2 kJ mol(-1), the largest difference being just under 1 kJ mol(-1). Very similar energy differences are found for NMA, which spans a range of 281 kJ mol(-1). The current proof-of-concept enables the construction of a new protein force field that incorporates electron density fragments that dynamically respond to their fluctuating environment.
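
The energy evaluation that the predicted moments feed into is a standard multipole expansion. The sketch below implements its leading terms (charge-charge, charge-dipole, dipole-dipole) in Gaussian/atomic units, with r_vec pointing from site 2 to site 1; higher moments, which the paper also uses, are omitted.

```python
import numpy as np

# Leading terms of the atom-atom multipole expansion used for the
# electrostatic energy. Truncated at dipole-dipole for illustration;
# the paper's neural networks supply the moments themselves.

def electrostatic_energy(q1, mu1, q2, mu2, r_vec):
    """Charge-charge, charge-dipole and dipole-dipole terms.
    r_vec points from site 2 to site 1 (Gaussian/atomic units)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    e_qq = q1 * q2 / r                                                  # charge-charge
    e_qmu = (q1 * np.dot(mu2, rhat) - q2 * np.dot(mu1, rhat)) / r**2   # charge-dipole
    e_mumu = (np.dot(mu1, mu2)
              - 3.0 * np.dot(mu1, rhat) * np.dot(mu2, rhat)) / r**3    # dipole-dipole
    return e_qq + e_qmu + e_mumu
```
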

  13. De novo peptide sequencing by deep learning

    PubMed Central

    Tran, Ngoc Hieu; Zhang, Xianglilan; Xin, Lei; Shan, Baozhen; Li, Ming

    2017-01-01

    De novo peptide sequencing from tandem MS data is the key technology in proteomics for the characterization of proteins, especially for new sequences, such as mAbs. In this study, we propose a deep neural network model, DeepNovo, for de novo peptide sequencing. DeepNovo architecture combines recent advances in convolutional neural networks and recurrent neural networks to learn features of tandem mass spectra, fragment ions, and sequence patterns of peptides. The networks are further integrated with local dynamic programming to solve the complex optimization task of de novo sequencing. We evaluated the method on a wide variety of species and found that DeepNovo considerably outperformed state of the art methods, achieving 7.7–22.9% higher accuracy at the amino acid level and 38.1–64.0% higher accuracy at the peptide level. We further used DeepNovo to automatically reconstruct the complete sequences of antibody light and heavy chains of mouse, achieving 97.5–100% coverage and 97.2–99.5% accuracy, without assisting databases. Moreover, DeepNovo is retrainable to adapt to any sources of data and provides a complete end-to-end training and prediction solution to the de novo sequencing problem. Not only does our study extend the deep learning revolution to a new field, but it also shows an innovative approach in solving optimization problems by using deep learning and dynamic programming. PMID:28720701

  14. Time-lapse imaging of neural development: zebrafish lead the way into the fourth dimension.

    PubMed

    Rieger, Sandra; Wang, Fang; Sagasti, Alvaro

    2011-07-01

    Time-lapse imaging is often the only way to appreciate fully the many dynamic cell movements critical to neural development. Zebrafish possess many advantages that make them the best vertebrate model organism for live imaging of dynamic development events. This review will discuss technical considerations of time-lapse imaging experiments in zebrafish, describe selected examples of imaging studies in zebrafish that revealed new features or principles of neural development, and consider the promise and challenges of future time-lapse studies of neural development in zebrafish embryos and adults. Copyright © 2011 Wiley-Liss, Inc.

  15. Estimating spatio-temporal dynamics of stream total phosphate concentration by soft computing techniques.

    PubMed

    Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan

    2016-08-15

    This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting TP concentrations along a river simultaneously. Two different types of artificial neural network (BPNN, a static back-propagation network, and NARX, a dynamic nonlinear autoregressive network with exogenous inputs) are constructed to model the dynamic system. The Dahan River in Taiwan is used as a study case, where ten years of seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and that the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentrations at the seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system when the data of interest are missing, hazardous or costly to collect. Copyright © 2016 Elsevier B.V. All rights reserved.
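
The NARX idea, regressing the next target value on lagged values of itself and of exogenous drivers, can be sketched on synthetic data. For self-containment, the NARX map below is fit by ordinary least squares; the study fits it with a neural network, and all series and coefficients here are invented.

```python
import numpy as np

# NARX in miniature: the next target value is regressed on lagged
# values of itself and of an exogenous driver. Ordinary least squares
# stands in for the study's neural network; data are synthetic.

rng = np.random.default_rng(2)
T = 300
u = rng.normal(size=T)                         # exogenous driver (e.g. flow)
y = np.zeros(T)                                # target (e.g. TP concentration)
for t in range(2, T):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.5 * u[t-1] + 0.01 * rng.normal()

# Design matrix of lagged regressors [y(t-1), y(t-2), u(t-1)].
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
target = y[2:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ coef                                # one-step-ahead prediction
```
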

  16. Dynamics of a neural system with a multiscale architecture

    PubMed Central

    Breakspear, Michael; Stam, Cornelis J

    2005-01-01

    The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principle scale-specific neurobiological processes. The dynamics at larger scales are ‘slaved’ to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448
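
A toy version of 'slaving' a larger scale to the emergent behaviour of a smaller one: an ensemble of fast phase oscillators synchronizes, and its order parameter drives a single slow oscillator. The Kuramoto-style coupling below is an illustrative stand-in for the paper's wavelet-based coupling function, with invented parameters.

```python
import numpy as np

# Toy two-scale system: fast phase oscillators (small scale)
# synchronize, and their order parameter drives one slow oscillator
# (large scale). Kuramoto-style coupling stands in for the paper's
# wavelet-based coupling function; all parameters are invented.

rng = np.random.default_rng(3)
N, dt, steps = 50, 0.01, 2000
omega_fast = rng.normal(10.0, 0.5, N)      # small-scale natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)   # small-scale phases
phi, omega_slow = 0.0, 1.0                 # large-scale phase and frequency
K_within, K_up = 4.0, 2.0                  # coupling strengths

sync = []
for _ in range(steps):
    order = np.mean(np.exp(1j * theta))    # emergent mean field of the ensemble
    r, psi = np.abs(order), np.angle(order)
    theta += dt * (omega_fast + K_within * r * np.sin(psi - theta))
    phi += dt * (omega_slow + K_up * r * np.sin(psi - phi))  # slaved large scale
    sync.append(r)
# The order parameter r grows from near zero (incoherent phases) to a
# high value once the small-scale ensemble synchronizes.
```
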

  17. Linking dynamic patterns of neural activity in orbitofrontal cortex with decision making.

    PubMed

    Rich, Erin L; Stoll, Frederic M; Rudebeck, Peter H

    2018-04-01

    Humans and animals demonstrate extraordinary flexibility in choice behavior, particularly when deciding based on subjective preferences. We evaluate options on different scales, deliberate, and often change our minds. Little is known about the neural mechanisms that underlie these dynamic aspects of decision-making, although neural activity in orbitofrontal cortex (OFC) likely plays a central role. Recent evidence from studies in macaques shows that attention modulates value responses in OFC, and that ensembles of OFC neurons dynamically signal different options during choices. When contexts change, these ensembles flexibly remap to encode the new task. Determining how these dynamic patterns emerge and relate to choices will inform models of decision-making and OFC function. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing

    PubMed Central

    Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.

    2013-01-01

    Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How or whether activity in frontal areas can influence activity and sensory processing in A1 and the detailed changes occurring in A1 on the level of single neurons and in neuronal populations remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723

  19. Empirical modeling ENSO dynamics with complex-valued artificial neural networks

    NASA Astrophysics Data System (ADS)

    Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry

    2016-04-01

    The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation, ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. Efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, to reduce observational data sets efficiently, we use complex-valued (Hilbert) empirical orthogonal functions, which are appropriate, by their nature, for describing propagating structures, unlike traditional empirical orthogonal functions. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the behavior of the Jin-Neelin-Ghil ENSO model [1] and real ENSO variability from sea surface temperature anomaly data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
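
Why Hilbert (complex) EOFs suit propagating structures can be shown on a toy travelling wave: a single complex EOF captures a pattern that ordinary real EOFs must split across two modes. The field sizes and wavenumbers below are illustrative assumptions.

```python
import numpy as np

# Complex-valued (Hilbert) EOF decomposition on a toy travelling wave.
# One complex EOF captures the propagating pattern that real EOFs
# split across two modes. Sizes and wavenumbers are assumptions.

def analytic(x):
    """Analytic signal along axis 0 (FFT-based Hilbert transform)."""
    n = x.shape[0]
    X = np.fft.fft(x, axis=0)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h[:, None], axis=0)

t = np.arange(128)[:, None]                            # time
x = np.arange(16)[None, :]                             # space
field = np.cos(2 * np.pi * (8 * t / 128 - x / 10.0))   # travelling wave

Z = analytic(field)                                    # complex (Hilbert) field
Z = Z - Z.mean(axis=0)
s_complex = np.linalg.svd(Z, compute_uv=False)
explained = s_complex[0] ** 2 / np.sum(s_complex ** 2)   # ~1: one mode suffices

# For comparison: real EOFs of the same field need two modes.
s_real = np.linalg.svd(field - field.mean(axis=0), compute_uv=False)
```
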

  20. Neural-Dynamic-Method-Based Dual-Arm CMG Scheme With Time-Varying Constraints Applied to Humanoid Robots.

    PubMed

    Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing

    2015-12-01

    We propose a dual-arm cyclic-motion-generation (DACMG) scheme based on a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. Following a neural-dynamic design method, a cyclic-motion performance index is first constructed. This performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of the two arms and the time-varying joint limits. It can not only generate cyclic motion of the two arms of a humanoid robot but also control the arms to move to a desired position, while incorporating physical-limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and accuracy of the TVC-DACMG scheme and the neural network solver.

  1. Capturing the temporal evolution of choice across prefrontal cortex

    PubMed Central

    Hunt, Laurence T; Behrens, Timothy EJ; Hosokawa, Takayuki; Wallis, Jonathan D; Kennerley, Steven W

    2015-01-01

    Activity in prefrontal cortex (PFC) has been richly described using economic models of choice. Yet such descriptions fail to capture the dynamics of decision formation. Describing dynamic neural processes has proven challenging due to the problem of indexing the internal state of PFC and its trial-by-trial variation. Using primate neurophysiology and human magnetoencephalography, we here recover a single-trial index of PFC internal states from multiple simultaneously recorded PFC subregions. This index can explain the origins of neural representations of economic variables in PFC. It describes the relationship between neural dynamics and behaviour in both human and monkey PFC, directly bridging between human neuroimaging data and underlying neuronal activity. Moreover, it reveals a functionally dissociable interaction between orbitofrontal cortex, anterior cingulate cortex and dorsolateral PFC in guiding cost-benefit decisions. We cast our observations in terms of a recurrent neural network model of choice, providing formal links to mechanistic dynamical accounts of decision-making. DOI: http://dx.doi.org/10.7554/eLife.11945.001 PMID:26653139

  2. Dual adaptive dynamic control of mobile robots using neural networks.

    PubMed

    Bugeja, Marvin K; Fabri, Simon G; Camilleri, Liberato

    2009-02-01

    This paper proposes two novel dual adaptive neural control schemes for the dynamic control of nonholonomic mobile robots. The two schemes are developed in discrete time, and the robot's nonlinear dynamic functions are assumed to be unknown. Gaussian radial basis function and sigmoidal multilayer perceptron neural networks are used for function approximation. In each scheme, the unknown network parameters are estimated stochastically in real time, and no preliminary offline neural network training is used. In contrast to other adaptive techniques hitherto proposed in the literature on mobile robots, the dual control laws presented in this paper do not rely on the heuristic certainty equivalence property but account for the uncertainty in the estimates. This results in a major improvement in tracking performance, despite the plant uncertainty and unmodeled dynamics. Monte Carlo simulation and statistical hypothesis testing are used to illustrate the effectiveness of the two proposed stochastic controllers as applied to the trajectory-tracking problem of a differentially driven wheeled mobile robot.

  3. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    PubMed

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. An initially all-to-all-connected spiking or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumed by this clustering procedure is much shorter for the burst-based self-organized neural network (BSON) than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more pronounced small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network self-organized from bursting dynamics has high efficiency in information processing.
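
The distinctive ingredient here is the symmetric STDP window: the weight update depends only on the magnitude of the spike-time difference, not its sign, so near-coincident pairs are potentiated regardless of firing order. A Gaussian-shaped sketch, with constants that are illustrative assumptions:

```python
import numpy as np

# Symmetric STDP window: the update depends only on |t_post - t_pre|,
# potentiating near-coincident spike pairs regardless of order and
# mildly depressing distant ones. Shape and constants are illustrative
# assumptions, not the paper's exact rule.

def symmetric_stdp(dt_spike, A=0.01, sigma=10.0, B=0.004):
    """Weight change for spike-time difference dt_spike (ms)."""
    return A * np.exp(-dt_spike ** 2 / (2.0 * sigma ** 2)) - B
```
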

  4. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng

In this paper, the generation of multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than that for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from bursting dynamics has high efficiency in information processing.

  5. From individual spiking neurons to population behavior: Systematic elimination of short-wavelength spatial modes

    NASA Astrophysics Data System (ADS)

    Steyn-Ross, Moira L.; Steyn-Ross, D. A.

    2016-02-01

Mean-field models of the brain approximate spiking dynamics by assuming that each neuron responds to its neighbors via a naive spatial average that neglects local fluctuations and correlations in firing activity. In this paper we address this issue by introducing a rigorous formalism to enable spatial coarse-graining of spiking dynamics, scaling from the microscopic level of a single type 1 (integrator) neuron to a macroscopic assembly of spiking neurons that are interconnected by chemical synapses and nearest-neighbor gap junctions. Spiking behavior at the single-neuron scale ℓ ≈ 10 μm is described by Wilson's two-variable conductance-based equations [H. R. Wilson, J. Theor. Biol. 200, 375 (1999), 10.1006/jtbi.1999.1002], driven by fields of incoming neural activity from neighboring neurons. We map these equations to a coarser spatial resolution of grid length Bℓ, with B ≫ 1 being the blocking ratio linking micro and macro scales. Our method systematically eliminates high-frequency (short-wavelength) spatial modes q⃗ in favor of low-frequency spatial modes Q⃗ using an adiabatic elimination procedure that has been shown to be equivalent to the path-integral coarse graining applied in renormalization group theory of critical phenomena. This bottom-up neural regridding allows us to track the percolation of synaptic and ion-channel noise from the single neuron up to the scale of macroscopic population-average variables. Anticipated applications of neural regridding include extraction of the current-to-firing-rate transfer function, investigation of fluctuation criticality near phase-transition tipping points, determination of spatial scaling laws for avalanche events, and prediction of the spatial extent of self-organized macrocolumnar structures.
As a first-order exemplar of the method, we recover nonlinear corrections for a coarse-grained Wilson spiking neuron embedded in a network of identical diffusively coupled neurons whose chemical synapses have been disabled. Intriguingly, we find that reblocking transforms the original type 1 Wilson integrator into a type 2 resonator whose spike-rate transfer function exhibits abrupt spiking onset with near-vertical takeoff and chaotic dynamics just above threshold.
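The spirit of the regridding step, averaging over blocks of B sites so that only long-wavelength modes survive, can be illustrated with a toy 1-D block average. The blocking ratio and test field below are arbitrary choices for illustration, not the paper's neural equations.

```python
import numpy as np

def reblock(field, B):
    """Coarse-grain a 1-D field by block-averaging groups of B sites,
    discarding spatial modes shorter than the new grid spacing."""
    n = len(field) - len(field) % B          # trim so the length divides by B
    return field[:n].reshape(-1, B).mean(axis=1)

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
fine = np.sin(x) + 0.3 * rng.standard_normal(x.size)   # slow mode + noise
coarse = reblock(fine, B=16)                           # blocking ratio B = 16
# the slow sin(x) mode survives; short-wavelength noise variance is suppressed
```

Averaging B sites leaves the slow mode essentially untouched while cutting the variance of uncorrelated short-wavelength noise by roughly a factor of B, which is the qualitative effect the adiabatic elimination makes systematic.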

  6. Dynamical foundations of the neural circuit for bayesian decision making.

    PubMed

    Morita, Kenji

    2009-07-01

On the basis of accumulating behavioral and neural evidence, it has recently been proposed that the neural circuits of humans and animals are equipped with several specific properties, which ensure that perceptual decision making implemented by these circuits can be nearly optimal in terms of Bayesian inference. Here, I introduce the basic ideas of this proposal and discuss its implications from the standpoint of biophysical modeling developed in the framework of dynamical systems.
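The Bayesian view of decision circuits summarized above is often illustrated with sequential evidence accumulation. A minimal SPRT-style sketch follows; the Gaussian likelihoods and parameter values are assumptions for illustration, not the paper's model.

```python
import numpy as np

def accumulate_llr(samples, mu=0.5, sigma=1.0):
    """Accumulate the log-likelihood ratio for H1: N(mu, sigma^2)
    versus H0: N(-mu, sigma^2), one sample at a time (SPRT-style).
    For Gaussians the per-sample LLR is linear in the data."""
    return (2 * mu / sigma**2) * np.cumsum(samples)

rng = np.random.default_rng(1)
evidence = rng.normal(0.5, 1.0, size=200)   # data actually drawn from H1
trace = accumulate_llr(evidence)
decision = "H1" if trace[-1] > 0 else "H0"
```

The running sum `trace` plays the role of an accumulator neuron's activity; crossing a fixed bound would trigger the choice, which is the dynamical-systems reading of near-optimal Bayesian decision making.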

  7. ER fluid applications to vibration control devices and an adaptive neural-net controller

    NASA Astrophysics Data System (ADS)

    Morishita, Shin; Ura, Tamaki

    1993-07-01

    Four applications of electrorheological (ER) fluid to vibration control actuators and an adaptive neural-net control system suitable for the controller of ER actuators are described: a shock absorber system for automobiles, a squeeze film damper bearing for rotational machines, a dynamic damper for multidegree-of-freedom structures, and a vibration isolator. An adaptive neural-net control system composed of a forward model network for structural identification and a controller network is introduced for the control system of these ER actuators. As an example study of intelligent vibration control systems, an experiment was performed in which the ER dynamic damper was attached to a beam structure and controlled by the present neural-net controller so that the vibration in several modes of the beam was reduced with a single dynamic damper.

  8. Neural dynamic optimization for control systems. I. Background.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the background and motivations for the development of NDO, while the two subsequent companion papers present the theory of NDO and demonstrate the method with several applications, including control of autonomous vehicles and of a robot arm, respectively.

  9. Neural dynamic optimization for control systems.III. Applications.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

For pt. II, see ibid., p. 490-501. The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper demonstrates NDO with several applications, including control of autonomous vehicles and of a robot arm, while the two other companion papers describe the background for the development of NDO and present the theory of the method, respectively.

  10. Neural dynamic optimization for control systems.II. Theory.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

    The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the theory of NDO, while the two other companion papers of this topic explain the background for the development of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.
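The core NDO idea, approximating DP's optimal feedback by descending the cost of a rolled-out trajectory, can be shrunk to a toy sketch: a scalar linear plant, a single feedback gain standing in for the neural network, and a finite-difference gradient standing in for backpropagation. All plant and cost parameters are invented for illustration.

```python
import numpy as np

def rollout_cost(k, x0=2.0, a=0.9, b=1.0, horizon=30, q=1.0, r=0.1):
    """Quadratic cost of the scalar plant x' = a*x + b*u under feedback u = -k*x."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        cost += q * x**2 + r * u**2
        x = a * x + b * u
    return cost

# gradient descent on the feedback gain, with a finite-difference gradient
k, eps, lr = 0.0, 1e-5, 5e-3
for _ in range(500):
    g = (rollout_cost(k + eps) - rollout_cost(k - eps)) / (2 * eps)
    k -= lr * g
```

Replacing the scalar gain with a network and the finite difference with backpropagation through time recovers the flavor of the approach; storage stays constant in the horizon, which is the advantage over tabulating a DP solution.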

  11. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability, and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. This new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves the network generalization ability, but also accelerates the convergence speed of a network, avoids becoming trapped in local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sale of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in predicting accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.
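A dynamically self-adapting step size is the simplest ingredient of such schemes. The sketch below uses the classic "bold driver" heuristic on a linear least-squares toy problem; it is a generic illustration of adaptive-rate gradient descent, not the paper's PCA/PSO-based algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true                       # noiseless synthetic regression target

w = np.zeros(3)
lr = 0.001
prev_err = np.inf
for _ in range(200):
    err = np.mean((X @ w - y) ** 2)
    # dynamic self-adaptation of the step size ("bold driver" heuristic):
    # grow the rate while the error keeps falling, cut it sharply otherwise
    lr = lr * 1.05 if err < prev_err else lr * 0.5
    prev_err = err
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad
```

The rate ratchets up until it brushes the stability limit, then backs off, so convergence is fast without hand-tuning, the same motivation the record gives for adapting the BP learning rate.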

  12. An ensemble of dynamic neural network identifiers for fault detection and isolation of gas turbine engines.

    PubMed

    Amozegar, M; Khorasani, K

    2016-04-01

    In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
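The residual-based FDI logic described above can be sketched in a few lines: average the predictions of several identifiers, form the residual against the measured output, and raise an alarm when it exceeds a threshold calibrated on healthy data. The signals and the three stand-in "identifiers" below are synthetic placeholders for the trained MLP/RBF/SVM models, not engine data.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 500)
healthy = np.sin(t)                              # stand-in for the engine output

# three imperfect identifiers of the healthy dynamics (stand-ins for MLP/RBF/SVM)
predictions = [healthy + rng.normal(0, 0.05, t.size) for _ in range(3)]
ensemble = np.mean(predictions, axis=0)          # homogeneous averaging ensemble

faulty = healthy.copy()
faulty[300:] += 0.8                              # abrupt additive fault at t = 6

residual = np.abs(faulty - ensemble)
threshold = 4 * residual[:300].std()             # calibrated on the healthy segment
alarm = residual > threshold
fault_detected = alarm[300:].mean() > 0.9        # alarm persists after the fault
```

Averaging the identifiers lowers the residual noise floor, which is why the ensemble permits a tighter threshold (fewer missed faults) than any single model, the effect the record quantifies with confusion matrices.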

  13. Logic Dynamics for Deductive Inference -- Its Stability and Neural Basis

    NASA Astrophysics Data System (ADS)

    Tsuda, Ichiro

    2014-12-01

    We propose a dynamical model that represents a process of deductive inference. We discuss the stability of logic dynamics and a neural basis for the dynamics. We propose a new concept of descriptive stability, thereby enabling a structure of stable descriptions of mathematical models concerning dynamic phenomena to be clarified. The present theory is based on the wider and deeper thoughts of John S. Nicolis. In particular, it is based on our joint paper on the chaos theory of human short-term memories with a magic number of seven plus or minus two.

  14. Dynamic Information Encoding With Dynamic Synapses in Neural Adaptation

    PubMed Central

    Li, Luozheng; Mi, Yuanyuan; Zhang, Wenhao; Wang, Da-Hui; Wu, Si

    2018-01-01

Adaptation refers to the general phenomenon whereby the neural system dynamically adjusts its response properties according to the statistics of external inputs. In response to an invariant stimulation, neuronal firing rates first increase dramatically and then decrease gradually to a low level close to the background activity. This prompts a question: during adaptation, how does the neural system encode the repeated stimulation with attenuated firing rates? It has been suggested that the neural system may employ a dynamical encoding strategy during adaptation: the stimulus information is mainly encoded by the strong independent spiking of neurons at the early stage of adaptation, while the weak but synchronized activity of neurons encodes the stimulus information at the later stage. A previous study demonstrated that short-term facilitation (STF) of electrical synapses, which increases the synchronization between neurons, can provide a mechanism to realize dynamical encoding. In the present study, we further explore whether short-term plasticity (STP) of chemical synapses, a form of interaction more common in the cortex than electrical synapses, can support dynamical encoding. We build a large-size network with chemical synapses between neurons. Notably, facilitation of chemical synapses only mildly enhances pair-wise correlations between neurons, but its effect on increasing synchronization of the network can be significant, and hence it can serve as a mechanism to convey the stimulus information. To read out the stimulus information, we consider a downstream neuron that receives balanced excitatory and inhibitory inputs from the network, so that it responds only to synchronized firings of the network. The response of the downstream neuron therefore indicates the presence of the repeated stimulation. Overall, our study demonstrates that STP of chemical synapses can serve as a mechanism to realize dynamical neural encoding. We believe that our study sheds light on the mechanism underlying efficient neural information processing via adaptation. PMID:29636675
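Short-term plasticity of chemical synapses is commonly modeled with Tsodyks-Markram dynamics. The sketch below uses that standard model in a facilitation-dominated regime; the parameter values are chosen for illustration and are not taken from the paper.

```python
import numpy as np

def tm_synapse(spike_times, U=0.05, tau_f=750.0, tau_d=50.0):
    """Tsodyks-Markram short-term plasticity: the utilization u facilitates
    on each spike and decays back to U; the resource x depletes on release
    and recovers toward 1. The released fraction per spike is u*x."""
    u, x, last = U, 1.0, 0.0
    release = []
    for t in spike_times:
        dt = t - last
        u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays to U
        x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # resources recover to 1
        u = u + U * (1.0 - u)                      # spike facilitates utilization
        r = u * x                                  # fraction of resources released
        x = x - r                                  # spike depletes resources
        release.append(r)
        last = t
    return np.array(release)

burst = tm_synapse(np.arange(0.0, 100.0, 10.0))    # 10 spikes at 100 Hz (ms units)
```

With these facilitation-dominated constants the released fraction grows over the burst, so later (weaker-rate) synchronized volleys transmit more per spike, the property the record exploits for late-stage encoding.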

  15. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.
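The core idea, each scalar synaptic weight replaced by a digital FIR filter so the unit carries temporal memory, can be sketched as follows. The filter taps and sigmoid output stage are illustrative assumptions, not the NASA implementation.

```python
import numpy as np

def fir_neuron(inputs, filters):
    """Space-time unit: each input line feeds an FIR filter (a short vector of
    tap weights) instead of a single scalar weight; the filtered lines are
    summed and passed through a sigmoid."""
    total = sum(np.convolve(x, h)[: len(x)] for x, h in zip(inputs, filters))
    return 1.0 / (1.0 + np.exp(-total))

t = np.arange(50)
x1 = (t == 10).astype(float)             # an impulse on input line 1
h1 = np.array([0.5, 1.0, 0.5, 0.25])     # illustrative 4-tap synaptic filter
out = fir_neuron([x1], [h1])
```

A single input impulse now influences the output over four time steps (the filter's impulse response), which is exactly the distributed temporal memory the brief describes; backpropagation then trains the tap weights instead of scalar weights.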

  16. Neutral Theory and Scale-Free Neural Dynamics

    NASA Astrophysics Data System (ADS)

    Martinello, Matteo; Hidalgo, Jorge; Maritan, Amos; di Santo, Serena; Plenz, Dietmar; Muñoz, Miguel A.

    2017-10-01

    Neural tissues have been consistently observed to be spontaneously active and to generate highly variable (scale-free distributed) outbursts of activity in vivo and in vitro. Understanding whether these heterogeneous patterns of activity stem from the underlying neural dynamics operating at the edge of a phase transition is a fascinating possibility, as criticality has been argued to entail many possible important functional advantages in biological computing systems. Here, we employ a well-accepted model for neural dynamics to elucidate an alternative scenario in which diverse neuronal avalanches, obeying scaling, can coexist simultaneously, even if the network operates in a regime far from the edge of any phase transition. We show that perturbations to the system state unfold dynamically according to a "neutral drift" (i.e., guided only by stochasticity) with respect to the background of endogenous spontaneous activity, and that such a neutral dynamics—akin to neutral theories of population genetics and of biogeography—implies marginal propagation of perturbations and scale-free distributed causal avalanches. We argue that causal information, not easily accessible to experiments, is essential to elucidate the nature and statistics of neural avalanches, and that neutral dynamics is likely to play an important role in the cortex functioning. We discuss the implications of these findings to design new empirical approaches to shed further light on how the brain processes and stores information.
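Neutral, marginal propagation of activity is the behavior of a branching process with mean offspring number exactly 1. A minimal sketch, with Poisson offspring and a size cap as modeling conveniences not taken from the paper:

```python
import numpy as np

def avalanche_sizes(n_trials, m=1.0, cap=10_000, seed=4):
    """Sample avalanche sizes from a branching process with mean offspring m.
    At m = 1 (critical / neutral drift) sizes are scale-free distributed;
    below 1 they are exponentially bounded."""
    rng = np.random.default_rng(seed)
    sizes = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        active, size = 1, 1
        while active and size < cap:
            active = rng.poisson(m * active)   # all active units spawn ~Poisson(m) total
            size += active
        sizes[i] = size
    return sizes

crit = avalanche_sizes(2000, m=1.0)     # heavy-tailed sizes
subcrit = avalanche_sizes(2000, m=0.5)  # small, exponentially bounded sizes
```

The m = 1 case illustrates the record's point: scale-free avalanche statistics can arise from marginal (neutral) propagation alone, with no fine-tuning of the network to a phase-transition edge.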

  17. Predicting cloud-to-ground lightning with neural networks

    NASA Technical Reports Server (NTRS)

    Barnes, Arnold A., Jr.; Frankel, Donald; Draper, James Stark

    1991-01-01

    A neural network is being trained to predict lightning at Cape Canaveral for periods up to two hours in advance. Inputs consist of ground based field mill data, meteorological tower data, lightning location data, and radiosonde data. High values of the field mill data and rapid changes in the field mill data, offset in time, provide the forecasts or desired output values used to train the neural network through backpropagation. Examples of input data are shown and an example of data compression using a hidden layer in the neural network is discussed.

  18. Toward an Improvement of the Analysis of Neural Coding.

    PubMed

    Alegre-Cortés, Javier; Soto-Sánchez, Cristina; Albarracín, Ana L; Farfán, Fernando D; Val-Calvo, Mikel; Ferrandez, José M; Fernandez, Eduardo

    2017-01-01

Machine learning and artificial intelligence have strong roots in principles of neural computation. Some examples are the structure of the first perceptron, inspired by the retina, neuroprosthetics based on ganglion cell recordings, and Hopfield networks. In addition, machine learning provides a powerful set of tools to analyze neural data, which has already proved its efficacy in fields as distant as speech recognition, behavioral state classification, and LFP recordings. However, despite the huge technological advances in dimensionality reduction, pattern selection, and clustering of neural data over recent years, there has not been a proportional development of the analytical tools used for Time-Frequency (T-F) analysis in neuroscience. Bearing this in mind, we introduce the convenience of using non-linear, non-stationary tools, EMD algorithms in particular, for the transformation of oscillatory neural data (EEG, EMG, spike oscillations…) into the T-F domain prior to its analysis with machine learning tools. We argue that, to achieve meaningful conclusions, the transformed data we analyze must be as faithful as possible to the original recording, so that distortions forced onto the data by restrictions in the T-F computation are not propagated to the results of the machine learning analysis. Moreover, bioinspired computation such as brain-machine interfaces may be enriched by a more precise definition of neuronal coding in which the non-linearities of neuronal dynamics are considered.

  19. Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2007-01-01

This paper describes the performance of a simplified dynamic inversion controller with neural network supplementation. This 6 DOF (degree-of-freedom) simulation study focuses on results with and without adaptation of neural networks, using a simulation of the NASA modified F-15, which has canards. One area of interest is the performance during a simulated surface failure while attempting to minimize the inertial cross coupling effect of a [B] matrix failure (a control derivative anomaly associated with a jammed or missing control surface). Another area of interest, also presented, is simulated aerodynamic failures ([A] matrix), such as a canard failure. The controller uses explicit models to produce desired angular rate commands. The dynamic inversion calculates the necessary surface commands to achieve the desired rates. The simplified dynamic inversion uses approximate short-period and roll-axis dynamics. Initial results indicated that the transient response for a [B] matrix failure using a neural network (NN) improved the control behavior compared to not using a neural network for a given failure; however, further evaluation of the controller showed comparable performance, with objections to the cross coupling effects (after changes were made to the controller). This paper describes the methods employed to reduce the cross coupling effect and maintain adequate tracking errors. The [A] matrix failure results show that control of the aircraft without adaptation is more difficult (less damped) than with active neural networks. Simulation results show that neural network augmentation of the controller improves performance in terms of tracking error and cross coupling reduction, and improves performance with aerodynamic-type failures.

  20. Convergence dynamics and pseudo almost periodicity of a class of nonautonomous RFDEs with applications

    NASA Astrophysics Data System (ADS)

    Fan, Meng; Ye, Dan

    2005-09-01

This paper studies the dynamics of a system of retarded functional differential equations (i.e., RFDEs), which generalize the Hopfield neural network models, the bidirectional associative memory neural networks, the hybrid network models of the cellular neural network type, and some population growth models. Sufficient criteria are established for global exponential stability and for the existence and uniqueness of a pseudo almost periodic solution. The approaches are based on constructing suitable Lyapunov functionals and the well-known Banach contraction mapping principle. The paper ends with applications of the main results to neural network models and population growth models, together with numerical simulations.

  1. Neuromechanic: a computational platform for simulation and analysis of the neural control of movement

    PubMed Central

    Bunderson, Nathan E.; Bingham, Jeffrey T.; Sohn, M. Hongchul; Ting, Lena H.; Burkholder, Thomas J.

    2015-01-01

Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics, with the goal of finding a feed-forward neural program to replicate experimental data or of estimating the force contributions of individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states as well as muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization, and stability analysis tools to provide structural insights into the neural control of movement. PMID:23027632

  2. Neuromechanic: a computational platform for simulation and analysis of the neural control of movement.

    PubMed

    Bunderson, Nathan E; Bingham, Jeffrey T; Sohn, M Hongchul; Ting, Lena H; Burkholder, Thomas J

    2012-10-01

Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics, with the goal of finding a feed-forward neural program to replicate experimental data or of estimating the force contributions of individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states and muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization, and stability analysis tools to provide structural insights into the neural control of movement. Copyright © 2012 John Wiley & Sons, Ltd.
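The linearization step described above amounts to computing the Jacobian of the state equations about an operating point and examining its eigenvalues. A generic finite-difference sketch (the damped-pendulum test system is an invented stand-in, not a musculoskeletal model):

```python
import numpy as np

def linearize(f, x0, eps=1e-6):
    """Central-difference Jacobian of dx/dt = f(x) about state x0; the
    eigenvalues of the returned matrix determine local stability."""
    x0 = np.asarray(x0, dtype=float)
    fx = np.asarray(f(x0))
    A = np.empty((fx.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        A[:, j] = (np.asarray(f(x0 + dx)) - np.asarray(f(x0 - dx))) / (2 * eps)
    return A

# damped pendulum: stable equilibrium hanging down, unstable balanced upright
def pendulum(x, g=9.81, L=1.0, b=0.5):
    theta, omega = x
    return np.array([omega, -(g / L) * np.sin(theta) - b * omega])

A_bottom = linearize(pendulum, [0.0, 0.0])
A_top = linearize(pendulum, [np.pi, 0.0])
stable = np.all(np.linalg.eigvals(A_bottom).real < 0)
unstable = np.any(np.linalg.eigvals(A_top).real > 0)
```

Neuromechanic's advantage is that the derivatives extend over muscle and neural-control states as well, so the same eigenvalue analysis can probe the stability contributed by feedback controllers, not just the rigid-body mechanics.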

  3. Data-driven inference of network connectivity for modeling the dynamics of neural codes in the insect antennal lobe

    PubMed Central

    Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan

    2014-01-01

The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics, we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recording from the AL, which discriminates between distinct odorant trajectories; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answering a key biological question: identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
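A firing-rate network with shared lateral inhibition reproduces the contrast-enhancement effect mentioned above. The toy wiring and "odor" inputs below are invented for illustration, not the connectome inferred in the paper.

```python
import numpy as np

def simulate(W, inputs, steps=300, dt=0.05, tau=1.0):
    """Firing-rate network: tau * dr/dt = -r + relu(W r + inputs)."""
    r = np.zeros(W.shape[0])
    for _ in range(steps):
        r += dt / tau * (-r + np.maximum(0.0, W @ r + inputs))
    return r

# toy AL-like wiring: weak self-excitation plus all-to-all lateral inhibition
n = 5
W = 0.1 * np.eye(n) - 0.3 * (np.ones((n, n)) - np.eye(n))
odor_a = np.array([1.0, 0.8, 0.1, 0.0, 0.0])   # hypothetical input patterns
odor_b = np.array([0.0, 0.0, 0.1, 0.9, 1.0])

code_a = simulate(W, odor_a)
code_b = simulate(W, odor_b)
# lateral inhibition silences the weakly driven middle unit in both codes,
# sharpening the two activity patterns into nearly non-overlapping codes
```

The weakly driven unit (input 0.1) is suppressed to zero by inhibition from the strongly driven units, so the steady-state "codes" for the two odors become more distinct than the raw inputs, the contrast enhancement the record reports.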

  4. Close-field electroporation gene delivery using the cochlear implant electrode array enhances the bionic ear.

    PubMed

    Pinyon, Jeremy L; Tadros, Sherif F; Froud, Kristina E; Y Wong, Ann C; Tompson, Isabella T; Crawford, Edward N; Ko, Myungseo; Morris, Renée; Klugmann, Matthias; Housley, Gary D

    2014-04-23

    The cochlear implant is the most successful bionic prosthesis and has transformed the lives of people with profound hearing loss. However, the performance of the "bionic ear" is still largely constrained by the neural interface itself. Current spread inherent to broad monopolar stimulation of the spiral ganglion neuron somata obviates the intrinsic tonotopic mapping of the cochlear nerve. We show in the guinea pig that neurotrophin gene therapy integrated into the cochlear implant improves its performance by stimulating spiral ganglion neurite regeneration. We used the cochlear implant electrode array for novel "close-field" electroporation to transduce mesenchymal cells lining the cochlear perilymphatic canals with a naked complementary DNA gene construct driving expression of brain-derived neurotrophic factor (BDNF) and a green fluorescent protein (GFP) reporter. The focusing of electric fields by particular cochlear implant electrode configurations led to surprisingly efficient gene delivery to adjacent mesenchymal cells. The resulting BDNF expression stimulated regeneration of spiral ganglion neurites, which had atrophied 2 weeks after ototoxic treatment, in a bilateral sensorineural deafness model. In this model, delivery of a control GFP-only vector failed to restore neuron structure, with atrophied neurons indistinguishable from unimplanted cochleae. With BDNF therapy, the regenerated spiral ganglion neurites extended close to the cochlear implant electrodes, with localized ectopic branching. This neural remodeling enabled bipolar stimulation via the cochlear implant array, with low stimulus thresholds and expanded dynamic range of the cochlear nerve, determined via electrically evoked auditory brainstem responses. This development may broadly improve neural interfaces and extend molecular medicine applications.

  5. Implementation of dynamic bias for neutron-photon pulse shape discrimination by using neural network classifiers

    NASA Astrophysics Data System (ADS)

    Cao, Zhong; Miller, L. F.; Buckner, M.

In order to accurately determine dose equivalent in radiation fields that include both neutrons and photons, it is necessary to measure the relative number of neutrons to photons and to characterize the energy dependence of the neutrons. The relationship between dose and dose equivalent begins to increase rapidly at about 100 keV; thus, it is necessary to separate neutrons from photons for neutron energies as low as about 100 keV in order to measure dose equivalent in a mixed radiation field that includes both neutrons and photons. Perceptron and backpropagation neural networks that use pulse amplitude and pulse rise-time information separate neutrons from photons with about 5% error for neutrons with energies as low as 100 keV, and this is accomplished for neutrons with energies that range from 100 keV to several MeV. If the ratio of neutrons to photons is changed by a factor of 10, the classification error increases to about 15% for the neural networks tested. A technique that uses the output from the perceptron as a priori information for a Bayesian classifier is more robust to changes in the relative number of neutrons to photons, and it obtains a 5% classification error when this ratio is changed by a factor of 10. Results from this research demonstrate that it is feasible to use commercially available instrumentation in combination with artificial intelligence techniques to develop a practical detector that will accurately measure dose equivalent in mixed neutron-photon radiation fields.

  6. Stimulus dependence of local field potential spectra: experiment versus theory.

    PubMed

    Barbieri, Francesca; Mazzoni, Alberto; Logothetis, Nikos K; Panzeri, Stefano; Brunel, Nicolas

    2014-10-29

    The local field potential (LFP) captures different neural processes, including integrative synaptic dynamics that cannot be observed by measuring only the spiking activity of small populations. Therefore, investigating how LFP power is modulated by external stimuli can offer important insights into sensory neural representations. However, gaining such insight requires developing data-driven computational models that can identify and disambiguate the neural contributions to the LFP. Here, we investigated how networks of excitatory and inhibitory integrate-and-fire neurons responding to time-dependent inputs can be used to interpret sensory modulations of LFP spectra. We computed analytically from such models the LFP spectra and the information that they convey about input and used these analytical expressions to fit the model to LFPs recorded in V1 of anesthetized macaques (Macaca mulatta) during the presentation of color movies. Our expressions explain 60%-98% of the variance of the LFP spectrum shape and its dependency upon movie scenes and we achieved this with realistic values for the best-fit parameters. In particular, synaptic best-fit parameters were compatible with experimental measurements and the predictions of firing rates, based only on the fit of LFP data, correlated with the multiunit spike rate recorded from the same location. Moreover, the parameters characterizing the input to the network across different movie scenes correlated with cross-scene changes of several image features. Our findings suggest that analytical descriptions of spiking neuron networks may become a crucial tool for the interpretation of field recordings. Copyright © 2014 the authors 0270-6474/14/3414589-17$15.00/0.

  7. Additive noise-induced Turing transitions in spatial systems with application to neural fields and the Swift-Hohenberg equation

    NASA Astrophysics Data System (ADS)

    Hutt, Axel; Longtin, Andre; Schimansky-Geier, Lutz

    2008-05-01

    This work studies the spatio-temporal dynamics of a generic integro-differential equation subject to additive random fluctuations. It introduces a combination of the stochastic center manifold approach for stochastic differential equations and the adiabatic elimination for Fokker-Planck equations, and studies analytically the system's stability near Turing bifurcations. Two types of fluctuations are studied: fluctuations uncorrelated in space and time, and global fluctuations, which are constant in space but uncorrelated in time. We show that global fluctuations shift the Turing bifurcation threshold, and that this shift is proportional to the fluctuation variance. Applications to a neural field equation and the Swift-Hohenberg equation reveal a shift of the bifurcation to larger control parameters, which represents a stabilization of the system. All analytical results are confirmed by numerical simulations of the resulting mode equations and of the full stochastic integro-differential equation. To gain some insight into experimental manifestations, the sum of uncorrelated and global additive fluctuations is studied numerically, and the analytical results on global fluctuations are confirmed qualitatively.

  8. Results on SSH neural network forecasting in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Rixen, Michel; Beckers, Jean-Marie; Alvarez, Alberto; Tintore, Joaquim

    2002-01-01

    Nowadays, satellites are the only monitoring systems that cover almost continuously all possible ocean areas, and they are an essential part of operational oceanography. A novel approach based on artificial intelligence (AI) concepts exploits past time series of satellite images to infer near-future ocean conditions at the surface using neural networks and genetic algorithms. The size of the AI problem is drastically reduced by decomposing the spatio-temporal variability contained in the remote sensing data with an empirical orthogonal function (EOF) decomposition. The problem of forecasting the dynamics of a 2D surface field is thus reduced by selecting the most relevant empirical modes, and non-linear time series predictors are then applied to the amplitudes only. In the present case study, we use altimetric maps of the Mediterranean Sea, combining TOPEX-POSEIDON and ERS-1/2 data for the period 1992 to 1997. The learning procedure is applied to each mode individually. The final forecast is then reconstructed from the EOFs and the forecasted amplitudes, and compared to the real observed field for validation of the method.
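    The EOF-plus-per-mode-forecast pipeline can be sketched as follows; the data are synthetic, and a simple AR(2) least-squares fit stands in for the paper's neural-network/genetic-algorithm predictors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "altimetric maps": T snapshots of an N-point field built from
# two oscillating spatial modes plus noise (a stand-in for SSH data).
T, N = 200, 50
t = np.arange(T)
x = np.linspace(0, 2 * np.pi, N)
field = (np.sin(0.2 * t)[:, None] * np.sin(x)[None, :]
         + 0.5 * np.cos(0.11 * t)[:, None] * np.cos(2 * x)[None, :]
         + 0.05 * rng.standard_normal((T, N)))

# EOF decomposition via SVD of the anomaly matrix.
mean = field.mean(axis=0)
U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)
amps = U * s                    # temporal amplitudes (one column per mode)
eofs = Vt                       # spatial patterns

# Per-mode predictor: AR(2) fit, applied to each amplitude individually.
def ar2_forecast(a):
    A = np.column_stack([a[1:-1], a[:-2]])
    c1, c2 = np.linalg.lstsq(A, a[2:], rcond=None)[0]
    return c1 * a[-1] + c2 * a[-2]

k = 2                           # retain only the most energetic modes
pred_amps = np.array([ar2_forecast(amps[:, m]) for m in range(k)])
forecast = mean + pred_amps @ eofs[:k]   # reconstruct the next field
```

    Truncating to the leading modes shrinks the learning problem from the full spatial grid to a handful of scalar time series, which is the core of the approach.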

  9. Generalized activity equations for spiking neural network dynamics.

    PubMed

    Buice, Michael A; Chow, Carson C

    2013-01-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate than rate neurons because of the inherent disparity in time scales: the spike duration is much shorter than the inter-spike interval, which in turn is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first-order terms in the perturbation expansion keep track of covariances.

  10. Non-Lipschitzian dynamics for neural net modelling

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.

  11. A Study of the Solar Wind-Magnetosphere Coupling Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Guo; Lundstedt, Henrik

    1996-12-01

    The interaction between the solar wind plasma and interplanetary magnetic field (IMF) and the Earth's magnetosphere induces geomagnetic activity. Geomagnetic storms can cause many adverse effects on technical systems in space and on the Earth. It is therefore of great significance to predict geomagnetic activity accurately, so as to minimize disruption to these operational systems and allow them to work as efficiently as possible. Dynamic neural networks are powerful in modeling the dynamics encoded in time series of data. In this study, we use partially recurrent neural networks to study the solar wind-magnetosphere coupling by predicting geomagnetic storms (as measured by the Dst index) from solar wind measurements. The solar wind, IMF, and geomagnetic index Dst data are hourly averages read from the National Space Science Data Center's OMNI database. We selected these data from the period 1963 to 1992, covering 10552 h, of which 9552 h are storm-time periods and 1000 h are quiet-time periods. The data are categorized into three sets: a training set (6634 h), a cross-validation set (1962 h), and a test set (1956 h). The validation set is used to determine where training should be stopped, whereas the test set is used to assess the networks' generalization capability (out-of-sample performance). Based on correlation analysis between the Dst index and various solar wind parameters (including various combinations of solar wind parameters), the best coupling functions can be identified from the out-of-sample performance of the trained neural networks. The coupling functions found are then used to forecast geomagnetic storms one to several hours in advance. Predictions obtained by iterating the single-step prediction several times are compared with non-iterated, direct predictions. We present the best solar wind-magnetosphere coupling functions and the corresponding prediction results.
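    The comparison between iterated single-step and direct multi-step prediction can be illustrated on a toy linear series; the AR(1) process and least-squares fits below are assumptions standing in for the recurrent-network Dst forecasts:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the hourly Dst series: a noisy AR(1) process.
T = 3000
x = np.zeros(T)
for i in range(1, T):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()

h = 3  # forecast horizon in "hours"

# One-step model, iterated h times, versus a direct h-step model.
a = np.linalg.lstsq(x[:-1, None], x[1:], rcond=None)[0][0]
b = np.linalg.lstsq(x[:-h, None], x[h:], rcond=None)[0][0]
iterated = a ** h * x[-1]      # iterate the single-step prediction
direct = b * x[-1]             # predict h steps ahead in one shot
```

    For a linear process the two agree (b approaches a**h); for nonlinear network predictors, iteration compounds the one-step model error, which is exactly the trade-off the study examines.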

  12. LANL* V2.0: global modeling and validation

    NASA Astrophysics Data System (ADS)

    Koller, J.; Zaharia, S.

    2011-03-01

    We describe in this paper the new version of LANL*. Like the previous version, version V2.0 of LANL* is an artificial neural network (ANN) for calculating the magnetic drift invariant, L*, which is used for modeling radiation belt dynamics and for other space weather applications. We have implemented the following enhancements in the new version: (1) we have removed the limitation to geosynchronous orbit, so the model can now be used for any type of orbit; (2) the new version is based on the improved magnetic field model by Tsyganenko and Sitnov (2005) (TS05) instead of the older model by Tsyganenko et al. (2003). We have validated the model and compared our results to L* calculations with the TS05 model based on ephemerides for CRRES, Polar, GPS, a LANL geosynchronous satellite, and a virtual RBSP-type orbit. We find that the neural network performs very well for all these orbits, with an error typically ΔL* < 0.2, which corresponds to an error of 3% at geosynchronous orbit. This new LANL* V2.0 artificial neural network is orders of magnitude faster than traditional numerical field-line integration with the TS05 model. It has applications to real-time radiation belt forecasting, analysis of data sets involving decades of satellite observations, and other problems in space weather.

  13. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast lidar-measured wind fields. To reduce the residual error of the grey-algorithm prediction, the residual sequence of the grey algorithm is trained by the BP neural network: the trained network forecasts the residual sequence, and the predicted residuals are then used to correct the forecast sequence of the grey algorithm. Test data show that the grey algorithm modified by the BP neural network effectively reduces the residual error and improves prediction precision.
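    A sketch of the residual-correction scheme: a classic GM(1,1) grey model produces the base forecast, and a model fitted to the residual sequence corrects it. A linear trend fit stands in for the BP neural network, and the wind series is synthetic:

```python
import numpy as np

def gm11(x0):
    """Classic GM(1,1) grey model: fitted values plus a one-step forecast."""
    n = len(x0)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + 1)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)          # back to the raw series
    return x0_hat[:n], x0_hat[n]

# Synthetic positive "wind" series: exponential trend plus a ripple.
t = np.arange(12)
series = 10.0 * np.exp(0.05 * t) + 0.3 * np.sin(t)

fit, forecast = gm11(series)
residuals = series - fit

# Residual correction: the paper trains a BP network on the residual
# sequence; a linear trend fit is used here as a stand-in.
c = np.polyfit(t, residuals, 1)
corrected = forecast + np.polyval(c, len(series))
```

    The grey model captures the smooth trend, and the residual model captures what the grey model misses; their sum is the corrected forecast.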

  14. Technologies for imaging neural activity in large volumes

    PubMed Central

    Ji, Na; Freeman, Jeremy; Smith, Spencer L.

    2017-01-01

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Because it collects data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behavior. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits imaging speed, and aberrations, which restrict the imaging volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point-spread-function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for processing and analyzing volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics and helping elucidate how brain regions work in concert to support behavior. PMID:27571194

  15. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network guarantees to get the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  16. Robust fault detection of wind energy conversion systems based on dynamic neural networks.

    PubMed

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model that emulates the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, utilizing a comprehensive dynamic model that contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is proposed. The presented FDS detects faults of the generator's angular-velocity sensor, the pitch-angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme detects the faults promptly and has very low false-alarm and missed-alarm rates.
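    The residual-plus-adaptive-threshold logic can be sketched as follows; the residual signal, window length, and threshold rule are illustrative assumptions, not the paper's WECS model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Residual between plant output and its neural-model prediction (synthetic):
# zero-mean noise, with a sensor fault (a bias) injected from t = 600.
T = 1000
residual = 0.1 * rng.standard_normal(T)
residual[600:] += 1.5

# Adaptive threshold: mean + k*sigma over the recent fault-free samples,
# so the threshold tracks slow changes in operating conditions.
win, k_sigma = 100, 5.0
alarms = np.zeros(T, dtype=bool)
thr = np.inf
for i in range(win, T):
    ref = residual[i - win:i][~alarms[i - win:i]]  # exclude flagged samples
    if ref.size >= 10:                             # enough fault-free history
        thr = np.abs(ref).mean() + k_sigma * ref.std()
    alarms[i] = abs(residual[i]) > thr
```

    Adapting the threshold to recent fault-free behaviour is what keeps the false-alarm rate low while preserving sensitivity to genuine faults.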

  17. Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks

    PubMed Central

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model that emulates the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, utilizing a comprehensive dynamic model that contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is proposed. The presented FDS detects faults of the generator's angular-velocity sensor, the pitch-angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme detects the faults promptly and has very low false-alarm and missed-alarm rates. PMID:24744774

  18. Soft tissue deformation modelling through neural dynamics-based reaction-diffusion mechanics.

    PubMed

    Zhang, Jinao; Zhong, Yongmin; Gu, Chengfan

    2018-05-30

    Soft tissue deformation modelling forms the basis of development of surgical simulation, surgical planning and robotic-assisted minimally invasive surgery. This paper presents a new methodology for modelling of soft tissue deformation based on reaction-diffusion mechanics via neural dynamics. The potential energy stored in soft tissues due to a mechanical load to deform tissues away from their rest state is treated as the equivalent transmembrane potential energy, and it is distributed in the tissue masses in the manner of reaction-diffusion propagation of nonlinear electrical waves. The reaction-diffusion propagation of mechanical potential energy and nonrigid mechanics of motion are combined to model soft tissue deformation and its dynamics, both of which are further formulated as the dynamics of cellular neural networks to achieve real-time computational performance. The proposed methodology is implemented with a haptic device for interactive soft tissue deformation with force feedback. Experimental results demonstrate that the proposed methodology exhibits a nonlinear force-displacement relationship for nonlinear soft tissue deformation. Homogeneous, anisotropic and heterogeneous soft tissue material properties can be modelled through the inherent physical properties of mass points. Graphical abstract: Soft tissue deformation modelling with haptic feedback via neural dynamics-based reaction-diffusion mechanics.

  19. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  20. The Roles and Regulation of Polycomb Complexes in Neural Development

    PubMed Central

    Corley, Matthew; Kroll, Kristen L.

    2014-01-01

    In the developing mammalian nervous system, common progenitors integrate both cell extrinsic and intrinsic regulatory programs to produce distinct neuronal and glial cell types as development proceeds. This spatiotemporal restriction of neural progenitor differentiation is enforced, in part, by the dynamic reorganization of chromatin into repressive domains by Polycomb Repressive Complexes, effectively limiting the expression of fate-determining genes. Here, we review distinct roles that the Polycomb Repressive Complexes play during neurogenesis and gliogenesis, while also highlighting recent work describing the molecular mechanisms that govern their dynamic activity in neural development. Further investigation of how Polycomb complexes are regulated in neural development will enable more precise manipulation of neural progenitor differentiation, facilitating the efficient generation of specific neuronal and glial cell types for many biological applications. PMID:25367430

  1. Chaos control of the brushless direct current motor using adaptive dynamic surface control based on neural network with the minimum weights.

    PubMed

    Luo, Shaohua; Wu, Songli; Gao, Ruizhen

    2015-07-01

    This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation.
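    The role of the RBF network, approximating an unknown nonlinearity from samples, can be sketched as below; the target function, centres, and batch least-squares fit are illustrative assumptions (the paper adapts the weights online through a Lyapunov-based adaptive law):

```python
import numpy as np

rng = np.random.default_rng(4)

# Unknown nonlinearity (stands in for the unknown dynamics term that the
# controller must compensate).
def f(x):
    return np.sin(3 * x) + 0.5 * x ** 2

# RBF network: fixed Gaussian centres and width; only the output weights
# are fitted. Batch least squares is used here for illustration.
centres = np.linspace(-2.0, 2.0, 15)
width = 0.4

def phi(x):
    x = np.atleast_1d(x)
    return np.exp(-((x[:, None] - centres) ** 2) / (2.0 * width ** 2))

x_train = rng.uniform(-2.0, 2.0, 400)
W = np.linalg.lstsq(phi(x_train), f(x_train), rcond=None)[0]

x_test = np.linspace(-1.8, 1.8, 50)
err = np.max(np.abs(phi(x_test) @ W - f(x_test)))
```

    Because the approximation is linear in the output weights W, a stability proof only has to bound the weight-update dynamics, which is what makes RBF networks popular in adaptive control.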

  2. Chaos control of the brushless direct current motor using adaptive dynamic surface control based on neural network with the minimum weights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Shaohua; Department of Mechanical Engineering, Chongqing Aerospace Polytechnic, Chongqing, 400021; Wu, Songli

    2015-07-15

    This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation.

  3. Back-propagation learning of infinite-dimensional dynamical systems.

    PubMed

    Tokuda, Isao; Tokunaga, Ryuji; Aihara, Kazuyuki

    2003-10-01

    This paper presents numerical studies of applying back-propagation learning to a delayed recurrent neural network (DRNN). The DRNN is a continuous-time recurrent neural network having time delayed feedbacks and the back-propagation learning is to teach spatio-temporal dynamics to the DRNN. Since the time-delays make the dynamics of the DRNN infinite-dimensional, the learning algorithm and the learning capability of the DRNN are different from those of the ordinary recurrent neural network (ORNN) having no time-delays. First, two types of learning algorithms are developed for a class of DRNNs. Then, using chaotic signals generated from the Mackey-Glass equation and the Rössler equations, learning capability of the DRNN is examined. Comparing the learning algorithms, learning capability, and robustness against noise of the DRNN with those of the ORNN and time delay neural network, advantages as well as disadvantages of the DRNN are investigated.
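    The Mackey-Glass benchmark signal used to test the DRNN can be generated with a simple Euler scheme for the delay equation (the step size and transient length below are illustrative choices):

```python
import numpy as np

# Euler integration of the Mackey-Glass delay equation
#   dx/dt = beta * x(t - tau) / (1 + x(t - tau)**10) - gamma * x(t),
# the chaotic benchmark the DRNN is trained to reproduce.
beta, gamma, tau, dt = 0.2, 0.1, 17.0, 0.1
d = int(tau / dt)               # delay in integration steps
n = 20000
x = np.full(n, 1.2)             # constant initial history
for i in range(d, n - 1):
    x[i + 1] = x[i] + dt * (beta * x[i - d] / (1.0 + x[i - d] ** 10)
                            - gamma * x[i])
series = x[::10][500:]          # subsample to unit steps, drop the transient
```

    The explicit time delay tau is what makes the system (and hence the DRNN that must learn it) infinite-dimensional: the state is the whole history segment of length tau, not a finite vector.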

  4. Within- and across-trial dynamics of human EEG reveal cooperative interplay between reinforcement learning and working memory.

    PubMed

    Collins, Anne G E; Frank, Michael J

    2018-03-06

    Learning from rewards and punishments is essential to survival and facilitates flexible human behavior. It is widely appreciated that multiple cognitive and reinforcement learning systems contribute to decision-making, but the nature of their interactions is elusive. Here, we leverage methods for extracting trial-by-trial indices of reinforcement learning (RL) and working memory (WM) in human electro-encephalography to reveal single-trial computations beyond that afforded by behavior alone. Neural dynamics confirmed that increases in neural expectation were predictive of reduced neural surprise in the following feedback period, supporting central tenets of RL models. Within- and cross-trial dynamics revealed a cooperative interplay between systems for learning, in which WM contributes expectations to guide RL, despite competition between systems during choice. Together, these results provide a deeper understanding of how multiple neural systems interact for learning and decision-making and facilitate analysis of their disruption in clinical populations.

  5. Dynamics of individual perceptual decisions

    PubMed Central

    Clark, Torin K.; Lu, Yue M.; Karmali, Faisal

    2015-01-01

    Perceptual decision making is fundamental to a broad range of fields including neurophysiology, economics, medicine, advertising, law, etc. Although recent findings have yielded major advances in our understanding of perceptual decision making, decision making as a function of time and frequency (i.e., decision-making dynamics) is not well understood. To limit the review length, we focus most of this review on human findings. Animal findings, which are extensively reviewed elsewhere, are included when beneficial or necessary. We attempt to put these various findings and data sets, which can appear to be unrelated in the absence of a formal dynamic analysis, into context using published models. Specifically, by adding appropriate dynamic mechanisms (e.g., high-pass filters) to existing models, it appears that a number of otherwise seemingly disparate findings from the literature might be explained. One hypothesis that arises through this dynamic analysis is that decision making includes phasic (high pass) neural mechanisms, an evidence accumulator and/or some sort of midtrial decision-making mechanism (e.g., peak detector and/or decision boundary). PMID:26467513
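    A minimal drift-diffusion sketch of the evidence-accumulator component discussed above (the drift, noise, and bound values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal drift-diffusion ("evidence accumulator") model of one perceptual
# decision: noisy evidence integrates until it hits a decision bound.
def simulate_trial(drift=0.1, noise=0.5, bound=5.0, dt=0.01, tmax=200.0):
    acc, t = 0.0, 0.0
    while t < tmax:
        acc += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(acc) >= bound:
            return t, acc > 0       # decision time and choice
    return tmax, acc > 0            # no decision before the deadline

times, choices = zip(*(simulate_trial() for _ in range(200)))
```

    The phasic (high-pass) front ends and mid-trial peak detectors hypothesized in the review would be added as filters on the evidence stream before the integration step.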

  6. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  7. Impact of leakage delay on bifurcation in high-order fractional BAM neural networks.

    PubMed

    Huang, Chengdai; Cao, Jinde

    2018-02-01

    The effects of leakage delay on the dynamics of integer-order neural networks have lately received considerable attention. It has been confirmed that fractional neural networks more appropriately uncover the dynamical properties of neural networks, but results on fractional neural networks with leakage delay are relatively few. This paper concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay. A first attempt is made to tackle the stability and bifurcation of high-order fractional BAM neural networks with time delay in the leakage terms. The conditions for the appearance of bifurcation in the proposed systems with leakage delay are first established by adopting the time delay as a bifurcation parameter. Then, the bifurcation criteria for such a system without leakage delay are acquired. Comparative analysis reveals that the stability of the proposed high-order fractional neural networks is critically weakened by leakage delay, which therefore cannot be overlooked. Numerical examples are ultimately exhibited to attest to the efficiency of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Adaptive Calibration of Dynamic Accommodation—Implications for Accommodating Intraocular Lenses

    PubMed Central

    Schor, Clifton M.; Bharadwaj, Shrikant R.

    2009-01-01

    PURPOSE When the aging lens is replaced with prosthetic accommodating intraocular lenses (IOLs), with effective viscoelasticities different from those of the natural lens, mismatches could arise between the neural control of accommodation and the biomechanical properties of the new lens. These mismatches could lead to either unstable oscillations or sluggishness of dynamic accommodation. Using computer simulations, we investigated whether optimal accommodative responses could be restored through recalibration of the neural control of accommodation. Using human experiments, we also investigated whether the accommodative system has the capacity for adaptive recalibration in response to changes in lens biomechanics. METHODS Dynamic performance of two accommodating IOL prototypes was simulated for a 45-year-old accommodative system, before and after neural recalibration, using a dynamic model of accommodation. Accommodating IOL I, a prototype for an injectable accommodating IOL, was less stiff and less viscous than the natural 45-year-old lens. Accommodating IOL II, a prototype for a translating accommodating IOL, was less stiff and more viscous than the natural 45-year-old lens. Short-term adaptive recalibration of dynamic accommodation was stimulated using a double-step adaptation paradigm that optically induced changes in neuromuscular effort mimicking responses to changes in lens biomechanics. RESULTS Model simulations indicate that the unstable oscillations or sluggishness of dynamic accommodation resulting from mismatches between neural control and lens biomechanics might be restored through neural recalibration. CONCLUSIONS Empirical measures reveal that the accommodative system is capable of adaptive recalibration in response to optical loads that simulate effects of changing lens biomechanics. PMID:19044245

  9. Feedback control stabilization of critical dynamics via resource transport on multilayer networks: How glia enable learning dynamics in the brain

    NASA Astrophysics Data System (ADS)

    Virkar, Yogesh S.; Shew, Woodrow L.; Restrepo, Juan G.; Ott, Edward

    2016-10-01

    Learning and memory are acquired through long-lasting changes in synapses. In the simplest models, such synaptic potentiation typically leads to runaway excitation, but in reality there must exist processes that robustly preserve overall stability of the neural system dynamics. How is this accomplished? Various approaches to this basic question have been considered. Here we propose a particularly compelling and natural mechanism for preserving stability of learning neural systems. This mechanism is based on the global processes by which metabolic resources are distributed to the neurons by glial cells. Specifically, we introduce and study a model composed of two interacting networks: a model neural network interconnected by synapses that undergo spike-timing-dependent plasticity; and a model glial network interconnected by gap junctions that diffusively transport metabolic resources among the glia and, ultimately, to neural synapses where they are consumed. Our main result is that the biophysical constraints imposed by diffusive transport of metabolic resources through the glial network can prevent runaway growth of synaptic strength, both during ongoing activity and during learning. Our findings suggest a previously unappreciated role for glial transport of metabolites in the feedback control stabilization of neural network dynamics during learning.

  10. Large Deviations for Nonlocal Stochastic Neural Fields

    PubMed Central

    2014-01-01

    We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297
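    A sketch of the Galerkin/spectral idea: an Amari-type field with truncated-mode (Q-Wiener-like) noise, integrated by Euler-Maruyama. The kernel, nonlinearity, and decaying noise spectrum are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(6)

# Spectral (Galerkin-style) discretization of a stochastic Amari-type field
#   du/dt = -u + w * S(u) + noise
# on [0, 2*pi) with periodic boundary; '*' is spatial convolution.
N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
w = np.cos(x)                        # lateral connectivity kernel
S = np.tanh                          # firing-rate nonlinearity
w_hat = np.fft.fft(w) * (2 * np.pi / N)   # kernel transform (integral scaling)

dt, steps, eps, modes = 0.01, 2000, 0.02, 8
u = 0.5 * np.cos(x)                  # initial bump
for _ in range(steps):
    conv = np.real(np.fft.ifft(w_hat * np.fft.fft(S(u))))
    # truncated-mode noise with decaying amplitudes (trace-class covariance)
    xi = sum(rng.standard_normal() / (k + 1) * np.cos(k * x)
             for k in range(modes))
    u = u + dt * (-u + conv) + np.sqrt(dt) * eps * xi
```

    Truncating both the field and the noise to finitely many modes is what turns the infinite-dimensional problem into one where a rate function for the large deviation principle can be computed in practice.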

  11. Dynamic Organization of Hierarchical Memories

    PubMed Central

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2016-01-01

    In the brain, external objects are categorized in a hierarchical way. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential to understanding how a neural system responds to inputs. Indeed, structured spontaneous neural activity without external inputs is known to exist, and its relationship with evoked activities is debated. How categorical representation is embedded in the spontaneous and evoked activities therefore remains to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a “dynamic categorization”: neural activity without input wanders globally over the state space, visiting all memories. With increasing input strength, the diffuse representation of a higher category transitions to focused representations specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, neural activity wanders over a narrower state space including a smaller set of memories, showing a more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies hierarchy during the transient process, as well as in the spontaneous activity. PMID:27618549

  12. Sensory noise predicts divisive reshaping of receptive fields

    PubMed Central

    Deneve, Sophie; Gutkin, Boris

    2017-01-01

    In order to respond reliably to specific features of their environment, sensory neurons need to integrate multiple incoming noisy signals. Crucially, they also need to compete for the interpretation of those signals with other neurons representing similar features. The form that this competition should take depends critically on the noise corrupting these signals. In this study we show that for the type of noise commonly observed in sensory systems, whose variance scales with the mean signal, sensory neurons should selectively divide their input signals by their predictions, suppressing ambiguous cues while amplifying others. Any change in the stimulus context alters which inputs are suppressed, leading to a deep dynamic reshaping of neural receptive fields going far beyond simple surround suppression. Paradoxically, these highly variable receptive fields go alongside and are in fact required for an invariant representation of external sensory features. In addition to offering a normative account of context-dependent changes in sensory responses, perceptual inference in the presence of signal-dependent noise accounts for ubiquitous features of sensory neurons such as divisive normalization, gain control and contrast dependent temporal dynamics. PMID:28622330

  13. Sensory noise predicts divisive reshaping of receptive fields.

    PubMed

    Chalk, Matthew; Masset, Paul; Deneve, Sophie; Gutkin, Boris

    2017-06-01

    In order to respond reliably to specific features of their environment, sensory neurons need to integrate multiple incoming noisy signals. Crucially, they also need to compete for the interpretation of those signals with other neurons representing similar features. The form that this competition should take depends critically on the noise corrupting these signals. In this study we show that for the type of noise commonly observed in sensory systems, whose variance scales with the mean signal, sensory neurons should selectively divide their input signals by their predictions, suppressing ambiguous cues while amplifying others. Any change in the stimulus context alters which inputs are suppressed, leading to a deep dynamic reshaping of neural receptive fields going far beyond simple surround suppression. Paradoxically, these highly variable receptive fields go alongside and are in fact required for an invariant representation of external sensory features. In addition to offering a normative account of context-dependent changes in sensory responses, perceptual inference in the presence of signal-dependent noise accounts for ubiquitous features of sensory neurons such as divisive normalization, gain control and contrast dependent temporal dynamics.
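
    The divisive competition described above can be sketched in a toy form: each response is the shared feed-forward input divided by a context-dependent prediction term, so which cue is suppressed flips with the context. The function and all numbers are invented for illustration:

```python
def divisive_reshape(inputs, predictions, sigma=1.0):
    """Divide each input by a prediction-dependent term, so cues that the
    current context already predicts are suppressed while unpredicted
    cues pass through. Toy divisive normalization; parameters made up."""
    return [x / (sigma + p) for x, p in zip(inputs, predictions)]

# The same feed-forward drive under two stimulus contexts with
# different predictions about which cue is expected:
inputs = [2.0, 2.0]
ctx_a = divisive_reshape(inputs, [0.0, 3.0])   # second cue predicted -> suppressed
ctx_b = divisive_reshape(inputs, [3.0, 0.0])   # first cue predicted -> suppressed
```

    Identical inputs thus yield opposite response patterns in the two contexts, a minimal version of the context-dependent receptive-field reshaping the abstract describes.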

  14. Spatio-Temporal Fluctuations of Neural Dynamics in Mild Cognitive Impairment and Alzheimer's Disease.

    PubMed

    Poza, Jesús; Gómez, Carlos; García, María; Tola-Arribas, Miguel A; Carreres, Alicia; Cano, Mónica; Hornero, Roberto

    2017-01-01

    An accurate characterization of neural dynamics in mild cognitive impairment (MCI) is of paramount importance to gain further insights into the underlying neural mechanisms in Alzheimer's disease (AD). Nevertheless, there has been relatively little research on brain dynamics in prodromal AD. As a consequence, its neural substrates remain unclear. In the present research, electroencephalographic (EEG) recordings from patients with dementia due to AD, subjects with MCI due to AD and healthy controls (HC) were analyzed using relative power (RP) in conventional EEG frequency bands and a novel parameter useful to explore the spatio-temporal fluctuations of neural dynamics: the spectral flux (SF). Our results suggest that dementia due to AD is associated with a significant slowing of EEG activity and several significant alterations in spectral fluctuations at low (i.e. theta) and high (i.e. beta and gamma) frequency bands compared to HC (p < 0.05). Furthermore, subjects with MCI due to AD exhibited a specific frequency-dependent pattern of spatio-temporal abnormalities, which can help identify neural mechanisms involved in cognitive impairment preceding AD. Classification analyses using linear discriminant analysis with a leave-one-out cross-validation procedure showed that the combination of RP and within-electrode SF at the beta band discriminated between HC and AD patients with an accuracy of 77.3 %. In the comparison between HC and MCI subjects, the classification accuracy reached 79.2 % when combining within-electrode SF at the beta and gamma bands. SF has proven to be a useful measure for obtaining an original description of brain dynamics at different stages of AD. Consequently, SF may contribute to a more comprehensive understanding of the neural substrates underlying MCI, as well as to the development of potential early AD biomarkers. Copyright © Bentham Science Publishers.
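
    Relative power in a band and a flux-style measure of spectral change can be computed directly from short windows. Note that the paper's exact definition of spectral flux may differ from the simple normalized-spectrum difference sketched here, and the signals below are synthetic sinusoids rather than EEG:

```python
import cmath
import math

def power_spectrum(x):
    """Naive DFT periodogram (adequate for short illustrative windows)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n for k in range(n // 2)]

def relative_power(x, band, fs):
    """Fraction of total spectral power inside band = (lo, hi) in Hz."""
    ps = power_spectrum(x)
    freqs = [k * fs / len(x) for k in range(len(ps))]
    total = sum(ps) or 1.0
    return sum(p for f, p in zip(freqs, ps) if band[0] <= f < band[1]) / total

def spectral_flux(x, y):
    """Change of the normalized spectrum between two consecutive windows."""
    a, b = power_spectrum(x), power_spectrum(y)
    sa, sb = sum(a) or 1.0, sum(b) or 1.0
    return sum(abs(p / sa - q / sb) for p, q in zip(a, b))

fs = 128
t = [i / fs for i in range(fs)]
theta = [math.sin(2 * math.pi * 6 * ti) for ti in t]    # 6 Hz "theta" window
beta = [math.sin(2 * math.pi * 20 * ti) for ti in t]    # 20 Hz "beta" window
```

    A window whose dominant rhythm shifts between frames yields a high flux, while a stationary rhythm yields a flux near zero.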

  15. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  16. Hopf bifurcation in a nonlocal nonlinear transport equation stemming from stochastic neural dynamics

    NASA Astrophysics Data System (ADS)

    Drogoul, Audric; Veltz, Romain

    2017-02-01

    In this work, we provide three different numerical evidences for the occurrence of a Hopf bifurcation in a recently derived [De Masi et al., J. Stat. Phys. 158, 866-902 (2015) and Fournier and Löcherbach, Ann. Inst. H. Poincaré Probab. Stat. 52, 1844-1876 (2016)] mean field limit of a stochastic network of excitatory spiking neurons. The mean field limit is a challenging nonlocal nonlinear transport equation with boundary conditions. The first evidence relies on the computation of the spectrum of the linearized equation. The second stems from the simulation of the full mean field. Finally, the last evidence comes from the simulation of the network for a large number of neurons. We provide a "recipe" to find such a bifurcation, which nicely complements the works in De Masi et al. [J. Stat. Phys. 158, 866-902 (2015)] and Fournier and Löcherbach [Ann. Inst. H. Poincaré Probab. Stat. 52, 1844-1876 (2016)]. This in turn suggests revisiting these mean field equations theoretically from a dynamical point of view. Finally, this work shows how the noise level impacts the transition from asynchronous activity to partial synchronization in excitatory globally pulse-coupled networks.
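
    The spectral signature the first evidence looks for (a complex-conjugate eigenvalue pair of the linearization crossing the imaginary axis as a parameter varies) can be illustrated on a toy planar normal form; for the paper's transport equation the spectrum must of course be computed numerically, so everything below is a made-up stand-in:

```python
import cmath

def jacobian_eigs(mu, omega=1.0):
    """Eigenvalues of the Jacobian [[mu, -omega], [omega, mu]] of a toy
    planar normal form at its equilibrium: mu +/- i*omega. A stand-in
    for the numerically computed spectrum of a linearized equation."""
    a, b, c, d = mu, -omega, omega, mu
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

def hopf_crossing(mus, tol=1e-12):
    """First parameter value at which a complex-conjugate pair reaches
    the imaginary axis (Re >= 0 with nonzero Im): the Hopf signature."""
    for mu in mus:
        lam1, _ = jacobian_eigs(mu)
        if lam1.real >= -tol and abs(lam1.imag) > tol:
            return mu
    return None

mu_c = hopf_crossing([-0.2 + 0.05 * k for k in range(9)])   # sweep the parameter
```

    The same sweep-and-inspect loop applies when the "Jacobian" is a large discretization of a linearized operator and the eigenvalues come from a numerical eigensolver.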

  17. Neural Dynamics Associated with Semantic and Episodic Memory for Faces: Evidence from Multiple Frequency Bands

    ERIC Educational Resources Information Center

    Zion-Golumbic, Elana; Kutas, Marta; Bentin, Shlomo

    2010-01-01

    Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident…

  18. Mitochondrial dynamics in the regulation of neurogenesis: From development to the adult brain.

    PubMed

    Khacho, Mireille; Slack, Ruth S

    2018-01-01

    Mitochondria are classically known to be the cellular energy producers, but a renewed appreciation for these organelles has developed with the accumulating discoveries of additional functions. The importance of mitochondria within the brain has been long known, particularly given the high-energy demanding nature of neurons. The energy demands imposed by neurons require the well-orchestrated morphological adaptation and distribution of mitochondria. Recent studies now reveal the importance of mitochondrial dynamics not only in mature neurons but also during neural development, particularly during the process of neurogenesis and neural stem cell fate decisions. In this review, we will highlight the recent findings that illustrate the importance of mitochondrial dynamics in neurodevelopment and neural stem cell function. Developmental Dynamics 247:47-53, 2018. © 2017 Wiley Periodicals, Inc.

  19. Resting-State Functional Connectivity Emerges from Structurally and Dynamically Shaped Slow Linear Fluctuations

    PubMed Central

    Deco, Gustavo; Mantini, Dante; Romani, Gian Luca; Hagmann, Patric; Corbetta, Maurizio

    2013-01-01

    Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure–function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure–function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications. PMID:23825427
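
    The picture of FC as structured linear fluctuations around a stable state can be reproduced in miniature: simulate an Ornstein-Uhlenbeck process whose drift is shaped by a toy anatomical coupling matrix and read off FC as the correlation matrix of the fluctuations. The three-node network and all parameters below are invented; the paper derives its reduction from a full spiking model:

```python
import math
import random

def simulate_fc(C, g=0.4, sigma=0.1, dt=0.01, steps=20000, seed=1):
    """Linear (Ornstein-Uhlenbeck) fluctuations around a stable state:
    dx = (-x + g*C*x) dt + sigma dW. FC is the correlation matrix of x.
    Toy stand-in for the paper's moment equations; parameters invented."""
    rng = random.Random(seed)
    n = len(C)
    x = [0.0] * n
    samples = [[] for _ in range(n)]
    for _ in range(steps):
        drift = [-x[i] + g * sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dt * drift[i] + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for i in range(n)]
        for i in range(n):
            samples[i].append(x[i])
    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        ca = [v - ma for v in a]
        cb = [v - mb for v in b]
        return (sum(p * q for p, q in zip(ca, cb)) /
                math.sqrt(sum(p * p for p in ca) * sum(q * q for q in cb)))
    return [[corr(samples[i], samples[j]) for j in range(n)] for i in range(n)]

# Nodes 0 and 1 are anatomically coupled; node 2 is isolated:
SC = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
FC = simulate_fc(SC)
```

    Noise propagated through the anatomical coupling correlates the coupled pair while the isolated node stays uncorrelated, the basic structure-to-function mapping the abstract describes.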

  20. Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations.

    PubMed

    Deco, Gustavo; Ponce-Alvarez, Adrián; Mantini, Dante; Romani, Gian Luca; Hagmann, Patric; Corbetta, Maurizio

    2013-07-03

    Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure-function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications.

  1. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    PubMed Central

    Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D.

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTMs) units capable of nowcasting (predicting in “real-time”) and forecasting (predicting the future) ILI dynamics in the 2011 – 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, and for military rather than general populations, in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. 
(c) Neural network models learned exclusively from social media signals yield performance comparable or superior to models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which underscores the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S.-only. (f) Prediction results significantly vary across geolocations depending on the amount of social media data available and ILI activity patterns. (g) Model performance improves with more tweets available per geo-location, e.g., the error gets lower and the Pearson score gets higher for locations with more tweets. PMID:29244814

  2. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    PubMed

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTMs) units capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, and for military rather than general populations, in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. 
(c) Neural network models learned exclusively from social media signals yield performance comparable or superior to models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which underscores the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S.-only. (f) Prediction results significantly vary across geolocations depending on the amount of social media data available and ILI activity patterns. (g) Model performance improves with more tweets available per geo-location, e.g., the error gets lower and the Pearson score gets higher for locations with more tweets.
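
    The LSTM units named above reduce to a small gated recurrence. A single scalar cell is sketched below with hand-picked (untrained) weights, scanning a toy weekly ILI-rate series; real models use learned weight matrices over many units and rich social media features:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One step of a scalar LSTM unit: input (i), forget (f), output (o)
    gates and a candidate cell value (g). W maps gate name to
    (w_x, w_h, bias); all weights here are illustrative, not trained."""
    gates = {k: W[k][0] * x + W[k][1] * h + W[k][2] for k in ("i", "f", "o", "g")}
    i, f, o = sigmoid(gates["i"]), sigmoid(gates["f"]), sigmoid(gates["o"])
    g = math.tanh(gates["g"])
    c = f * c + i * g          # cell state carries long-range information
    h = o * math.tanh(c)       # hidden state is the unit's output
    return h, c

W = {"i": (1.0, 0.5, 0.0), "f": (0.0, 0.0, 2.0),   # large forget bias ~ "remember"
     "o": (1.0, 0.0, 0.0), "g": (1.0, -0.5, 0.0)}
h = c = 0.0
for x in [0.1, 0.4, 0.9, 0.7]:      # toy weekly ILI-rate signal
    h, c = lstm_step(x, h, c, W)
```

    Nowcasting and forecasting differ only in the training target: the hidden state is regressed onto the current week's ILI rate or onto a future week's.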

  3. Open quantum generalisation of Hopfield neural networks

    NASA Astrophysics Data System (ADS)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.

  4. Finding the beat: a neural perspective across humans and non-human primates.

    PubMed

    Merchant, Hugo; Grahn, Jessica; Trainor, Laurel; Rohrmeier, Martin; Fitch, W Tecumseh

    2015-03-19

    Humans possess an ability to perceive and synchronize movements to the beat in music ('beat perception and synchronization'), and recent neuroscientific data have offered new insights into this beat-finding capacity at multiple neural levels. Here, we review and compare behavioural and neural data on temporal and sequential processing during beat perception and entrainment tasks in macaques (including direct neural recording and local field potential (LFP)) and humans (including fMRI, EEG and MEG). These abilities rest upon a distributed set of circuits that include the motor cortico-basal-ganglia-thalamo-cortical (mCBGT) circuit, where the supplementary motor cortex (SMA) and the putamen are critical cortical and subcortical nodes, respectively. In addition, a cortical loop between motor and auditory areas, connected through delta and beta oscillatory activity, is deeply involved in these behaviours, with motor regions providing the predictive timing needed for the perception of, and entrainment to, musical rhythms. The neural discharge rate and the LFP oscillatory activity in the gamma- and beta-bands in the putamen and SMA of monkeys are tuned to the duration of intervals produced during a beat synchronization-continuation task (SCT). Hence, the tempo during beat synchronization is represented by different interval-tuned cells that are activated depending on the produced interval. In addition, cells in these areas are tuned to the serial-order elements of the SCT. Thus, the underpinnings of beat synchronization are intrinsically linked to the dynamics of cell populations tuned for duration and serial order throughout the mCBGT. We suggest that a cross-species comparison of behaviours and the neural circuits supporting them sets the stage for a new generation of neurally grounded computational models for beat perception and synchronization. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  5. Finding the beat: a neural perspective across humans and non-human primates

    PubMed Central

    Merchant, Hugo; Grahn, Jessica; Trainor, Laurel; Rohrmeier, Martin; Fitch, W. Tecumseh

    2015-01-01

    Humans possess an ability to perceive and synchronize movements to the beat in music (‘beat perception and synchronization’), and recent neuroscientific data have offered new insights into this beat-finding capacity at multiple neural levels. Here, we review and compare behavioural and neural data on temporal and sequential processing during beat perception and entrainment tasks in macaques (including direct neural recording and local field potential (LFP)) and humans (including fMRI, EEG and MEG). These abilities rest upon a distributed set of circuits that include the motor cortico-basal-ganglia–thalamo-cortical (mCBGT) circuit, where the supplementary motor cortex (SMA) and the putamen are critical cortical and subcortical nodes, respectively. In addition, a cortical loop between motor and auditory areas, connected through delta and beta oscillatory activity, is deeply involved in these behaviours, with motor regions providing the predictive timing needed for the perception of, and entrainment to, musical rhythms. The neural discharge rate and the LFP oscillatory activity in the gamma- and beta-bands in the putamen and SMA of monkeys are tuned to the duration of intervals produced during a beat synchronization–continuation task (SCT). Hence, the tempo during beat synchronization is represented by different interval-tuned cells that are activated depending on the produced interval. In addition, cells in these areas are tuned to the serial-order elements of the SCT. Thus, the underpinnings of beat synchronization are intrinsically linked to the dynamics of cell populations tuned for duration and serial order throughout the mCBGT. We suggest that a cross-species comparison of behaviours and the neural circuits supporting them sets the stage for a new generation of neurally grounded computational models for beat perception and synchronization. PMID:25646516

  6. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    DOE PAGES

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; ...

    2017-12-15

    This work is the first to take advantage of recurrent neural networks to predict influenza-like-illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data [1, 2] and the state-of-the-art machine learning models [3, 4], we build and evaluate the predictive power of Long Short Term Memory (LSTMs) architectures capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, stylistic and syntactic patterns, emotions and opinions, and communication behavior. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks. Finally, we combine ILI and social media signals to build joint neural network models for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance [1], and for military rather than general populations [3], in 26 U.S. and six international locations. Our approach demonstrates several advantages: (a) Neural network models learned from social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than syntactic and stylistic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield performance comparable or superior to models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. 
(d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which underscores the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S.-only. (f) Prediction results significantly vary across geolocations depending on the amount of social media data available and ILI activity patterns.

  7. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine

    This work is the first to take advantage of recurrent neural networks to predict influenza-like-illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data [1, 2] and the state-of-the-art machine learning models [3, 4], we build and evaluate the predictive power of Long Short Term Memory (LSTMs) architectures capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, stylistic and syntactic patterns, emotions and opinions, and communication behavior. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks. Finally, we combine ILI and social media signals to build joint neural network models for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance [1], and for military rather than general populations [3], in 26 U.S. and six international locations. Our approach demonstrates several advantages: (a) Neural network models learned from social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than syntactic and stylistic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield performance comparable or superior to models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. 
(d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which underscores the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S.-only. (f) Prediction results significantly vary across geolocations depending on the amount of social media data available and ILI activity patterns.

  8. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi

    2016-08-01

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within that system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. 
We developed a decoding algorithm that can make decisions in real time (for example, whether or not to stimulate the neurons) based on various sources of information present in population spiking data. Lastly, we proposed a general three-step paradigm that allows us to relate behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas, which is a step towards closed-loop therapies for psychological diseases using real-time neural stimulation. These methods are suitable for real-time implementation for content-based feedback experiments.
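
    The point-process framework above can be illustrated with a minimal discrete-time model: a conditional intensity that depends on recent spiking history, scored with the standard Poisson-style log-likelihood. The spike train and parameters below are invented for illustration; the thesis's models (marked point processes, state-space decoders) are far richer:

```python
import math

def conditional_intensity(beta0, beta1, history, t, tau=5):
    """lambda(t) = exp(beta0 + beta1 * (# spikes in the last tau bins)).
    A minimal history-dependent point-process model; parameters invented."""
    recent = sum(1 for s in history if t - tau <= s < t)
    return math.exp(beta0 + beta1 * recent)

def log_likelihood(spikes, n_bins, beta0, beta1):
    """Discrete-time point-process log-likelihood:
    sum over bins of y_t * log(lambda_t) - lambda_t, with y_t in {0, 1}."""
    spike_set = set(spikes)
    ll = 0.0
    for t in range(n_bins):
        lam = conditional_intensity(beta0, beta1, spikes, t)
        y = 1 if t in spike_set else 0
        ll += y * math.log(lam) - lam
    return ll

spikes = [3, 4, 5, 6, 7, 8]           # sustained toy burst (bin indices)
ll_bursty = log_likelihood(spikes, 20, beta0=-2.0, beta1=0.5)
ll_independent = log_likelihood(spikes, 20, beta0=-2.0, beta1=0.0)
```

    Comparing the two log-likelihoods shows how a history term lets the model explain bursty spiking that a homogeneous Poisson model cannot.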

  9. Prediction of rainfall anomalies during the dry to wet transition season over the Southern Amazonia using machine learning tools

    NASA Astrophysics Data System (ADS)

    Shan, X.; Zhang, K.; Zhuang, Y.; Fu, R.; Hong, Y.

    2017-12-01

    Seasonal prediction of rainfall during the dry-to-wet transition season in austral spring (September-November) over southern Amazonia is central to improving crop planting and fire mitigation in that region. Previous studies have identified the key large-scale atmospheric dynamic and thermodynamic preconditions during the dry season (June-August) that influence the rainfall anomalies during the dry-to-wet transition season over southern Amazonia. Based on these key preconditions during the dry season, we have evaluated several statistical models and developed a neural-network-based statistical prediction system to predict rainfall during the dry-to-wet transition for southern Amazonia (5-15°S, 50-70°W). Multivariate Empirical Orthogonal Function (EOF) analysis is applied to the following four fields during JJA from the ECMWF Reanalysis (ERA-Interim) spanning 1979 to 2015: geopotential height at 200 hPa, surface relative humidity, convective inhibition (CIN) index and convective available potential energy (CAPE), to filter out noise and highlight the most coherent spatial and temporal variations. The first 10 EOF modes, accounting for at least 70% of the total variance in the predictor fields, are retained as inputs to the statistical models. We have tested several linear and non-linear statistical methods. While regularized ridge regression and lasso regression can generally capture the spatial pattern and magnitude of rainfall anomalies, we found that the neural network performs best, with an accuracy greater than 80%, as expected from the non-linear dependence of rainfall on the large-scale atmospheric thermodynamic conditions and circulation. Further tests of various prediction skill metrics and hindcasts also suggest that this neural network approach significantly improves seasonal prediction skill over dynamical predictions and regression-based statistical predictions.
Thus, this statistical prediction system shows potential to improve real-time seasonal rainfall predictions in the future.
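    A toy sketch of the pipeline described above (synthetic data standing in for the ERA-Interim fields; the network size and learning rate are invented): EOF analysis is PCA of the predictor anomalies, and the retained principal components feed a small neural-network regression:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 37 years of JJA predictor fields on 500 grid
# points; three hidden "climate modes" drive both fields and rainfall.
years, gridpts = 37, 500
modes = rng.normal(size=(years, 3))
loadings = rng.normal(size=(3, gridpts))
X = modes @ loadings + rng.normal(size=(years, gridpts))
y = np.tanh(modes.sum(axis=1)) + 0.1 * rng.normal(size=years)  # rainfall anomaly

# EOF analysis = PCA via SVD of the centered anomalies; keep 10 modes.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = U[:, :10] * S[:10]
pcs = pcs / pcs.std(axis=0)            # standardize the predictors

# One-hidden-layer network, full-batch gradient descent on squared error.
W1 = rng.normal(0.0, 0.1, (10, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.1, (8, 1));  b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(pcs @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    gW2 = h.T @ err[:, None] / years
    gb2 = err.mean()
    dh = (err[:, None] @ W2.T) * (1.0 - h ** 2)
    gW1 = pcs.T @ dh / years
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((pred - y) ** 2))
print(mse)   # training error of the EOF + neural-network regression
```

    Real use would of course validate out-of-sample (as the hindcast tests above do) rather than report training error.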

  10. Chaotic itinerancy in the oscillator neural network without Lyapunov functions.

    PubMed

    Uchiyama, Satoki; Fujisaka, Hirokazu

    2004-09-01

    Chaotic itinerancy (CI), which is defined as an incessant spontaneous switching phenomenon among attractor ruins in deterministic dynamical systems without Lyapunov functions, is numerically studied in the case of an oscillator neural network model. The model is the pseudoinverse-matrix version of the previous model [S. Uchiyama and H. Fujisaka, Phys. Rev. E 65, 061912 (2002)] that was studied theoretically with the aid of statistical neurodynamics. It is found that CI in neural nets can be understood as the intermittent dynamics of weakly destabilized chaotic retrieval solutions. Copyright 2004 American Institute of Physics

  11. A spiking neural integrator model of the adaptive control of action by the medial prefrontal cortex.

    PubMed

    Bekolay, Trevor; Laubach, Mark; Eliasmith, Chris

    2014-01-29

    Subjects performing simple reaction-time tasks can improve reaction times by learning the expected timing of action-imperative stimuli and preparing movements in advance. Success or failure on the previous trial is often an important factor in determining whether a subject will attempt to time the stimulus or wait for it to occur before initiating action. The medial prefrontal cortex (mPFC) has been implicated in enabling the top-down control of action depending on the outcome of the previous trial. Analysis of spike activity from the rat mPFC suggests that neural integration is a key mechanism for adaptive control in precisely timed tasks. We show through simulation that a spiking neural network consisting of coupled neural integrators captures the neural dynamics of the experimentally recorded mPFC. Errors lead to deviations in the normal dynamics of the system, a process that could enable learning from past mistakes. We expand on this coupled integrator network to construct a spiking neural network that performs a reaction-time task by following either a cue-response or timing strategy, and show that it performs the task with reaction times similar to those of experimental subjects while maintaining the same spiking dynamics as the experimentally recorded mPFC.

  12. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks

    PubMed Central

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-01-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. 
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608

  13. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    PubMed

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-08-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. 
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.

  14. EDITORIAL: The present and future

    NASA Astrophysics Data System (ADS)

    Durand, Dominique M.

    2006-09-01

    Neural engineering has grown substantially in the last few years and it is time to review the progress of the first journal in this field. Journal of Neural Engineering (JNE) is a quarterly publication that started in 2004. The journal is now in its third volume and eleven issues, consisting of 114 articles in total, have been published since its launch. The editorial processing times have been kept to a minimum: the receipt-to-first-decision time is 41 days, on average, and the time from receipt to publication has been maintained below three months. It is also worth noting that it is free to publish in Journal of Neural Engineering—there are no author fees—and once published the articles are free online for the first month. The journal has been listed in PubMed® since 2005 and was accepted by ISI® in 2006. Who is reading Journal of Neural Engineering? The number of readers of JNE has increased significantly from 8050 full-text downloads in 2004 to 14 900 in 2005, and the first seven months of 2006 have already seen 12 800 downloads. The top users in 2005 were the Microsoft Corporation, Stanford University and the University of Michigan. The list of top ten users also includes non-US institutions: University of Toronto, University of Tokyo, Hong Kong Polytechnic, National Library of China and University College London, reflecting the international flavor of the journal. What are the hot topics in neural engineering? Based on the number of downloads and citations for 2004-2005, the top three topics are: (1) brain-computer interfaces, (2) visual prostheses and (3) neural modelling. Several other topics such as microelectrode arrays, neural signal processing, neural dynamics and neural circuit engineering are also in the top ten. Where are Journal of Neural Engineering articles cited?
JNE articles have reached a wide audience and have been cited in some of the best journals in physiology and neuroscience, such as Nature Neuroscience, Journal of Neuroscience, Trends in Neurosciences, Journal of Physiology and Proceedings of the National Academy of Sciences, as well as in engineering and physics journals such as Annals of Biomedical Engineering, Physical Review Letters and IEEE Transactions on Biomedical Engineering. However, the number of citations in clinical journals is limited. What is special about Journal of Neural Engineering? JNE has published two special issues: (1) The Eye and the Chip (visual prostheses) (vol. 2, (1), 2005) and (2) Sensory Integration: Role of Internal Models (vol. 2, (3), 2005). These special issues have attracted a lot of attention based on the number of article downloads. JNE also publishes tutorials intended to provide background information on specific topics such as classification, sensory substitution and cortical neural prosthetics. A series of tutorials from the 3rd Neuro-IT and Neuroengineering Summer School has been published, with the first appearing in vol. 2 (4), 2005. What is in the future for Journal of Neural Engineering? The goal of any journal should be to provide a particular field with the best venue for scientists and engineers to make their work available and noticeable to the rest of the community. In particular, attracting a strong readership base and high quality manuscripts should be the first priority. Providing accurate, reliable and speedy reviews should be the next. With an international board of experts in the field of neural engineering, a solid base of reviewers, readers and contributors, JNE is in a strong position to continue to serve the neural engineering community. However, this is still a small community and growth is essential for continued success in this area.
There are two areas of expansion of great interest for the field of neural engineering currently poised between basic science on one hand and clinical implementation on the other: translational neuroscience and therapeutic neural engineering. We should strive to bridge the gap between basic neuroscience, clinical science and engineering by attracting contributions from neuroscientists and clinicians with an interest in neural engineering. I urge members of the neural engineering community to encourage their colleagues in these areas to consider JNE for publication of those manuscripts at the interface with neuroscience and engineering. I would like to take this opportunity to acknowledge the work of the board members, the reviewers of the articles and the staff at the Institute of Physics Publishing for their contribution to the Journal of Neural Engineering.

  15. Dynamic musical communication of core affect

    PubMed Central

    Flaig, Nicole K.; Large, Edward W.

    2013-01-01

    Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified “scene” that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience. PMID:24672492

  16. Dynamic musical communication of core affect.

    PubMed

    Flaig, Nicole K; Large, Edward W

    2014-01-01

    Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified "scene" that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience.

  17. Good practice for conducting and reporting MEG research

    PubMed Central

    Gross, Joachim; Baillet, Sylvain; Barnes, Gareth R.; Henson, Richard N.; Hillebrand, Arjan; Jensen, Ole; Jerbi, Karim; Litvak, Vladimir; Maess, Burkhard; Oostenveld, Robert; Parkkonen, Lauri; Taylor, Jason R.; van Wassenhove, Virginie; Wibral, Michael; Schoffelen, Jan-Mathijs

    2013-01-01

    Magnetoencephalographic (MEG) recordings are a rich source of information about the neural dynamics underlying cognitive processes in the brain, with excellent temporal and good spatial resolution. In recent years there have been considerable advances in MEG hardware developments and methods. Sophisticated analysis techniques are now routinely applied and continuously improved, leading to fascinating insights into the intricate dynamics of neural processes. However, the rapidly increasing complexity of the different steps in an MEG study makes it difficult for novices, and sometimes even for experts, to stay aware of possible limitations and caveats. Furthermore, the complexity of MEG data acquisition and data analysis requires special attention when describing MEG studies in publications, in order to facilitate interpretation and reproduction of the results. This manuscript aims to make recommendations for a number of important data acquisition and data analysis steps and suggests details that should be specified in manuscripts reporting MEG studies. These recommendations will hopefully serve as guidelines that help to strengthen the position of the MEG research community within the field of neuroscience, and may foster discussion in order to further enhance the quality and impact of MEG research. PMID:23046981

  18. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disney, Adam; Reynolds, John

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  19. A Neural Network Model of the Structure and Dynamics of Human Personality

    ERIC Educational Resources Information Center

    Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.

    2010-01-01

    We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…

  20. A subthreshold aVLSI implementation of the Izhikevich simple neuron model.

    PubMed

    Rangan, Venkat; Ghosh, Abhishek; Aparin, Vladimir; Cauwenberghs, Gert

    2010-01-01

    We present a circuit architecture for compact analog VLSI implementation of the Izhikevich neuron model, which efficiently describes a wide variety of neuron spiking and bursting dynamics using two state variables and four adjustable parameters. Log-domain circuit design utilizing MOS transistors in subthreshold results in high energy efficiency, with less than 1 pJ of energy consumed per spike. We also discuss the effects of parameter variations on the dynamics of the equations, and present simulation results that replicate several types of neural dynamics. The low power operation and compact analog VLSI realization make the architecture suitable for human-machine interface applications in neural prostheses and implantable bioelectronics, as well as large-scale neural emulation tools for computational neuroscience.
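    The underlying model (Izhikevich, 2003) is compact enough to state directly; below is a minimal Euler-integrated sketch with the standard "regular spiking" parameter set (the drive current and step size are illustrative, not the paper's circuit values):

```python
# Izhikevich simple model: v' = 0.04 v^2 + 5 v + 140 - u + I, u' = a(b v - u),
# with reset v -> c, u -> u + d whenever v crosses 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" parameter set
dt, T, I = 0.25, 1000.0, 10.0        # time step (ms), duration (ms), drive

v, u = -65.0, b * -65.0
spike_times = []
t = 0.0
while t < T:
    if v >= 30.0:                    # spike event: reset the two states
        spike_times.append(t)
        v, u = c, u + d
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    t += dt

print(len(spike_times))              # tonic spiking under constant drive
```

    Swapping the four parameters (a, b, c, d) reproduces bursting, chattering and other firing types, which is what makes the model attractive for compact hardware.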

  1. Different dynamic resting state fMRI patterns are linked to different frequencies of neural activity

    PubMed Central

    Thompson, Garth John; Pan, Wen-Ju

    2015-01-01

    Resting state functional magnetic resonance imaging (rsfMRI) results have indicated that network mapping can contribute to understanding behavior and disease, but it has been difficult to translate the maps created with rsfMRI to neuroelectrical states in the brain. Recently, dynamic analyses have revealed multiple patterns in the rsfMRI signal that are strongly associated with particular bands of neural activity. To further investigate these findings, simultaneously recorded invasive electrophysiology and rsfMRI from rats were used to examine two types of electrical activity (directly measured low-frequency/infraslow activity and band-limited power of higher frequencies) and two types of dynamic rsfMRI (quasi-periodic patterns or QPP, and sliding window correlation or SWC). The relationship between neural activity and dynamic rsfMRI was tested under three anesthetic states in rats: dexmedetomidine and high and low doses of isoflurane. Under dexmedetomidine, the lightest anesthetic, infraslow electrophysiology correlated with QPP but not SWC, whereas band-limited power in higher frequencies correlated with SWC but not QPP. Results were similar under isoflurane; however, the QPP was also correlated to band-limited power, possibly due to the burst-suppression state induced by the anesthetic agent. The results provide additional support for the hypothesis that the two types of dynamic rsfMRI are linked to different frequencies of neural activity, but isoflurane anesthesia may make this relationship more complicated. Understanding which neural frequency bands appear as particular dynamic patterns in rsfMRI may ultimately help isolate components of the rsfMRI signal that are of interest to disorders such as schizophrenia and attention deficit disorder. PMID:26041826

  2. Localized states in an unbounded neural field equation with smooth firing rate function: a multi-parameter analysis.

    PubMed

    Faye, Grégory; Rankin, James; Chossat, Pascal

    2013-05-01

    The existence of spatially localized solutions in neural networks is an important topic in neuroscience, as these solutions are considered to characterize working (short-term) memory. We work with an unbounded neural network represented by the neural field equation with a smooth firing rate function and a wizard-hat spatial connectivity. Noting that stationary solutions of our neural field equation are equivalent to homoclinic orbits in a related fourth-order ordinary differential equation, we apply normal form theory for a reversible Hopf bifurcation to prove the existence of localized solutions; further, we present results concerning their stability. Numerical continuation is used to compute branches of localized solutions that exhibit snaking-type behaviour. We describe in terms of three parameters the exact regions for which localized solutions persist.
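    For concreteness, the class of equations in question can be written out; the kernel below is one standard wizard-hat choice (the paper's exact kernel and notation may differ):

```latex
\[
  \partial_t u(x,t) = -u(x,t) + \int_{-\infty}^{\infty} w(x-y)\, S\bigl(u(y,t)\bigr)\, dy
\]
For a wizard-hat kernel with rational Fourier transform, for instance
\[
  w(x) = (1-\lvert x\rvert)\, e^{-\lvert x\rvert}, \qquad
  \hat w(k) = \frac{4k^{2}}{(1+k^{2})^{2}},
\]
stationary solutions $u(x)$ satisfy the reversible fourth-order ODE
\[
  \bigl(1-\partial_x^{2}\bigr)^{2} u = -4\, \partial_x^{2}\, S(u),
\]
and spatially localized states correspond to orbits of this ODE homoclinic to $u \equiv 0$.
```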

  3. Neural network-based sliding mode control for atmospheric-actuated spacecraft formation using switching strategy

    NASA Astrophysics Data System (ADS)

    Sun, Ran; Wang, Jihe; Zhang, Dexin; Shao, Xiaowei

    2018-02-01

    This paper presents an adaptive neural-network-based control method for spacecraft formation with coupled translational and rotational dynamics using only aerodynamic forces. It is assumed that each spacecraft is equipped with several large flat plates. A coupled orbit-attitude dynamic model is considered, based on the specific configuration of the atmospheric actuators. For this model, a neural-network-based adaptive sliding mode controller is implemented, accounting for system uncertainties and external perturbations. To prevent invalidation of the neural networks from destroying the stability of the system, a switching control strategy is proposed that combines an adaptive neural network controller, dominant in its active region, with an adaptive sliding mode controller outside the neural active region. An optimization process is developed to determine the control commands for the plate system. The stability of the closed-loop system is proved by a Lyapunov-based method. Comparative results from numerical simulations illustrate the effectiveness of executing attitude control while maintaining the relative motion, and show that higher control accuracy can be achieved with the proposed neural-based switching control scheme than with the adaptive sliding mode controller alone.

  4. Adult subependymal neural precursors, but not differentiated cells, undergo rapid cathodal migration in the presence of direct current electric fields.

    PubMed

    Babona-Pilipos, Robart; Droujinine, Ilia A; Popovic, Milos R; Morshead, Cindi M

    2011-01-01

    The existence of neural stem and progenitor cells (together termed neural precursor cells) in the adult mammalian brain has sparked great interest in utilizing these cells for regenerative medicine strategies. Endogenous neural precursors within the adult forebrain subependyma can be activated following injury, resulting in their proliferation and migration toward lesion sites where they differentiate into neural cells. The administration of growth factors and immunomodulatory agents following injury augments this activation and has been shown to result in behavioural functional recovery following stroke. With the goal of enhancing neural precursor migration to facilitate the repair process we report that externally applied direct current electric fields induce rapid and directed cathodal migration of pure populations of undifferentiated adult subependyma-derived neural precursors. Using time-lapse imaging microscopy in vitro we performed an extensive single-cell kinematic analysis demonstrating that this galvanotactic phenomenon is a feature of undifferentiated precursors, and not differentiated phenotypes. Moreover, we have shown that the migratory response of the neural precursors is a direct effect of the electric field and not due to chemotactic gradients. We also identified that epidermal growth factor receptor (EGFR) signaling plays a role in the galvanotactic response as blocking EGFR significantly attenuates the migratory behaviour. These findings suggest direct current electric fields may be implemented in endogenous repair paradigms to promote migration and tissue repair following neurotrauma.

  5. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks

    PubMed Central

    2018-01-01

    Much of the information the brain processes and stores is temporal in nature—a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds—we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. PMID:29537963
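    As a self-contained stand-in for the trained RNNs in this work (an echo-state-style reservoir with a ridge-regularized linear readout, not the authors' training procedure; all sizes and patterns are invented), two time-varying inputs can be separated by the neural trajectories they evoke:

```python
import numpy as np

rng = np.random.default_rng(4)

n, T = 200, 50
W_rec = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n)) * 0.9  # contractive recurrence
W_in = rng.normal(0.0, 1.0, n)

def trajectory(u):
    """Neural trajectory evoked by the scalar input sequence u."""
    x = np.zeros(n)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_rec @ x + W_in * u[t])
        states.append(x.copy())
    return np.array(states)

time = np.linspace(0.0, 1.0, T)
pat_a = np.sin(2 * np.pi * 3 * time)            # smooth "sensory" pattern
pat_b = np.sign(np.sin(2 * np.pi * 3 * time))   # same period, different shape

# Ridge-regularized readout: +1 along A's trajectory, -1 along B's.
states = np.vstack([trajectory(pat_a), trajectory(pat_b)])
targets = np.concatenate([np.ones(T), -np.ones(T)])
w_out = np.linalg.solve(states.T @ states + 0.1 * np.eye(n),
                        states.T @ targets)

# Noisy presentations are still classified by their evoked trajectories.
score_a = trajectory(pat_a + 0.05 * rng.normal(size=T)) @ w_out
score_b = trajectory(pat_b + 0.05 * rng.normal(size=T)) @ w_out
print(score_a.mean() > 0, score_b.mean() < 0)
```

    The paper's point goes further: tuned recurrent weights yield trajectories whose angular velocity scales with stimulus speed, which this fixed reservoir does not capture.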

  6. Application of neural models as controllers in mobile robot velocity control loop

    NASA Astrophysics Data System (ADS)

    Cerkala, Jakub; Jadlovska, Anna

    2017-01-01

    This paper presents the application of inverse neural models used as controllers, in comparison to classical PI controllers, for the velocity-tracking control task in a two-wheel, differentially driven mobile robot. The PI controller synthesis is based on a linear approximation of the actuators with equivalent load. In order to obtain relevant datasets for training the feed-forward multi-layer perceptron used as the neural model, a mathematical model of the mobile robot that combines its kinematic and dynamic properties, such as chassis dimensions, center-of-gravity offset, friction and actuator parameters, is used. Neural models are trained off-line to act as the inverse dynamics of the DC motors with their particular load, using data collected in a simulation experiment of motor input voltage step changes within a bounded operating area. The performances of the PI controllers versus the inverse neural models in the mobile robot's internal velocity control loops are demonstrated and compared in a simulation experiment of a navigation control task for line-segment motion in the plane.

  7. Neural networks: Alternatives to conventional techniques for automatic docking

    NASA Technical Reports Server (NTRS)

    Vinz, Bradley L.

    1994-01-01

    Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.

  8. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    PubMed

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature-a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds-we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  9. Path optimisation of a mobile robot using an artificial neural network controller

    NASA Astrophysics Data System (ADS)

    Singh, M. K.; Parhi, D. R.

    2011-01-01

    This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feed-forward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller are the left, right and front obstacle distances with respect to the robot's position, and the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which deals with cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results and show very good agreement. The training of the neural nets and the control performance analysis have been done in a real experimental setup.
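
    As an illustration of the kind of controller described above, the sketch below trains a small multilayer feed-forward network by backpropagation. It is a toy stand-in, not the authors' implementation: the four inputs (left/right/front obstacle distances, target angle) and the synthetic steering rule used as the training target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def init(n_in, n_h1, n_h2, n_out):
    # small random weights for a 4-layer (2 hidden layer) network
    s = lambda a, b: rng.normal(0.0, np.sqrt(1.0 / a), size=(a, b))
    return [s(n_in, n_h1), s(n_h1, n_h2), s(n_h2, n_out)]

def forward(Ws, X):
    a1 = np.tanh(X @ Ws[0])
    a2 = np.tanh(a1 @ Ws[1])
    return a1, a2, a2 @ Ws[2]          # linear output: steering angle

# inputs: left, right, front obstacle distance and target angle (all in [0,1])
X = rng.uniform(0.0, 1.0, size=(500, 4))
# hypothetical target rule: steer toward the target, away from near obstacles
y = (0.8 * X[:, 3] + 0.3 * (X[:, 0] - X[:, 1]) - 0.2 * X[:, 2])[:, None]

Ws, lr, losses = init(4, 16, 16, 1), 0.05, []
for epoch in range(300):
    a1, a2, out = forward(Ws, X)
    err = out - y
    losses.append(float((err ** 2).mean()))
    g2 = err / len(X)                   # gradient of MSE w.r.t. output
    gW2 = a2.T @ g2
    d2 = (g2 @ Ws[2].T) * (1 - a2 ** 2)   # backprop through tanh
    gW1 = a1.T @ d2
    d1 = (d2 @ Ws[1].T) * (1 - a1 ** 2)
    gW0 = X.T @ d1
    for W, g in zip(Ws, (gW0, gW1, gW2)):
        W -= lr * g
```

    In the article the network is trained on sensor data from a real robot; here the decreasing loss merely shows the backpropagation machinery at work.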

  10. Dynamics of neural cryptography

    NASA Astrophysics Data System (ADS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
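
    The bidirectional synchronization described above is easy to reproduce. Below is a minimal sketch of two tree parity machines synchronizing over a public channel; the toy parameters (K=3 hidden units, N=20 inputs each, weight bound L=2) are chosen for speed, and the Hebbian update shown is one of several variants in the literature.

```python
import numpy as np

rng = np.random.default_rng(42)
K, N, L = 3, 20, 2   # hidden units, inputs per hidden unit, weight bound

def forward(w, x):
    h = (w * x).sum(axis=1)
    sigma = np.where(h > 0, 1, -1)     # hidden-unit outputs
    return sigma, sigma.prod()         # tau = product of hidden outputs

def hebbian_update(w, x, sigma, tau):
    # only hidden units that agree with the network output move
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))
steps = 0
while not np.array_equal(wA, wB) and steps < 20000:
    x = rng.choice([-1, 1], size=(K, N))   # public random input
    sA, tA = forward(wA, x)
    sB, tB = forward(wB, x)
    if tA == tB:                       # attractive step: both parties update
        hebbian_update(wA, x, sA, tA)
        hebbian_update(wB, x, sB, tB)
    steps += 1

print("synchronized after", steps, "exchanged inputs")
```

    Both parties apply the same rule whenever their public outputs agree, so attractive steps dominate the random walk of the overlap and the weights become identical; a unidirectional attacker can only imitate, which is why learning is slower than synchronization.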

  11. Dynamics of neural cryptography.

    PubMed

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.

  12. Dynamics of neural cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-15

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.

  13. Invariant measures in brain dynamics

    NASA Astrophysics Data System (ADS)

    Boyarsky, Abraham; Góra, Paweł

    2006-10-01

    This note concerns brain activity at the level of neural ensembles and uses ideas from ergodic dynamical systems to model and characterize chaotic patterns among these ensembles during conscious mental activity. Central to our model is the definition of a space of neural ensembles and the assumption of discrete time ensemble dynamics. We argue that continuous invariant measures draw the attention of deeper brain processes, engendering emergent properties such as consciousness. Invariant measures supported on a finite set of ensembles reflect periodic behavior, whereas the existence of continuous invariant measures reflects the dynamics of nonrepeating ensemble patterns that elicit the interest of deeper mental processes. We shall consider two different ways to achieve continuous invariant measures on the space of neural ensembles: (1) via quantum jitters, and (2) via sensory input accompanied by inner thought processes which engender a “folding” property on the space of ensembles.

  14. Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction

    NASA Astrophysics Data System (ADS)

    Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.

    1994-04-01

    Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic learning, Kohonen Feature Maps are particularly suitable for the reduction of the high-dimensional data space that results from a dynamic gesture, and are thus implemented for this task.
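
    The dimensionality-reduction role of a Kohonen Feature Map can be sketched with a toy example. This is not the authors' implementation: it maps 2-D points on a ring onto a 1-D chain of prototype nodes, with illustrative learning-rate and neighbourhood schedules.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes, n_iter = 20, 3000
theta = rng.uniform(0.0, 2.0 * np.pi, n_iter)
data = np.c_[np.cos(theta), np.sin(theta)]            # 2-D points on a ring

w = rng.uniform(-0.5, 0.5, size=(n_nodes, 2))         # 1-D chain of prototypes
for t, v in enumerate(data):
    lr = 0.5 * (1.0 - t / n_iter)                     # decaying learning rate
    rad = max(1.0, n_nodes / 2 * (1.0 - t / n_iter))  # shrinking neighbourhood
    bmu = int(np.argmin(((w - v) ** 2).sum(axis=1)))  # best-matching unit
    h = np.exp(-0.5 * ((np.arange(n_nodes) - bmu) / rad) ** 2)
    w += lr * h[:, None] * (v - w)                    # pull BMU and neighbours

# quantization error: mean distance from each data point to its nearest node
qe = np.mean([np.min(np.linalg.norm(w - v, axis=1)) for v in data])
```

    The topology preservation noted in the abstract shows up here as neighbouring chain nodes ending near neighbouring positions on the ring, so a high-dimensional gesture trajectory can be summarized by the 1-D sequence of winning nodes.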

  15. Genetic Effects on Sensorineural Hearing Loss and Evidence-based Treatment for Sensorineural Hearing Loss.

    PubMed

    Yu, Yong-qiang; Yang, Huai-an; Xiao, Ming; Wang, Jing-wei; Huang, Dong-yan; Bhambhani, Yagesh; Sonnenberg, Lyn; Clark, Brenda; Jin, Yuan-zhe; Fu, Wei-neng; Zhang, Jie; Yu, Qian; Liang, Xue-ting; Zhang, Ming

    2015-09-01

    In this article, the mechanisms of inheritance behind inherited hearing loss and genetic susceptibility to noise-induced hearing loss are reviewed. Conventional treatments for sensorineural hearing loss (SNHL), i.e. hearing aids and cochlear implants, are effective in some cases, but not without limitations. For example, they provide little benefit for patients with profound SNHL or neural hearing loss, especially when the hearing loss involves a poor dynamic range and low frequency resolution. We emphasize the most recent evidence-based treatments in this field, which include gene therapy and allotransplantation of stem cells. Their promising results suggest that they might become treatment options for profound SNHL and neural hearing loss. Although some treatments are still at the experimental stage, it is helpful to be aware of these novel therapies and endeavour to explore the feasibility of their clinical application.

  16. Towards biological plausibility of electronic noses: A spiking neural network based approach for tea odour classification.

    PubMed

    Sarkar, Sankho Turjo; Bhondekar, Amol P; Macaš, Martin; Kumar, Ritesh; Kaur, Rishemjit; Sharma, Anupma; Gulati, Ashu; Kumar, Amod

    2015-11-01

    The paper presents a novel encoding scheme for neuronal code generation for odour recognition using an electronic nose (EN). This scheme is based on channel encoding using multiple Gaussian receptive fields superimposed over the temporal EN responses. The encoded data is further applied to a spiking neural network (SNN) for pattern classification. Two forms of SNN, a back-propagation based SpikeProp and a dynamic evolving SNN are used to learn the encoded responses. The effects of information encoding on the performance of SNNs have been investigated. Statistical tests have been performed to determine the contribution of the SNN and the encoding scheme to overall odour discrimination. The approach has been implemented in odour classification of orthodox black tea (Kangra-Himachal Pradesh Region) thereby demonstrating a biomimetic approach for EN data analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
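
    The channel-encoding step can be sketched as follows. This is a generic Gaussian receptive field scheme in the style commonly used for spiking-network input coding, not the authors' exact code; the centre/width heuristic and the time-to-first-spike conversion are standard but assumed choices.

```python
import numpy as np

def gaussian_receptive_fields(value, v_min, v_max, m=6, gamma=1.5):
    """Encode a scalar into m overlapping Gaussian channel activations.

    Centres and widths follow the common heuristic
    mu_i = v_min + (2i - 3)/2 * (v_max - v_min)/(m - 2),
    sigma = (v_max - v_min) / (gamma * (m - 2)).
    """
    i = np.arange(1, m + 1)
    centers = v_min + (2 * i - 3) / 2 * (v_max - v_min) / (m - 2)
    sigma = (v_max - v_min) / (gamma * (m - 2))
    return np.exp(-0.5 * ((value - centers) / sigma) ** 2)

def to_spike_times(acts, t_max=10.0, threshold=0.1):
    """Time-to-first-spike: stronger activation fires earlier; weakly
    driven channels stay silent (np.inf)."""
    times = t_max * (1.0 - acts)
    return np.where(acts >= threshold, times, np.inf)

acts = gaussian_receptive_fields(0.4, 0.0, 1.0)   # one sensor reading
spikes = to_spike_times(acts)
```

    Applied to each time point of the temporal EN response, this turns a single sensor value into a small population of spike times that a SpikeProp-style network can learn from.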

  17. Mirroring and beyond: coupled dynamics as a generalized framework for modelling social interactions

    PubMed Central

    Hasson, Uri; Frith, Chris D.

    2016-01-01

    When people observe one another, behavioural alignment can be detected at many levels, from the physical to the mental. Likewise, when people process the same highly complex stimulus sequences, such as films and stories, alignment is detected in the elicited brain activity. In early sensory areas, shared neural patterns are coupled to the low-level properties of the stimulus (shape, motion, volume, etc.), while in high-order brain areas, shared neural patterns are coupled to high-level aspects of the stimulus, such as meaning. Successful social interactions require such alignments (both behavioural and neural), as communication cannot occur without shared understanding. However, we need to go beyond simple, symmetric (mirror) alignment once we start interacting. Interactions are dynamic processes, which involve continuous mutual adaptation, development of complementary behaviour and division of labour such as leader–follower roles. Here, we argue that interacting individuals are dynamically coupled rather than simply aligned. This broader framework for understanding interactions can encompass both processes by which behaviour and brain activity mirror each other (neural alignment), and situations in which behaviour and brain activity in one participant are coupled (but not mirrored) to the dynamics in the other participant. To apply these more sophisticated accounts of social interactions to the study of the underlying neural processes we need to develop new experimental paradigms and novel methods of data analysis. PMID:27069044

  18. State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats.

    PubMed

    De Feo, Vito; Boi, Fabio; Safaai, Houman; Onken, Arno; Panzeri, Stefano; Vato, Alessandro

    2017-01-01

    Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved by using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to a 22% increase in the amount of information extracted from neural activity, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.

  19. Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns

    PubMed Central

    Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario

    2015-01-01

    The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381

  20. Parallel and orthogonal stimulus in ultradiluted neural networks

    NASA Astrophysics Data System (ADS)

    Sobral, G. A., Jr.; Vieira, V. M.; Lyra, M. L.; da Silva, C. R.

    2006-10-01

    Extending a model due to Derrida, Gardner, and Zippelius, we have studied the recognition ability of an extreme and asymmetrically diluted version of the Hopfield model for associative memory by including the effect of a stimulus in the dynamics of the system. We obtain exact results for the dynamic evolution of the average network superposition. The stimulus field was considered as proportional to the overlap of the state of the system with a particular stimulated pattern. Two situations were analyzed, namely, the external stimulus acting on the initialization pattern (parallel stimulus) and the external stimulus acting on a pattern orthogonal to the initialization one (orthogonal stimulus). In both cases, we obtained the complete phase diagram in the parameter space composed of the stimulus field, thermal noise, and network capacity. Our results show that the system improves its recognition ability for parallel stimulus. For orthogonal stimulus two recognition phases emerge, with the system locking onto either the initialization or the stimulated pattern. We confront our analytical results with numerical simulations for the noiseless case T=0.
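
    A direct simulation illustrates the parallel-stimulus case. The sketch below uses illustrative parameters (the paper's analytical results hold exactly only in the extreme-dilution limit) and adds a stimulus field proportional to the overlap with the stimulated pattern, as described above.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, C = 500, 5, 50      # neurons, stored patterns, inputs per neuron

xi = rng.choice([-1, 1], size=(P, N))                 # random binary patterns

# asymmetric extreme dilution: each neuron listens to C random others
neigh = np.array([rng.choice(N, size=C, replace=False) for _ in range(N)])
J = np.array([(xi[:, i, None] * xi[:, neigh[i]]).sum(axis=0) / C
              for i in range(N)])                     # Hebbian couplings

def step(s, h_stim, stim_pattern):
    field = (J * s[neigh]).sum(axis=1)
    m = (s * stim_pattern).mean()                     # overlap with stimulus
    return np.where(field + h_stim * m * stim_pattern > 0, 1, -1)

# start near pattern 0 (20% of spins flipped); parallel stimulus on pattern 0
s = xi[0].copy()
s[rng.choice(N, size=N // 5, replace=False)] *= -1
for _ in range(15):
    s = step(s, h_stim=0.5, stim_pattern=xi[0])
overlap = (s * xi[0]).mean()
```

    With the stimulus aligned to the initialization pattern the overlap is driven toward 1, consistent with the improved recognition ability reported for parallel stimulus.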

  1. A Pilot Study of Individual Muscle Force Prediction during Elbow Flexion and Extension in the Neurorehabilitation Field

    PubMed Central

    Hou, Jiateng; Sun, Yingfei; Sun, Lixin; Pan, Bingyu; Huang, Zhipei; Wu, Jiankang; Zhang, Zhiqiang

    2016-01-01

    This paper proposes a neuromusculoskeletal (NMS) model to predict individual muscle force during elbow flexion and extension. Four male subjects were asked to perform voluntary elbow flexion and extension. An inertial sensor and surface electromyography (sEMG) sensors were attached to the subject's forearm. The joint angle, calculated by fusion of acceleration and angular rate using an extended Kalman filter (EKF), and muscle activations obtained from the sEMG signals were taken as the inputs of the proposed NMS model to determine individual muscle force. The result shows that our NMS model can predict individual muscle force accurately, with the ability to reflect subject-specific joint dynamics and neural control solutions. Our method incorporates sEMG and motion data, making it possible to get a deeper understanding of neurological, physiological, and anatomical characteristics of human dynamic movement. We demonstrate the potential of the proposed NMS model for evaluating the function of upper limb movements in the field of neurorehabilitation. PMID:27916853

  2. Local classifiers for evoked potentials recorded from behaving rats.

    PubMed

    Jakuczun, Wit; Kublik, Ewa; Wójcik, Daniel K; Wróbel, Andrzej

    2005-01-01

    Dynamic states of the brain determine the way information is processed in local neural networks. We applied a classical conditioning paradigm in order to study whether habituated and aroused states can be differentiated in a single barrel column of the rat somatosensory cortex by means of analysis of field potentials evoked by stimulation of a single vibrissa. A new method using local classifiers is presented which allows for reliable and meaningful classification of single evoked potentials, which might consequently be attributed to different functional states of the cortical column.

  3. EEG-fMRI Bayesian framework for neural activity estimation: a simulation study

    NASA Astrophysics Data System (ADS)

    Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo

    2016-12-01

    Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, joint data analysis can afford a better estimate of the underlying neural activity. In this simulation study we show the benefit of joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity originates in a given brain area and is detected by both measurement techniques. We have chosen a resting state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information from EEG and fMRI concurrently we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvements in neural activity reconstruction with EEG-fMRI simultaneous data. The application of such an approach to real data allows a better comprehension of the neural dynamics.

  4. EEG-fMRI Bayesian framework for neural activity estimation: a simulation study.

    PubMed

    Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Gratta, Cosimo Del

    2016-12-01

    Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, joint data analysis can afford a better estimate of the underlying neural activity. In this simulation study we show the benefit of joint EEG-fMRI neural activity estimation in a Bayesian framework. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity originates in a given brain area and is detected by both measurement techniques. We have chosen a resting state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information from EEG and fMRI concurrently we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. The proposed simulation shows the improvements in neural activity reconstruction with EEG-fMRI simultaneous data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
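
    A minimal bootstrap particle filter conveys the idea of fusing two modalities with different noise levels. The linear-Gaussian toy model below is only a stand-in for the EEG and fMRI forward models, which are of course far more complex; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_part = 100, 2000
a, q = 0.95, 0.1        # AR(1) latent "neural activity": transition, noise
r1, r2 = 0.5, 0.8       # observation noise of the two modalities

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + q * rng.standard_normal()
y1 = x + r1 * rng.standard_normal(T)   # EEG-like channel
y2 = x + r2 * rng.standard_normal(T)   # fMRI-like channel

particles = rng.standard_normal(n_part)
est = np.zeros(T)
for t in range(T):
    particles = a * particles + q * rng.standard_normal(n_part)  # propagate
    logw = (-0.5 * ((y1[t] - particles) / r1) ** 2
            - 0.5 * ((y2[t] - particles) / r2) ** 2)  # joint likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = (w * particles).sum()                    # posterior mean
    particles = particles[rng.choice(n_part, size=n_part, p=w)]  # resample

rmse = np.sqrt(np.mean((est - x) ** 2))
```

    Weighting particles by the product of both likelihoods is what "inferring information from EEG and fMRI concurrently" amounts to; the fused estimate is more accurate than either noisy channel alone.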

  5. Functional neural networks underlying response inhibition in adolescents and adults.

    PubMed

    Stevens, Michael C; Kiehl, Kent A; Pearlson, Godfrey D; Calhoun, Vince D

    2007-07-19

    This study provides the first description of neural network dynamics associated with response inhibition in healthy adolescents and adults. Functional and effective connectivity analyses of whole brain hemodynamic activity elicited during performance of a Go/No-Go task were used to identify functionally integrated neural networks and characterize their causal interactions. Three response inhibition circuits formed a hierarchical, inter-dependent system wherein thalamic modulation of input to premotor cortex by fronto-striatal regions led to response suppression. Adolescents differed from adults in the degree of network engagement, regional fronto-striatal-thalamic connectivity, and network dynamics. We identify and characterize several age-related differences in the function of neural circuits that are associated with behavioral performance changes across adolescent development.

  6. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.

    PubMed

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong

    2015-03-01

    This brief considers the problem of neural network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown nonlinear functions of the PMSM drive system, and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the designed neural controller structure is much simpler than in some existing results in the literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.

  7. On neural networks in identification and control of dynamic systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Hyland, David C.

    1993-01-01

    This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on the understanding of how the neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.

  8. Functional neural networks underlying response inhibition in adolescents and adults

    PubMed Central

    Stevens, Michael C.; Kiehl, Kent A.; Pearlson, Godfrey D.; Calhoun, Vince D.

    2008-01-01

    This study provides the first description of neural network dynamics associated with response inhibition in healthy adolescents and adults. Functional and effective connectivity analyses of whole brain hemodynamic activity elicited during performance of a Go/No-Go task were used to identify functionally-integrated neural networks and characterize their causal interactions. Three response inhibition circuits formed a hierarchical, inter-dependent system wherein thalamic modulation of input to premotor cortex by frontostriatal regions led to response suppression. Adolescents differed from adults in the degree of network engagement, regional fronto-striatal-thalamic connectivity, and network dynamics. We identify and characterize several age-related differences in the function of neural circuits that are associated with behavioral performance changes across adolescent development. PMID:17467816

  9. Dynamical synapses enhance neural information processing: gracefulness, accuracy, and mobility.

    PubMed

    Fung, C C Alan; Wong, K Y Michael; Wang, He; Wu, Si

    2012-05-01

    Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). They have time constants residing between fast neural signaling and rapid learning and may serve as substrates for neural systems manipulating temporal information on relevant timescales. This study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network that is initially stimulated to an active state decays to a silent state very slowly, on the timescale of STD rather than on that of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activities gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which provides a way to stabilize the network response to noisy inputs, leading to improved accuracy in population decoding. Furthermore, we find that STD increases the mobility of the network states. The increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may employ a strategy of weighting them differentially depending on the computational purpose.
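
    The slow-decaying plateau induced by STD can be reproduced with a single-population rate model. The sketch below is a toy version with illustrative parameters, not the paper's continuous attractor network: activity switched on by a brief stimulus persists on the timescale of the depression variable rather than of neural signaling, and then shuts off.

```python
import numpy as np

dt, tau_r, tau_d, U = 0.5, 10.0, 500.0, 0.005   # ms; depression is slow
J, theta = 2.5, 0.2                              # recurrent coupling, threshold

def f(h):                                        # piecewise-linear rate fn
    return np.clip(h - theta, 0.0, 1.0)

r, xres = 0.0, 1.0                               # firing rate, synaptic resource
trace = []
for step in range(4000):                         # 2000 ms total
    I = 1.0 if step < 100 else 0.0               # 50 ms external stimulus
    r += dt * (-r + f(J * xres * r + I)) / tau_r
    xres += dt * ((1.0 - xres) / tau_d - U * xres * r)   # STD dynamics
    trace.append(r)
trace = np.array(trace)
```

    After stimulus offset the recurrent excitation J*xres keeps the rate high until the resource xres is depleted below the self-sustaining level, so the active state outlasts the stimulus by hundreds of milliseconds and then decays gracefully to silence.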

  10. An annealed chaotic maximum neural network for bipartite subgraph problem.

    PubMed

    Wang, Jiahai; Tang, Zheng; Wang, Ronglong

    2004-04-01

    In this paper, based on the maximum neural network, we propose a new parallel algorithm that can help the maximum neural network escape from local minima by including transient chaotic neurodynamics for the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is NP-complete, is to remove the minimum number of edges in a given graph such that the remaining graph is bipartite. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without a parameter-tuning burden. However, the model tends to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is governed by gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem, superior to those of the best existing parallel algorithms.

  11. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    PubMed

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained much attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks, so much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, from which simpler analytic descriptions are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  12. Topological determinants of self-sustained activity in a simple model of excitable dynamics on graphs

    PubMed Central

    Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C.; Hütt, Marc-Thorsten

    2017-01-01

    Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience. PMID:28186182
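    The threshold rule analyzed above has a compact sketch. The following minimal three-state (susceptible/excited/refractory) model on an arbitrary neighbor dictionary is a generic stand-in, not the authors' exact implementation; the relative threshold `kappa` plays the role of the excitation threshold.

```python
def excitable_step(neighbors, state, kappa):
    """One synchronous update of a three-state excitable model.
    States: 'S' susceptible, 'E' excited, 'R' refractory. A susceptible
    node fires when the fraction of excited neighbours reaches the
    relative excitation threshold kappa."""
    new = {}
    for i, nbrs in neighbors.items():
        if state[i] == 'E':
            new[i] = 'R'                      # excited -> refractory
        elif state[i] == 'R':
            new[i] = 'S'                      # refractory -> susceptible
        else:
            excited = sum(state[j] == 'E' for j in nbrs)
            new[i] = 'E' if excited / max(len(nbrs), 1) >= kappa else 'S'
    return new

def propagate(neighbors, seed_node, kappa, steps):
    """Seed a single excitation and record the number of excited nodes."""
    state = {i: 'S' for i in neighbors}
    state[seed_node] = 'E'
    history = [1]
    for _ in range(steps):
        state = excitable_step(neighbors, state, kappa)
        history.append(sum(s == 'E' for s in state.values()))
    return history
```

    On a 6-node ring with `kappa = 0.4`, for instance, a single seed sends two excitation fronts around the cycle until they meet; raising `kappa` above 0.5 makes the seed die out immediately, illustrating the threshold dependence of sustained activity.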

  13. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    PubMed

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.
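    The two models themselves are not reproduced in this record, but their common substrate is easy to sketch: a one-dimensional Amari-type dynamic neural field in which local excitation and broad inhibition amplify the stronger of two inputs. The parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_field(stimulus, steps=400, dt=0.05, tau=1.0, h=-2.0,
                   c_exc=3.0, sigma=3.0, c_inh=0.5):
    """Minimal 1-D Amari-type dynamic neural field:
    tau du/dt = -u + h + stimulus + w * f(u),
    with a kernel of local Gaussian excitation minus global inhibition,
    so the stronger of two inputs comes to dominate through amplification."""
    n = stimulus.size
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    w = c_exc * np.exp(-d**2 / (2.0 * sigma**2)) - c_inh  # interaction kernel
    u = np.full(n, h, dtype=float)                        # field activation
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                      # sigmoid firing rate
        u += dt / tau * (-u + h + stimulus + w @ f)
    return u
```

    Driving the field with a strong "target" bump and a weaker "distracter" bump leaves the highest activation at the target site, the amplification-style selection the winning model relies on.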

  14. Slow synaptic dynamics in a network: From exponential to power-law forgetting

    NASA Astrophysics Data System (ADS)

    Luck, J. M.; Mehta, A.

    2014-09-01

    We investigate a mean-field model of interacting synapses on a directed neural network. Our interest lies in the slow adaptive dynamics of synapses, which are driven by the fast dynamics of the neurons they connect. Cooperation is modeled from the usual Hebbian perspective, while competition is modeled by an original polarity-driven rule. The emergence of a critical manifold culminating in a tricritical point is crucially dependent on the presence of synaptic competition. This leads to a universal 1/t power-law relaxation of the mean synaptic strength along the critical manifold and an equally universal 1/√t relaxation at the tricritical point, to be contrasted with the exponential relaxation that is otherwise generic. In turn, this leads to the natural emergence of long- and short-term memory from different parts of parameter space in a synaptic network, which is the most original and important result of our present investigations.
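    The qualitative distinction between exponential and power-law forgetting has a textbook caricature: when the linear term of a relaxation equation vanishes, as it does on a critical manifold, the leading nonlinearity sets the power law. The toy integration below illustrates only this generic mechanism and is not the authors' mean-field model.

```python
import numpy as np

def relax(exponent, j0=1.0, dt=1e-3, steps=200_000):
    """Euler-integrate dJ/dt = -J**exponent from J(0) = j0.
    exponent 1: generic exponential decay;
    exponent 2: J(t) ~ 1/t       (the critical-manifold analogue);
    exponent 3: J(t) ~ 1/sqrt(t) (the tricritical analogue)."""
    j = j0
    traj = np.empty(steps)
    for k in range(steps):
        j -= dt * j**exponent
        traj[k] = j
    return traj
```

    With exponent 2 the trajectory halves whenever the elapsed time doubles (the 1/t law); with exponent 3 it shrinks by about √2 (the 1/√t law); with exponent 1 it is exponentially gone.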

  15. Topological determinants of self-sustained activity in a simple model of excitable dynamics on graphs.

    PubMed

    Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C; Hütt, Marc-Thorsten

    2017-02-10

    Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience.

  16. Topological determinants of self-sustained activity in a simple model of excitable dynamics on graphs

    NASA Astrophysics Data System (ADS)

    Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C.; Hütt, Marc-Thorsten

    2017-02-01

    Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience.

  17. Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis.

    PubMed

    Williams, Alex H; Kim, Tony Hyun; Wang, Forea; Vyas, Saurabh; Ryu, Stephen I; Shenoy, Krishna V; Schnitzer, Mark; Kolda, Tamara G; Ganguli, Surya

    2018-06-27

    Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate that a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain-machine interface learning. Copyright © 2018 Elsevier Inc. All rights reserved.
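    The decomposition at the heart of TCA is plain CP/PARAFAC. The sketch below is a minimal alternating-least-squares implementation in NumPy for a three-way (neuron × time × trial) array; it omits the initialization strategies, nonnegativity options, and rank-selection diagnostics a full analysis would use.

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product of two factor matrices."""
    r = a.shape[1]
    return np.einsum('ir,jr->ijr', a, b).reshape(-1, r)

def cp_als(x, rank, iters=300, seed=0):
    """Rank-`rank` CP (tensor component) decomposition of a 3-way array
    by alternating least squares. Returns factors (A, B, C) with
    x[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    dims = x.shape
    a, b, c = (rng.standard_normal((d, rank)) for d in dims)
    x0 = x.reshape(dims[0], -1)                       # mode-0 unfolding
    x1 = np.moveaxis(x, 1, 0).reshape(dims[1], -1)    # mode-1 unfolding
    x2 = np.moveaxis(x, 2, 0).reshape(dims[2], -1)    # mode-2 unfolding
    for _ in range(iters):
        a = x0 @ np.linalg.pinv(khatri_rao(b, c).T)
        b = x1 @ np.linalg.pinv(khatri_rao(a, c).T)
        c = x2 @ np.linalg.pinv(khatri_rao(a, b).T)
    return a, b, c
```

    Each column r of the returned factors gives one component: a neuron loading, a within-trial time course, and an across-trial amplitude.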

  18. Dynamic reconfiguration of frontal brain networks during executive cognition in humans

    PubMed Central

    Braun, Urs; Schäfer, Axel; Walter, Henrik; Erk, Susanne; Romanczuk-Seiferth, Nina; Haddad, Leila; Schweiger, Janina I.; Grimm, Oliver; Heinz, Andreas; Tost, Heike; Meyer-Lindenberg, Andreas; Bassett, Danielle S.

    2015-01-01

    The brain is an inherently dynamic system, and executive cognition requires dynamically reconfiguring, highly evolving networks of brain regions that interact in complex and transient communication patterns. However, a precise characterization of these reconfiguration processes during cognitive function in humans remains elusive. Here, we use a series of techniques developed in the field of “dynamic network neuroscience” to investigate the dynamics of functional brain networks in 344 healthy subjects during a working-memory challenge (the “n-back” task). In contrast to a control condition, in which dynamic changes in cortical networks were spread evenly across systems, the effortful working-memory condition was characterized by a reconfiguration of frontoparietal and frontotemporal networks. This reconfiguration, which characterizes “network flexibility,” employs transient and heterogeneous connectivity between frontal systems, which we refer to as “integration.” Frontal integration predicted neuropsychological measures requiring working memory and executive cognition, suggesting that dynamic network reconfiguration between frontal systems supports those functions. Our results characterize dynamic reconfiguration of large-scale distributed neural circuits during executive cognition in humans and have implications for understanding impaired cognitive function in disorders affecting connectivity, such as schizophrenia or dementia. PMID:26324898

  19. Set selection dynamical system neural networks with partial memories, with applications to Sudoku and KenKen puzzles.

    PubMed

    Boreland, B; Clement, G; Kunze, H

    2015-08-01

    After reviewing set selection and memory model dynamical system neural networks, we introduce a neural network model that combines set selection with partial memories (stored memories on subsets of states in the network). We establish that feasible equilibria with all states equal to ± 1 correspond to answers to a particular set theoretic problem. We show that KenKen puzzles can be formulated as a particular case of this set theoretic problem and use the neural network model to solve them; in addition, we use a similar approach to solve Sudoku. We illustrate the approach in examples. As a heuristic experiment, we use online or print resources to identify the difficulty of the puzzles and compare these difficulties to the number of iterations used by the appropriate neural network solver, finding a strong relationship. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Hidden Markov Models and Neural Networks for Fault Detection in Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic

    1994-01-01

    No abstract given. From the conclusion: neural networks combined with hidden Markov models (HMMs) can provide excellent detection and false-alarm-rate performance in fault detection applications, and modified models allow for novelty detection. The paper also covers key contributions of the neural network model and the status of applications.
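    The division of labor suggested above, a neural network supplying instantaneous symptom probabilities and an HMM supplying temporal continuity, can be sketched as a forward filter. The transition matrix and probabilities below are illustrative, not taken from the paper.

```python
import numpy as np

def hmm_filter(obs_probs, trans, prior):
    """Forward filtering of instantaneous state probabilities
    (e.g., produced by a neural network classifier) through an HMM,
    yielding temporally smoothed state beliefs.
    trans[i, j] = P(state j at t | state i at t-1)."""
    belief = np.asarray(prior, dtype=float)
    out = []
    for p in obs_probs:                  # p[s] proportional to P(obs | s)
        belief = p * (trans.T @ belief)  # predict, then update
        belief /= belief.sum()
        out.append(belief.copy())
    return np.array(out)
```

    Because the transition matrix makes state changes rare, a single outlying observation barely moves the fault belief, while sustained evidence drives it past threshold; this is the false-alarm suppression alluded to in the conclusion.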

  1. Dynamic neural activity during stress signals resilient coping

    PubMed Central

    Sinha, Rajita; Lacadie, Cheryl M.; Constable, R. Todd; Seo, Dongju

    2016-01-01

    Active coping underlies a healthy stress response, but neural processes supporting such resilient coping are not well-known. Using a brief, sustained exposure paradigm contrasting highly stressful, threatening, and violent stimuli versus nonaversive neutral visual stimuli in a functional magnetic resonance imaging (fMRI) study, we show significant subjective, physiologic, and endocrine increases and temporally related dynamically distinct patterns of neural activation in brain circuits underlying the stress response. First, stress-specific sustained increases in the amygdala, striatum, hypothalamus, midbrain, right insula, and right dorsolateral prefrontal cortex (DLPFC) regions supported the stress processing and reactivity circuit. Second, dynamic neural activation during stress versus neutral runs, showing early increases followed by later reduced activation in the ventrolateral prefrontal cortex (VLPFC), dorsal anterior cingulate cortex (dACC), left DLPFC, hippocampus, and left insula, suggested a stress adaptation response network. Finally, dynamic stress-specific mobilization of the ventromedial prefrontal cortex (VmPFC), marked by initial hypoactivity followed by increased VmPFC activation, pointed to the VmPFC as a key locus of the emotional and behavioral control network. Consistent with this finding, greater neural flexibility signals in the VmPFC during stress correlated with active coping ratings whereas lower dynamic activity in the VmPFC also predicted a higher level of maladaptive coping behaviors in real life, including binge alcohol intake, emotional eating, and frequency of arguments and fights. These findings demonstrate acute functional neuroplasticity during stress, with distinct and separable brain networks that underlie critical components of the stress response, and a specific role for VmPFC neuroflexibility in stress-resilient coping. PMID:27432990

  2. Decoding information about dynamically occluded objects in visual cortex

    PubMed Central

    Erlikhman, Gennady; Caplovitz, Gideon P.

    2016-01-01

    During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder, even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. Questions remain as to whether information is maintained about the object itself (i.e., its shape or identity) or non-object-specific information such as its position or velocity as it is tracked behind an occluder, as well as which areas of visual cortex represent such information. Recent studies have found that early visual cortex is activated by “invisible” objects during visual imagery and by unstimulated regions along the path of apparent motion, suggesting that some properties of dynamically occluded objects may also be neurally represented in early visual cortex. We applied functional magnetic resonance imaging in human subjects to examine the representation of information within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing objects, there was an increase in activity in early visual cortex (V1, V2, and V3). This activity was spatially specific, corresponding to the occluded location in the visual field. However, the activity did not encode enough information about object identity to discriminate between different kinds of occluded objects (circles vs. stars) using MVPA. In contrast, object identity could be decoded in spatially specific subregions of higher-order, topographically organized areas such as ventral, lateral, and temporal occipital areas (VO, LO, and TO) as well as the functionally defined LOC and hMT+. These results suggest that early visual cortex may represent the dynamically occluded object’s position or motion path, while later visual areas represent object-specific information. PMID:27663987

  3. Development and application of deep convolutional neural network in target detection

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression abilities than traditional machine learning methods, making artificial intelligence surpass the human level in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some existing problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.

  4. Developmental metaplasticity in neural circuit codes of firing and structure.

    PubMed

    Baram, Yoram

    2017-01-01

    Firing-rate dynamics have been hypothesized to mediate inter-neural information transfer in the brain. While the Hebbian paradigm, relating learning and memory to firing activity, has put synaptic efficacy variation at the center of cortical plasticity, we suggest that the external expression of plasticity by changes in the firing-rate dynamics represents a more general notion of plasticity. Hypothesizing that time constants of plasticity and firing dynamics increase with age, and employing the filtering property of the neuron, we obtain the elementary code of global attractors associated with the firing-rate dynamics in each developmental stage. We define a neural circuit connectivity code as an indivisible set of circuit structures generated by membrane and synapse activation and silencing. Synchronous firing patterns under parameter uniformity, and asynchronous circuit firing are shown to be driven, respectively, by membrane and synapse silencing and reactivation, and maintained by the neuronal filtering property. Analytic, graphical and simulation representation of the discrete iteration maps and of the global attractor codes of neural firing rate are found to be consistent with previous empirical neurobiological findings, which have lacked, however, a specific correspondence between firing modes, time constants, circuit connectivity and cortical developmental stages. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. The physics of functional magnetic resonance imaging (fMRI)

    NASA Astrophysics Data System (ADS)

    Buxton, Richard B.

    2013-09-01

    Functional magnetic resonance imaging (fMRI) is a methodology for detecting dynamic patterns of activity in the working human brain. Although the initial discoveries that led to fMRI are only about 20 years old, this new field has revolutionized the study of brain function. The ability to detect changes in brain activity has a biophysical basis in the magnetic properties of deoxyhemoglobin, and a physiological basis in the way blood flow increases more than oxygen metabolism when local neural activity increases. These effects translate to a subtle increase in the local magnetic resonance signal, the blood oxygenation level dependent (BOLD) effect, when neural activity increases. With current techniques, this pattern of activation can be measured with resolution approaching 1 mm3 spatially and 1 s temporally. This review focuses on the physical basis of the BOLD effect, the imaging methods used to measure it, the possible origins of the physiological effects that produce a mismatch of blood flow and oxygen metabolism during neural activation, and the mathematical models that have been developed to understand the measured signals. An overarching theme is the growing field of quantitative fMRI, in which other MRI methods are combined with BOLD methods and analyzed within a theoretical modeling framework to derive quantitative estimates of oxygen metabolism and other physiological variables. That goal is the current challenge for fMRI: to move fMRI from a mapping tool to a quantitative probe of brain physiology.

  6. The physics of functional magnetic resonance imaging (fMRI)

    PubMed Central

    Buxton, Richard B

    2015-01-01

    Functional magnetic resonance imaging (fMRI) is a methodology for detecting dynamic patterns of activity in the working human brain. Although the initial discoveries that led to fMRI are only about 20 years old, this new field has revolutionized the study of brain function. The ability to detect changes in brain activity has a biophysical basis in the magnetic properties of deoxyhemoglobin, and a physiological basis in the way blood flow increases more than oxygen metabolism when local neural activity increases. These effects translate to a subtle increase in the local magnetic resonance signal, the blood oxygenation level dependent (BOLD) effect, when neural activity increases. With current techniques, this pattern of activation can be measured with resolution approaching 1 mm3 spatially and 1 s temporally. This review focuses on the physical basis of the BOLD effect, the imaging methods used to measure it, the possible origins of the physiological effects that produce a mismatch of blood flow and oxygen metabolism during neural activation, and the mathematical models that have been developed to understand the measured signals. An overarching theme is the growing field of quantitative fMRI, in which other MRI methods are combined with BOLD methods and analyzed within a theoretical modeling framework to derive quantitative estimates of oxygen metabolism and other physiological variables. That goal is the current challenge for fMRI: to move fMRI from a mapping tool to a quantitative probe of brain physiology. PMID:24006360

  7. Neural dynamics of image representation in the primary visual cortex

    PubMed Central

    Yan, Xiaogang; Khambhati, Ankit; Liu, Lei; Lee, Tai Sing

    2013-01-01

    Horizontal connections in the primary visual cortex have been hypothesized to play a number of computational roles: association field for contour completion, surface interpolation, surround suppression, and saliency computation. Here, we argue that horizontal connections might also serve a critical role of computing the appropriate codes for image representation. That the early visual cortex or V1 explicitly represents the image we perceive has been a common assumption in computational theories of efficient coding (Olshausen and Field 1996), yet such a framework for understanding the circuitry in V1 has not been seriously entertained in the neurophysiological community. In fact, a number of recent fMRI and neurophysiological studies cast doubt on the neural validity of such an isomorphic representation (Cornelissen et al. 2006, von der Heydt et al. 2003). In this study, we investigated, neurophysiologically, how V1 neurons respond to uniform color surfaces and show that spiking activities of neurons can be decomposed into three components: a bottom-up feedforward input, an articulation of color tuning and a contextual modulation signal that is inversely proportional to the distance away from the bounding contrast border. We demonstrate through computational simulations that the behaviors of a model for image representation are consistent with many aspects of our neural observations. We conclude that the hypothesis of isomorphic representation of images in V1 remains viable, and this hypothesis suggests an additional interpretation of the functional roles of horizontal connections in the primary visual cortex. PMID:22944076

  8. The physics of functional magnetic resonance imaging (fMRI).

    PubMed

    Buxton, Richard B

    2013-09-01

    Functional magnetic resonance imaging (fMRI) is a methodology for detecting dynamic patterns of activity in the working human brain. Although the initial discoveries that led to fMRI are only about 20 years old, this new field has revolutionized the study of brain function. The ability to detect changes in brain activity has a biophysical basis in the magnetic properties of deoxyhemoglobin, and a physiological basis in the way blood flow increases more than oxygen metabolism when local neural activity increases. These effects translate to a subtle increase in the local magnetic resonance signal, the blood oxygenation level dependent (BOLD) effect, when neural activity increases. With current techniques, this pattern of activation can be measured with resolution approaching 1 mm(3) spatially and 1 s temporally. This review focuses on the physical basis of the BOLD effect, the imaging methods used to measure it, the possible origins of the physiological effects that produce a mismatch of blood flow and oxygen metabolism during neural activation, and the mathematical models that have been developed to understand the measured signals. An overarching theme is the growing field of quantitative fMRI, in which other MRI methods are combined with BOLD methods and analyzed within a theoretical modeling framework to derive quantitative estimates of oxygen metabolism and other physiological variables. That goal is the current challenge for fMRI: to move fMRI from a mapping tool to a quantitative probe of brain physiology.

  9. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
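    The idea of learning a solver's error on a model problem and adding it back can be sketched in a few lines. To keep the sketch minimal and deterministic, a least-squares fit replaces the backpropagation-trained network of the paper; the structure (integrate coarsely, learn the per-step error, correct) is the same.

```python
import numpy as np

def euler_step(y, h):
    """One explicit Euler step for the model problem dy/dt = -y."""
    return y - h * y

def learn_correction(h, samples=50):
    """Fit the per-step error of Euler on the model problem.
    A least-squares fit of error ~ c*y stands in for the trained network."""
    ys = np.linspace(0.1, 2.0, samples)
    errors = ys * np.exp(-h) - euler_step(ys, h)    # exact minus numerical
    return np.linalg.lstsq(ys[:, None], errors, rcond=None)[0][0]

def corrected_step(y, h, c):
    """Cheap solver step plus the learned error-correction term."""
    return euler_step(y, h) + c * y
```

    On dy/dt = -y the learned correction makes the cheap Euler step exact up to rounding, while the uncorrected step drifts visibly over twenty steps.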

  10. Dynamic Pricing in Electronic Commerce Using Neural Network

    NASA Astrophysics Data System (ADS)

    Ghose, Tapu Kumar; Tran, Thomas T.

    In this paper, we propose an approach in which a feed-forward neural network is used to dynamically calculate a competitive price for a product in order to maximize the seller's revenue. The approach considers that, along with product price, other attributes such as product quality, delivery time, after-sales service, and seller's reputation contribute to consumers' purchase decisions. We show that once sellers, using their limited prior knowledge, set an initial price for a product, our model adjusts the price automatically with the help of the neural network so that the sellers' revenue is maximized.

  11. Compliance control with embedded neural elements

    NASA Technical Reports Server (NTRS)

    Venkataraman, S. T.; Gulati, S.

    1992-01-01

    The authors discuss a control approach that embeds the neural elements within a model-based compliant control architecture for robotic tasks that involve contact with unstructured environments. Compliance control experiments have been performed on actual robotics hardware to demonstrate the performance of contact control schemes with neural elements. System parameters were identified under the assumption that environment dynamics have a fixed nonlinear structure. A robotics research arm, placed in contact with a single degree-of-freedom electromechanical environment dynamics emulator, was commanded to move through a desired trajectory. The command was implemented by using a compliant control strategy.

  12. Motor Cortex Reorganization across the Lifespan

    ERIC Educational Resources Information Center

    Plowman, Emily K.; Kleim, Jeffrey A.

    2010-01-01

    The brain is a highly dynamic structure with the capacity for profound structural and functional change. Such neural plasticity has been well characterized within motor cortex and is believed to represent one of the neural mechanisms for acquiring and modifying motor behaviors. A number of behavioral and neural signals have been identified that…

  13. Closed-Loop and Activity-Guided Optogenetic Control

    PubMed Central

    Grosenick, Logan; Marshel, James H.; Deisseroth, Karl

    2016-01-01

    Advances in optical manipulation and observation of neural activity have set the stage for widespread implementation of closed-loop and activity-guided optical control of neural circuit dynamics. Closing the loop optogenetically (i.e., basing optogenetic stimulation on simultaneously observed dynamics in a principled way) is a powerful strategy for causal investigation of neural circuitry. In particular, observing and feeding back the effects of circuit interventions on physiologically relevant timescales is valuable for directly testing whether inferred models of dynamics, connectivity, and causation are accurate in vivo. Here we highlight technical and theoretical foundations as well as recent advances and opportunities in this area, and we review in detail the known caveats and limitations of optogenetic experimentation in the context of addressing these challenges with closed-loop optogenetic control in behaving animals. PMID:25856490

  14. Method and system for pattern analysis using a coarse-coded neural network

    NASA Technical Reports Server (NTRS)

    Spirkovska, Liljana (Inventor); Reid, Max B. (Inventor)

    1994-01-01

    A method and system for performing pattern analysis with a neural network by coarse-coding a pattern to be analyzed so as to form a plurality of sub-patterns collectively defined by data. Each of the sub-patterns comprises sets of pattern data. The neural network includes a plurality of fields, each field being associated with one of the sub-patterns so as to receive the sub-pattern data therefrom. Training and testing by the neural network then proceeds in the usual way, with one modification: the transfer function thresholds the value obtained from summing the weighted products of each field over all sub-patterns associated with each pattern being analyzed by the system.

  15. Neural network adaptive control of wing-rock motion of aircraft model mounted on three-degree-of-freedom dynamic rig in wind tunnel

    NASA Astrophysics Data System (ADS)

    Ignatyev, D. I.

    2018-06-01

    High-angle-of-attack aircraft dynamics are complicated by dangerous phenomena such as wing rock, stall, and spin. An autonomous, dynamically scaled aircraft model mounted in a three-degree-of-freedom (3DoF) dynamic rig is proposed for studying aircraft dynamics and prototyping control laws in a wind tunnel. The dynamics of the scaled aircraft model in the 3DoF manoeuvre rig in the wind tunnel are considered. Limit-cycle oscillations of the model are obtained at high angles of attack. A neural network (NN) adaptive control law suppressing the wing rock motion is designed. The wing rock suppression with the proposed control law is validated using nonlinear time-domain simulations.

  16. Bioinspired Nanocomplex for Spatiotemporal Imaging of Sequential mRNA Expression in Differentiating Neural Stem Cells

    PubMed Central

    2015-01-01

    Messenger RNA plays a pivotal role in regulating cellular activities. The expression dynamics of specific mRNA contains substantial information on the intracellular milieu. Unlike the imaging of stationary mRNAs, real-time intracellular imaging of the dynamics of mRNA expression is of great value for investigating mRNA biology and exploring specific cellular cascades. In addition to advanced imaging methods, timely extracellular stimulation is another key factor in regulating the mRNA expression repertoire. The integration of effective stimulation and imaging into a single robust system would significantly improve stimulation efficiency and imaging accuracy, producing fewer unwanted artifacts. In this study, we developed a multifunctional nanocomplex to enable self-activating and spatiotemporal imaging of the dynamics of mRNA sequential expression during the neural stem cell differentiation process. This nanocomplex showed improved enzymatic stability, fast recognition kinetics, and high specificity. With a mechanism regulated by endogenous cell machinery, this nanocomplex realized the successive stimulating motif release and the dynamic imaging of chronological mRNA expression during neural stem cell differentiation without the use of transgenetic manipulation. The dynamic imaging montage of mRNA expression ultimately facilitated genetic heterogeneity analysis. In vivo lateral ventricle injection of this nanocomplex enabled endogenous neural stem cell activation and labeling at their specific differentiation stages. This nanocomplex is highly amenable as an alternative tool to explore the dynamics of intricate mRNA activities in various physiological and pathological conditions. PMID:25494492

  17. Bioinspired nanocomplex for spatiotemporal imaging of sequential mRNA expression in differentiating neural stem cells.

    PubMed

    Wang, Zhe; Zhang, Ruili; Wang, Zhongliang; Wang, He-Fang; Wang, Yu; Zhao, Jun; Wang, Fu; Li, Weitao; Niu, Gang; Kiesewetter, Dale O; Chen, Xiaoyuan

    2014-12-23

    Messenger RNA plays a pivotal role in regulating cellular activities. The expression dynamics of a specific mRNA contain substantial information about the intracellular milieu. Unlike the imaging of stationary mRNAs, real-time intracellular imaging of the dynamics of mRNA expression is of great value for investigating mRNA biology and exploring specific cellular cascades. In addition to advanced imaging methods, timely extracellular stimulation is another key factor in regulating the mRNA expression repertoire. The integration of effective stimulation and imaging into a single robust system would significantly improve stimulation efficiency and imaging accuracy, producing fewer unwanted artifacts. In this study, we developed a multifunctional nanocomplex to enable self-activating and spatiotemporal imaging of the dynamics of sequential mRNA expression during the neural stem cell differentiation process. This nanocomplex showed improved enzymatic stability, fast recognition kinetics, and high specificity. With a mechanism regulated by endogenous cell machinery, this nanocomplex realized successive stimulating-motif release and dynamic imaging of chronological mRNA expression during neural stem cell differentiation without the use of transgenic manipulation. The dynamic imaging montage of mRNA expression ultimately facilitated genetic heterogeneity analysis. In vivo lateral ventricle injection of this nanocomplex enabled endogenous neural stem cell activation and labeling at their specific differentiation stages. This nanocomplex is well suited as an alternative tool to explore the dynamics of intricate mRNA activities in various physiological and pathological conditions.

  18. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning.

    PubMed

    Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang

    2018-01-01

    Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. Once good computational performance has been reached, these ongoing changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.

  19. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning

    PubMed Central

    Habenschuss, Stefan; Hsieh, Michael

    2018-01-01

    Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. Once good computational performance has been reached, these ongoing changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations. PMID:29696150

  20. Driving working memory with frequency-tuned noninvasive brain stimulation.

    PubMed

    Albouy, Philippe; Baillet, Sylvain; Zatorre, Robert J

    2018-04-29

    Frequency-tuned noninvasive brain stimulation is a recent approach in cognitive neuroscience that involves matching the frequency of transcranially applied electromagnetic fields to that of specific oscillatory components of the underlying neurophysiology. The objective of this method is to modulate ongoing/intrinsic brain oscillations, which correspond to rhythmic fluctuations of neural excitability, to causally change behavior. We review the impact of frequency-tuned noninvasive brain stimulation on the research field of human working memory. We argue that this is a powerful method to probe and understand the mechanisms of memory functions, targeting specifically task-related oscillatory dynamics, neuronal representations, and brain networks. We report the main behavioral and neurophysiological outcomes published to date, in particular, how functionally relevant oscillatory signatures in signal power and interregional connectivity yield causal changes of working memory abilities. We also present recent developments of the technique that aim to modulate cross-frequency coupling in polyrhythmic neural activity. Overall, the method has led to significant advances in our understanding of the mechanisms of systems neuroscience, and the role of brain oscillations in cognition and behavior. We also emphasize the translational impact of noninvasive brain stimulation techniques in the development of therapeutic approaches. © 2018 New York Academy of Sciences.

  1. Recent advances in understanding the role of the hypothalamic circuit during aggression

    PubMed Central

    Falkner, Annegret L.; Lin, Dayu

    2014-01-01

    The hypothalamus was first implicated in the classic “fight or flight” response nearly a century ago, and since then, many important strides have been made in understanding both the circuitry and the neural dynamics underlying the generation of these behaviors. In this review, we will focus on the role of the hypothalamus in aggression, paying particular attention to recent advances in the field that have allowed for functional identification of relevant hypothalamic subnuclei. Recent progress in this field has been aided by the development of new techniques for functional manipulation including optogenetics and pharmacogenetics, as well as advances in technology used for chronic in vivo recordings during complex social behaviors. We will examine the role of the hypothalamus through the complementary lenses of (1) loss of function studies, including pharmacology and pharmacogenetics; (2) gain of function studies, including specific comparisons between results from classic electrical stimulation studies and more recent work using optogenetics; and (3) neural activity, including both immediate early gene and awake-behaving recordings. Lastly, we will outline current approaches to identifying the precise role of the hypothalamus in promoting aggressive motivation and aggressive action. PMID:25309351

  2. Cortical representations of communication sounds.

    PubMed

    Heiser, Marc A; Cheung, Steven W

    2008-10-01

    This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.

  3. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    NASA Astrophysics Data System (ADS)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial intelligence based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short-Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.

  4. Stimulus Load and Oscillatory Activity in Higher Cortex

    PubMed Central

    Kornblith, Simon; Buschman, Timothy J.; Miller, Earl K.

    2016-01-01

    Exploring and exploiting a rich visual environment requires perceiving, attending, and remembering multiple objects simultaneously. Recent studies have suggested that this mental “juggling” of multiple objects may depend on oscillatory neural dynamics. We recorded local field potentials from the lateral intraparietal area, frontal eye fields, and lateral prefrontal cortex while monkeys maintained variable numbers of visual stimuli in working memory. Behavior suggested independent processing of stimuli in each hemifield. During stimulus presentation, higher-frequency power (50–100 Hz) increased with the number of stimuli (load) in the contralateral hemifield, whereas lower-frequency power (8–50 Hz) decreased with the total number of stimuli in both hemifields. During the memory delay, lower-frequency power increased with contralateral load. Load effects on higher frequencies during stimulus encoding and lower frequencies during the memory delay were stronger when neural activity also signaled the location of the stimuli. Like power, higher-frequency synchrony increased with load, but beta synchrony (16–30 Hz) showed the opposite effect, increasing when power decreased (stimulus presentation) and decreasing when power increased (memory delay). Our results suggest roles for lower-frequency oscillations in top-down processing and higher-frequency oscillations in bottom-up processing. PMID:26286916

  5. Neural basis for dynamic updating of object representation in visual working memory.

    PubMed

    Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun

    2010-02-15

    In the real world, objects have multiple features and change dynamically. Thus, object representations must satisfy dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation in dynamic updating of feature-bound objects, we identified a network during memory maintenance that was comprised of the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating object representation of dynamically moving objects, the inferior precentral sulcus closely cooperates with a so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.

  6. Spiking Neurons in a Hierarchical Self-Organizing Map Model Can Learn to Develop Spatial and Temporal Properties of Entorhinal Grid Cells and Hippocampal Place Cells

    PubMed Central

    Pilly, Praveen K.; Grossberg, Stephen

    2013-01-01

    Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enables them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips composed of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous adaptive robots capable of spatial navigation. PMID:23577130
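
    The operation each level of such a hierarchy repeats — winner-take-all matching plus learning that draws the winning unit toward its input — can be sketched in a minimal competitive-learning loop. This toy omits the neighborhood kernel, spiking dynamics, and everything specific to the GridPlaceMap model; all sizes and rates are illustrative:

```python
import numpy as np

# Winner-take-all core of a self-organizing map (no neighborhood kernel).
rng = np.random.default_rng(3)
n_units, dim = 16, 2
W = rng.uniform(0, 1, (n_units, dim))        # unit weight vectors

def som_step(x, lr=0.2):
    """Find the best-matching unit and move it toward the input."""
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    W[winner] += lr * (x - W[winner])
    return winner

# Inputs drawn from two clusters self-organize onto distinct units,
# i.e. the map learns to categorize frequent input co-occurrences.
clusters = [np.array([0.2, 0.2]), np.array([0.8, 0.8])]
for _ in range(500):
    c = clusters[rng.integers(2)]
    som_step(c + 0.05 * rng.standard_normal(2))

w0 = som_step(clusters[0])
w1 = som_step(clusters[1])
```

After training, the two cluster centers recruit different winning units whose weights sit near the respective centers.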

  7. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.
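
    The decaying forcing term is the key idea: it is added to the output unit's excitation, dominates early, and vanishes so that the final dynamics are those of the free network. A minimal single-unit sketch (all parameters and the free dynamics are made up for illustration, not taken from the report):

```python
import numpy as np

# Terminal teacher forcing on one leaky output unit. The forcing term
# beta(t) * (d - y) is added to the excitation; beta decays to zero, so
# the end-state dynamics are the unforced ones.
dt, T = 0.01, 10.0
d = 0.8                      # desired (teacher) output
y = 0.0                      # unit state
beta0, decay = 50.0, 1.0     # initial forcing gain and its decay rate
y_early = None

for k in range(int(T / dt)):
    beta = beta0 * np.exp(-decay * k * dt)        # forcing decays with time
    # free dynamics relax toward tanh(0.5), a stand-in for the network's
    # own drive; the forcing term pulls y toward the teacher value d
    y += dt * (-y + np.tanh(0.5) + beta * (d - y))
    if y_early is None and k * dt >= 0.5:
        y_early = y                               # strongly forced regime
```

Early on the unit tracks the teacher value; by the end the forcing gain is negligible and the unit settles at the free fixed point.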

  8. Dynamic Trial-by-Trial Recoding of Task-Set Representations in the Frontoparietal Cortex Mediates Behavioral Flexibility

    PubMed Central

    Qiao, Lei; Zhang, Lijie

    2017-01-01

    Cognitive flexibility forms the core of the extraordinary ability of humans to adapt, but the precise neural mechanisms underlying our ability to nimbly shift between task sets remain poorly understood. Recent functional magnetic resonance imaging (fMRI) studies employing multivoxel pattern analysis (MVPA) have shown that a currently relevant task set can be decoded from activity patterns in the frontoparietal cortex, but whether these regions support the dynamic transformation of task sets from trial to trial is not clear. Here, we combined a cued task-switching protocol with human (both sexes) fMRI, and harnessed representational similarity analysis (RSA) to facilitate a novel assessment of trial-by-trial changes in neural task-set representations. We first used MVPA to define task-sensitive frontoparietal and visual regions and found that neural task-set representations on switch trials are less stably encoded than on repeat trials. We then exploited RSA to show that the neural representational pattern dissimilarity across consecutive trials is greater for switch trials than for repeat trials, and that the degree of this pattern dissimilarity predicts behavior. Moreover, the overall neural pattern of representational dissimilarities followed from the assumption that repeating sets, compared with switching sets, results in stronger neural task representations. Finally, when moving from cue to target phase within a trial, pattern dissimilarities tracked the transformation from previous-trial task representations to the currently relevant set. These results provide neural evidence for the longstanding assumptions of an effortful task-set reconfiguration process hampered by task-set inertia, and they demonstrate that frontoparietal and stimulus processing regions support “dynamic adaptive coding,” flexibly representing changing task sets in a trial-by-trial fashion. 
SIGNIFICANCE STATEMENT Humans can fluently switch between different tasks, reflecting an ability to dynamically configure “task sets,” rule representations that link stimuli to appropriate responses. Recent studies show that neural signals in frontal and parietal brain regions can tell us which of two tasks a person is currently performing. However, it is not known whether these regions are also involved in dynamically reconfiguring task-set representations when switching between tasks. Here we measured human brain activity during task switching and tracked the similarity of neural task-set representations from trial to trial. We show that frontal and parietal brain regions flexibly recode changing task sets in a trial-by-trial fashion, and that task-set similarity over consecutive trials predicts behavior. PMID:28972126
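
    The trial-to-trial measure at the heart of this design — representational pattern dissimilarity across consecutive trials — is typically computed as one minus the Pearson correlation between multivoxel patterns. A synthetic sketch (random "voxel" patterns and noise levels are invented; repeat trials reuse a task template, switch trials jump to a different one):

```python
import numpy as np

# Correlation-distance dissimilarity between consecutive trial patterns,
# as used in RSA. Patterns here are synthetic 200-"voxel" vectors.
rng = np.random.default_rng(7)
v = 200
task_a, task_b = rng.standard_normal(v), rng.standard_normal(v)

def pattern(template):
    """A noisy single-trial instance of a task-set template."""
    return template + 0.5 * rng.standard_normal(v)

def dissimilarity(p, q):
    """1 - Pearson correlation: 0 = identical pattern, ~1 = unrelated."""
    return 1.0 - np.corrcoef(p, q)[0, 1]

repeat = dissimilarity(pattern(task_a), pattern(task_a))  # same task set
switch = dissimilarity(pattern(task_a), pattern(task_b))  # set changes
```

Consecutive repeat trials share a template and yield low dissimilarity; switch trials do not, reproducing the qualitative switch > repeat pattern the study reports.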

  9. Network evolution induced by asynchronous stimuli through spike-timing-dependent plasticity.

    PubMed

    Yuan, Wu-Jie; Zhou, Jian-Fang; Zhou, Changsong

    2013-01-01

    In sensory neural systems, external asynchronous stimuli play an important role in perceptual learning, associative memory and map development. However, the organization of structure and dynamics of neural networks induced by external asynchronous stimuli is not well understood. Spike-timing-dependent plasticity (STDP) is a typical synaptic plasticity that has been extensively found in the sensory systems and that has received much theoretical attention. This synaptic plasticity is highly sensitive to correlations between pre- and postsynaptic firings. Thus, STDP is expected to play an important role in response to external asynchronous stimuli, which can induce segregative pre- and postsynaptic firings. In this paper, we study the impact of external asynchronous stimuli on the organization of structure and dynamics of neural networks through STDP. We construct a two-dimensional spatial neural network model with local connectivity and sparseness, and use external currents to stimulate alternately on different spatial layers. The adopted external currents imposed alternately on spatial layers can be regarded here as external asynchronous stimuli. Through extensive numerical simulations, we focus on the effects of stimulus number and inter-stimulus timing on synaptic connecting weights and the property of propagation dynamics in the resulting network structure. Interestingly, the resulting feedforward structure induced by stimulus-dependent asynchronous firings and its propagation dynamics both reflect the underlying properties of STDP. The results imply a possible important role of STDP in generating feedforward structure and collective propagation activity required for experience-dependent map plasticity in developing in vivo sensory pathways and cortices. The relevance of the results to cue-triggered recall of learned temporal sequences, an important cognitive function, is briefly discussed as well. Furthermore, this finding suggests a potential application for examining STDP by measuring neural population activity in a cultured neural network.
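
    The sensitivity of STDP to the sign of the pre/post spike-time difference is what lets asynchronous stimulation carve out feedforward structure. The standard pair-based kernel can be sketched as follows (textbook form with illustrative amplitudes and time constant — the paper's network model is far richer):

```python
import numpy as np

# Pair-based STDP kernel: potentiation when the presynaptic spike leads
# the postsynaptic one, depression when it lags.
def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)    # pre before post: LTP
    return -a_minus * np.exp(dt_ms / tau_ms)       # post before pre: LTD

# A synapse repeatedly driven with pre-leading-post pairings strengthens,
# which is the mechanism that builds feedforward links between layers
# stimulated in sequence.
w = 0.5
for _ in range(100):
    w += stdp(+5.0)    # post fires 5 ms after pre on every pairing
```

Reversing the pairing order (negative spike-time differences) would instead weaken the synapse, pruning the anti-causal direction.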

  10. Fuzzy and neural control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1992-01-01

    Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.

  11. Neural Cross-Frequency Coupling: Connecting Architectures, Mechanisms, and Functions.

    PubMed

    Hyafil, Alexandre; Giraud, Anne-Lise; Fontolan, Lorenzo; Gutkin, Boris

    2015-11-01

    Neural oscillations are ubiquitously observed in the mammalian brain, but it has proven difficult to tie oscillatory patterns to specific cognitive operations. Notably, the coupling between neural oscillations at different timescales has recently received much attention, both from experimentalists and theoreticians. We review the mechanisms underlying various forms of this cross-frequency coupling. We show that different types of neural oscillators and cross-frequency interactions yield distinct signatures in neural dynamics. Finally, we associate these mechanisms with several putative functions of cross-frequency coupling, including neural representations of multiple environmental items, communication over distant areas, internal clocking of neural processes, and modulation of neural processing based on temporal predictions. Copyright © 2015 Elsevier Ltd. All rights reserved.
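
    The most studied form of cross-frequency coupling, phase-amplitude coupling, can be demonstrated on a synthetic signal: a fast oscillation whose amplitude follows the phase of a slow one. The sketch below quantifies coupling by correlating the fast-band envelope with the slow rhythm — a simplified stand-in for standard PAC indices, with all frequencies and depths invented:

```python
import numpy as np

# Synthetic theta-gamma-style coupling: 60 Hz oscillation whose
# amplitude is modulated by a 6 Hz rhythm.
fs, T = 1000.0, 10.0
t = np.arange(0, T, 1 / fs)
slow_phase = 2 * np.pi * 6 * t
fast = (1 + 0.8 * np.cos(slow_phase)) * np.sin(2 * np.pi * 60 * t)

def hilbert_env(x):
    """Amplitude envelope via the FFT-based analytic signal (even n)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0      # double positive frequencies
    h[len(x) // 2] = 1.0        # keep Nyquist
    return np.abs(np.fft.ifft(X * h))

env = hilbert_env(fast)
# Simple coupling index: correlation of fast envelope with slow rhythm.
pac = np.corrcoef(env, np.cos(slow_phase))[0, 1]
```

For uncoupled signals this index hovers near zero; here the envelope is locked to the slow phase, so it approaches one.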

  12. Vibration control of uncertain multiple launch rocket system using radial basis function neural network

    NASA Astrophysics Data System (ADS)

    Li, Bo; Rui, Xiaoting

    2018-01-01

    The poor dispersion characteristics of rockets caused by the vibration of the Multiple Launch Rocket System (MLRS) have restricted MLRS development for decades. Vibration control is a key technique for improving the dispersion characteristics of rockets. For a mechanical system such as the MLRS, the major difficulty in designing a control strategy that achieves the desired vibration control performance is guaranteeing the robustness and stability of the control system in the presence of uncertainties and nonlinearities. To address this problem, a computed torque controller integrated with a radial basis function neural network is proposed to achieve high-precision vibration control for the MLRS. In this paper, the vibration response of a computed-torque-controlled MLRS is described. The azimuth and elevation mechanisms of the MLRS are driven by permanent magnet synchronous motors and assumed to be rigid. First, the dynamic model of the motor-mechanism coupling system is established using the Lagrange method and field-oriented control theory. Then, in order to deal with the nonlinearities, a computed torque controller is designed to control the vibration of the MLRS when it is firing a salvo of rockets. Furthermore, to compensate for the lumped uncertainty due to parametric variations and unmodeled dynamics in the design of the computed torque controller, a radial basis function neural network estimator is developed to adapt to the uncertainty, based on Lyapunov stability theory. Finally, the simulation results demonstrate the effectiveness of the proposed control system and show that the proposed controller is robust with regard to the uncertainty.
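
    The role of the RBF network here is to approximate the lumped uncertainty term the computed torque controller cannot model. Its representational capacity can be sketched with a batch least-squares fit (the paper instead adapts the weights online via a Lyapunov-derived law; centers, widths, and the target "uncertainty" below are illustrative):

```python
import numpy as np

# Gaussian RBF approximator of an unknown scalar "uncertainty" function,
# standing in for the estimator that compensates unmodeled dynamics.
centers = np.linspace(-2, 2, 9)     # RBF centers over the operating range
width = 0.5                         # common Gaussian width

def rbf_features(x):
    """Feature matrix of Gaussian responses, one column per center."""
    return np.exp(-((x[:, None] - centers) ** 2) / (2 * width ** 2))

# Stand-in for the lumped uncertainty (parametric error + unmodeled terms).
x_train = np.linspace(-2, 2, 81)
f_train = 0.3 * np.sin(2 * x_train) + 0.1 * x_train

# Batch least-squares weights (online adaptation in the actual design).
W, *_ = np.linalg.lstsq(rbf_features(x_train), f_train, rcond=None)
f_hat = rbf_features(x_train) @ W
err = np.max(np.abs(f_hat - f_train))
```

Nine basis functions suffice to reproduce this smooth uncertainty profile closely, which is why the controller can cancel it feedforward once the weights converge.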

  13. The consequences of neural degeneration regarding optimal cochlear implant position in scala tympani: a model approach.

    PubMed

    Briaire, Jeroen J; Frijns, Johan H M

    2006-04-01

    Cochlear implant research endeavors to optimize the spatial selectivity, threshold and dynamic range with the objective of improving the speech perception performance of the implant user. One of the ways to achieve some of these goals is by electrode design. New cochlear implant electrode designs strive to bring the electrode contacts into close proximity to the nerve fibers in the modiolus: this is done by placing the contacts on the medial side of the array and positioning the implant against the medial wall of scala tympani. The question remains whether this is the optimal position for a cochlea with intact neural fibers and, if so, whether it is also true for a cochlea with degenerated neural fibers. In this study a computational model of the implanted human cochlea is used to investigate the optimal position of the array with respect to threshold, dynamic range and spatial selectivity for a cochlea with intact nerve fibers and for degenerated nerve fibers. In addition, the model is used to evaluate the predictive value of eCAP measurements for obtaining peri-operative information on the neural status. The model predicts improved threshold, dynamic range and spatial selectivity for the peri-modiolar position at the basal end of the cochlea, with minimal influence of neural degeneration. At the apical end of the array (1.5 cochlear turns), the dynamic range and the spatial selectivity are limited due to the occurrence of cross-turn stimulation, with the exception of the condition without neural degeneration and with the electrode array along the lateral wall of scala tympani. The eCAP simulations indicate that a large P0 peak occurs before the N1P1 complex when the fibers are not degenerated. The absence of this peak might be used as an indicator for neural degeneration.

  14. A neural coding scheme reproducing foraging trajectories

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-12-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two-dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance-based neuronal model showing WLC and through the extraction of Lévy-flight-inducing fractals from recordings of rat hippocampus during open-field foraging. Further insights are gained by analyzing mouse motor cortex neurons and non-motor cell signals. The proposed mechanism provides a plausible explanation for the neurodynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates a previously unknown way to encode information in neuronal temporal series.
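
    For reference, the superdiffusive trajectories in question can be generated with the standard textbook construction — heavy-tailed step lengths with uniformly random headings. This is the generic Lévy-like walk, not the WLC-derived encoding mechanism the paper proposes; the exponent and scales are illustrative:

```python
import numpy as np

# Two-dimensional Levy-like walk: Pareto-distributed step lengths with
# random headings. Tail exponent mu in (1, 3] gives superdiffusion.
rng = np.random.default_rng(42)
n = 10_000
mu = 2.0
steps = rng.pareto(mu - 1.0, n) + 1.0      # P(l > x) ~ x^{-(mu - 1)}, l >= 1
theta = rng.uniform(0, 2 * np.pi, n)       # isotropic headings
xy = np.cumsum(np.stack([steps * np.cos(theta),
                         steps * np.sin(theta)], axis=1), axis=0)
```

The resulting trajectory mixes dense local search with rare long relocations, the hallmark foraging signature the paper links back to neuronal dynamics.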

  15. Chaos and Correlated Avalanches in Excitatory Neural Networks with Synaptic Plasticity

    NASA Astrophysics Data System (ADS)

    Pittorino, Fabrizio; Ibáñez-Berganza, Miguel; di Volo, Matteo; Vezzani, Alessandro; Burioni, Raffaella

    2017-03-01

    A collective chaotic phase with power law scaling of activity events is observed in a disordered mean field network of purely excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity. The dynamical phase diagram exhibits two transitions from quasisynchronous and asynchronous regimes to the nontrivial, collective, bursty regime with avalanches. In the homogeneous case without disorder, the system synchronizes and the bursty behavior is reflected into a period doubling transition to chaos for a two dimensional discrete map. Numerical simulations show that the bursty chaotic phase with avalanches exhibits a spontaneous emergence of persistent time correlations and enhanced Kolmogorov complexity. Our analysis reveals a mechanism for the generation of irregular avalanches that emerges from the combination of disorder and deterministic underlying chaotic dynamics.
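
    The two model ingredients — leaky integrate-and-fire dynamics and short-term synaptic depression — can be illustrated on a single driven neuron. This toy (Euler integration, invented parameters, one neuron instead of a disordered network) only shows the basic interplay: each spike depletes synaptic resources, which throttles the drive and stretches subsequent interspike intervals:

```python
# Minimal leaky integrate-and-fire neuron whose input passes through a
# depressing synapse (toy parameters, not those of the paper).
dt, T = 0.1, 1000.0           # timestep and duration, ms
tau_m, v_th, v_reset = 20.0, 1.0, 0.0
tau_d, u_frac = 200.0, 0.2    # resource recovery time, usage per spike
I0 = 1.5                      # drive, in threshold units

v, x = 0.0, 1.0               # membrane potential, synaptic resources
spikes = []
for k in range(int(T / dt)):
    v += dt / tau_m * (-v + I0 * x)      # leaky integration of x-scaled drive
    x += dt * (1.0 - x) / tau_d          # resources recover toward 1
    if v >= v_th:                        # threshold crossing: emit spike
        spikes.append(k * dt)
        v = v_reset
        x *= (1.0 - u_frac)              # each spike depletes resources
```

In the full disordered network this activity-dependent self-limiting of the drive is part of what shapes the bursty avalanche regime.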

  16. Lifelong learning of human actions with deep neural network self-organization.

    PubMed

    Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan

    2017-12-01

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning also when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input avoiding catastrophic interference. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  17. Biologically inspired EM image alignment and neural reconstruction.

    PubMed

    Knowles-Barley, Seymour; Butcher, Nancy J; Meinertzhagen, Ian A; Armstrong, J Douglas

    2011-08-15

    Three-dimensional reconstruction of consecutive serial-section transmission electron microscopy (ssTEM) images of neural tissue currently requires many hours of manual tracing and annotation. Several computational techniques have already been applied to ssTEM images to facilitate 3D reconstruction and ease this burden. Here, we present an alternative computational approach for ssTEM image analysis. We have used biologically inspired receptive fields as a basis for a ridge detection algorithm to identify cell membranes, synaptic contacts and mitochondria. Detected line segments are used to improve alignment between consecutive images and we have joined small segments of membrane into cell surfaces using a dynamic programming algorithm similar to the Needleman-Wunsch and Smith-Waterman DNA sequence alignment procedures. A shortest path-based approach has been used to close edges and achieve image segmentation. Partial reconstructions were automatically generated and used as a basis for semi-automatic reconstruction of neural tissue. The accuracy of partial reconstructions was evaluated and 96% of membrane could be identified at the cost of 13% false positive detections. An open-source reference implementation is available in the Supplementary information. Contact: seymour.kb@ed.ac.uk; douglas.armstrong@ed.ac.uk. Supplementary data are available at Bioinformatics online.
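
    For readers unfamiliar with the sequence-alignment analogy, the dynamic programming recurrence behind Needleman-Wunsch (which the authors adapt to join membrane segments) can be sketched as follows. The scoring values are generic textbook choices, and the character sequences stand in for chains of membrane segments; this is not the paper's membrane-matching cost function.

```python
# Minimal Needleman-Wunsch global alignment score, illustrating the
# dynamic programming recurrence (textbook scores, not the paper's costs).

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of sequences a and b."""
    n, m = len(a), len(b)
    # score[i][j]: best score aligning the first i items of a with the first j of b
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap                 # leading gaps in b
    for j in range(1, m + 1):
        score[0][j] = j * gap                 # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            pair = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + pair,   # align a[i-1] with b[j-1]
                              score[i - 1][j] + gap,        # gap in b
                              score[i][j - 1] + gap)        # gap in a
    return score[n][m]
```

    Smith-Waterman (local alignment) differs mainly in clamping each cell at zero and reading the best score off the whole table; a shortest-path view of the same table corresponds to the edge-closing step the abstract mentions.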

  18. Ensemble Nonlinear Autoregressive Exogenous Artificial Neural Networks for Short-Term Wind Speed and Power Forecasting.

    PubMed

    Men, Zhongxian; Yee, Eugene; Lien, Fue-Sang; Yang, Zhiling; Liu, Yongqian

    2014-01-01

    Short-term wind speed and wind power forecasts (for a 72 h period) are obtained using a nonlinear autoregressive exogenous artificial neural network (ANN) methodology which incorporates either numerical weather prediction or high-resolution computational fluid dynamics wind field information as an exogenous input. An ensemble approach is used to combine the predictions from many candidate ANNs in order to provide improved forecasts for wind speed and power, along with the associated uncertainties in these forecasts. More specifically, the ensemble ANN is used to quantify the uncertainties arising from the network weight initialization and from the unknown structure of the ANN. All members forming the ensemble of neural networks were trained using an efficient particle swarm optimization algorithm. The results of the proposed methodology are validated using wind speed and wind power data obtained from an operational wind farm located in Northern China. The assessment demonstrates that this methodology for wind speed and power forecasting generally provides an improvement in predictive skills when compared to the practice of using an "optimal" weight vector from a single ANN while providing additional information in the form of prediction uncertainty bounds.
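
    The ensemble idea, many networks differing only in their random weight initialization, with the ensemble mean as the forecast and the spread as an uncertainty bound, can be sketched with a deliberately simplified one-layer model and synthetic data. These stand in for the paper's particle-swarm-trained NARX networks and real wind-farm measurements.

```python
import numpy as np

# Ensemble-of-initializations sketch: each member is the same model trained
# from a different random start; mean = forecast, spread = uncertainty.
# Synthetic linear data replaces the paper's NARX networks and wind data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # exogenous inputs (synthetic)
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=200)

def train_member(seed, epochs=200, lr=0.05):
    r = np.random.default_rng(seed)
    w = r.normal(size=3)                          # random initialization per member
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
        w -= lr * grad
    return w

members = [train_member(s) for s in range(10)]
preds = np.stack([X @ w for w in members])        # (members, samples)
forecast = preds.mean(axis=0)                     # ensemble forecast
uncertainty = preds.std(axis=0)                   # spread induced by initialization
```

    With a convex toy model the members converge and the spread shrinks; for the nonconvex ANNs of the paper, different initializations settle in different solutions, so the spread remains a meaningful uncertainty bound.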

  19. Ensemble Nonlinear Autoregressive Exogenous Artificial Neural Networks for Short-Term Wind Speed and Power Forecasting

    PubMed Central

    Lien, Fue-Sang; Yang, Zhiling; Liu, Yongqian

    2014-01-01

    Short-term wind speed and wind power forecasts (for a 72 h period) are obtained using a nonlinear autoregressive exogenous artificial neural network (ANN) methodology which incorporates either numerical weather prediction or high-resolution computational fluid dynamics wind field information as an exogenous input. An ensemble approach is used to combine the predictions from many candidate ANNs in order to provide improved forecasts for wind speed and power, along with the associated uncertainties in these forecasts. More specifically, the ensemble ANN is used to quantify the uncertainties arising from the network weight initialization and from the unknown structure of the ANN. All members forming the ensemble of neural networks were trained using an efficient particle swarm optimization algorithm. The results of the proposed methodology are validated using wind speed and wind power data obtained from an operational wind farm located in Northern China. The assessment demonstrates that this methodology for wind speed and power forecasting generally provides an improvement in predictive skills when compared to the practice of using an “optimal” weight vector from a single ANN while providing additional information in the form of prediction uncertainty bounds. PMID:27382627

  20. Optimizing a neural network for detection of moving vehicles in video

    NASA Astrophysics Data System (ADS)

    Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri

    2017-10-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.

  1. Attraction Basins as Gauges of Robustness against Boundary Conditions in Biological Complex Systems

    PubMed Central

    Demongeot, Jacques; Goles, Eric; Morvan, Michel; Noual, Mathilde; Sené, Sylvain

    2010-01-01

    One fundamental concept in the context of biological systems on which research has flourished in the past decade is that of the apparent robustness of these systems, i.e., their ability to resist perturbations or constraints induced by external or boundary elements such as electromagnetic fields acting on neural networks, micro-RNAs acting on genetic networks and even hormone flows acting both on neural and genetic networks. Recent studies have shown the importance of addressing the question of the environmental robustness of biological networks such as neural and genetic networks. In some cases, external regulatory elements can be given a relevant formal representation by assimilating them to or modeling them by boundary conditions. This article presents a generic mathematical approach to understand the influence of boundary elements on the dynamics of regulation networks, considering their attraction basins as gauges of their robustness. The application of this method on a real genetic regulation network will point out a mathematical explanation of a biological phenomenon which has only been observed experimentally until now, namely the necessity of the presence of gibberellin for the flower of the plant Arabidopsis thaliana to develop normally. PMID:20700525

  2. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions.

    PubMed

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-07-24

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150-200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300-350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions.

  3. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions

    PubMed Central

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-01-01

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708

  4. Adaptive Control Law Development for Failure Compensation Using Neural Networks on a NASA F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2005-01-01

    This viewgraph presentation covers the following topics: 1) Brief explanation of Generation II Flight Program; 2) Motivation for Neural Network Adaptive Systems; 3) Past/ Current/ Future IFCS programs; 4) Dynamic Inverse Controller with Explicit Model; 5) Types of Neural Networks Investigated; and 6) Brief example

  5. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    ERIC Educational Resources Information Center

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  6. A scale out approach towards neural induction of human induced pluripotent stem cells for neurodevelopmental toxicity studies.

    PubMed

    Miranda, Cláudia C; Fernandes, Tiago G; Pinto, Sandra N; Prieto, Manuel; Diogo, M Margarida; Cabral, Joaquim M S

    2018-05-21

    Stem cells' unique properties confer on them a multitude of potential applications in cellular therapy, disease modelling and drug screening. In particular, the ability to differentiate neural progenitors (NP) from human induced pluripotent stem cells (hiPSCs) using chemically-defined conditions provides an opportunity to create a simple and straightforward culture platform for application in these fields. Here, we demonstrated that hiPSCs are capable of undergoing neural commitment inside microwells, forming characteristic neural structures resembling neural rosettes and further give rise to glial and neuronal cells. Furthermore, this platform can be applied towards the study of the effect of neurotoxic molecules that impair normal embryonic development. As a proof of concept, the neural teratogenic potential of the antiepileptic drug valproic acid (VPA) was analyzed. It was verified that exposure to VPA, close to typical dosage values (0.3 to 0.75 mM), led to a prevalence of NP structures over neuronal differentiation, as confirmed by analysis of the expression of neural cell adhesion molecule, as well as neural rosette number and morphology assessment. The methodology proposed herein for the generation and neural differentiation of hiPSC aggregates can potentially complement current toxicity tests such as the humanized embryonic stem cell test for the detection of teratogenic compounds that can interfere with normal embryonic development. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Modeling of cortical signals using echo state networks

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Wang, Yongji; Huang, Jiangshuai

    2009-10-01

    Diverse modeling frameworks have been utilized with the ultimate goal of translating brain cortical signals into prediction of visible behavior. The inputs to these models are usually multidimensional neural recordings collected from relevant regions of a monkey's brain while the outputs are the associated behavior, which is typically the 2-D or 3-D hand position of a primate. Here our task is to set up a suitable model to infer movement trajectories from the neural signals collected simultaneously during the experiment. In this paper, we propose to use Echo State Networks (ESN) to map the neural firing activities into hand positions. ESN is a recently developed recurrent neural network (RNN) model. Besides the dynamic properties and short-term memory that other recurrent neural networks share, it has a special echo state property which endows it with the ability to model nonlinear dynamic systems powerfully. What distinguishes it most from traditional recurrent neural networks is its learning method. In this paper we train this network with a refined version of its typical training method and obtain a better model.
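
    A minimal echo state network illustrates the two properties the abstract relies on: a fixed random reservoir scaled to spectral radius below one (the echo state property) and training of the linear readout only. Dimensions, scalings, and the ridge readout below are generic choices, not the authors' refined method, and a lag-recall toy task stands in for the neural-signal-to-hand-position mapping.

```python
import numpy as np

# Minimal echo state network: fixed random reservoir, trained linear readout.
rng = np.random.default_rng(1)
n_in, n_res = 2, 100

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1: echo state property

def run_reservoir(U):
    """Drive the reservoir with input sequence U (T, n_in); return states (T, n_res)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)          # simple (non-leaky) update
        states.append(x.copy())
    return np.array(states)

# Toy task standing in for the neural-firing -> hand-position mapping:
# reproduce the previous input sample (tests the reservoir's short-term memory).
T = 500
U = rng.normal(size=(T, n_in))
Y = np.roll(U[:, 0], 1)
S = run_reservoir(U)

# Only the readout is trained, here by ridge regression -- the key ESN idea.
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)
pred = S @ W_out
```

    Because the recurrent weights are never trained, learning reduces to a single linear regression, which is what distinguishes ESNs from backpropagation-through-time training of conventional RNNs.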

  8. High-order tracking differentiator based adaptive neural control of a flexible air-breathing hypersonic vehicle subject to actuators constraints.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen

    2015-09-01

    In this paper, an adaptive neural controller is developed for a constrained flexible air-breathing hypersonic vehicle (FAHV) based on a high-order tracking differentiator (HTD). By utilizing a functional decomposition methodology, the dynamic model is reasonably decomposed into a velocity subsystem and an altitude subsystem. For the velocity subsystem, a dynamic inversion based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated in the process of model transformation, a novel neural altitude controller, much simpler than those derived from back-stepping, is designed based on the normal output-feedback form instead of the strict-feedback formulation. Based on a minimal-learning parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. Especially, a novel auxiliary system is explored to deal with the problem of control input constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuator constraints. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Optimal mapping of neural-network learning on message-passing multicomputers

    NASA Technical Reports Server (NTRS)

    Chu, Lon-Chan; Wah, Benjamin W.

    1992-01-01

    A minimization of learning-algorithm completion time is sought in the present optimal-mapping study of the learning process in multilayer feed-forward artificial neural networks (ANNs) for message-passing multicomputers. A novel approximation algorithm for mappings of this kind is derived from observations of the dominance of a parallel ANN algorithm over its communication time. Attention is given to both static and dynamic mapping schemes for systems with static and dynamic background workloads, as well as to experimental results obtained for simulated mappings on multicomputers with dynamic background workloads.

  10. Increasing Spontaneous Retinal Activity before Eye Opening Accelerates the Development of Geniculate Receptive Fields

    PubMed Central

    Davis, Zachary W.; Chapman, Barbara

    2015-01-01

    Visually evoked activity is necessary for the normal development of the visual system. However, little is known about the capacity for patterned spontaneous activity to drive the maturation of receptive fields before visual experience. Retinal waves provide instructive retinotopic information for the anatomical organization of the visual thalamus. To determine whether retinal waves also drive the maturation of functional responses, we increased the frequency of retinal waves pharmacologically in the ferret (Mustela putorius furo) during a period of retinogeniculate development before eye opening. The development of geniculate receptive fields following this increased activity was measured using single-unit electrophysiology. We found that increased retinal waves accelerate the developmental reduction of geniculate receptive field sizes. This reduction is due to a decrease in receptive field center size rather than an increase in inhibitory surround strength. This work reveals an instructive role for patterned spontaneous activity in guiding the functional development of neural circuits. SIGNIFICANCE STATEMENT Patterned spontaneous neural activity that occurs during development is known to be necessary for the proper formation of neural circuits. However, it is unknown whether the spontaneous activity alone is sufficient to drive the maturation of the functional properties of neurons. Our work demonstrates for the first time an acceleration in the maturation of neural function as a consequence of driving patterned spontaneous activity during development. This work has implications for our understanding of how neural circuits can be modified actively to improve function prematurely or to recover from injury with guided interventions of patterned neural activity. PMID:26511250

  11. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    PubMed

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. 
In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and may link these directly to computational properties.
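
    The generative side of the PLRNN state space model can be sketched directly from its defining equations: latent states evolve as z_t = A z_{t-1} + W φ(z_{t-1}) + h plus Gaussian noise, with φ the ReLU nonlinearity, and observations are a noisy linear readout. Parameter values below are illustrative, and the paper's EM-based estimation of A, W, h and the observation model from data is not shown.

```python
import numpy as np

# Generative sketch of a piecewise-linear RNN (PLRNN) state space model.
# Illustrative parameters only; the EM/Laplace estimation step is omitted.
rng = np.random.default_rng(2)
dz, dx, T = 3, 5, 200

A = 0.7 * np.eye(dz)                  # stable diagonal linear part
W = 0.05 * rng.normal(size=(dz, dz))  # weak coupling through the ReLU term
np.fill_diagonal(W, 0.0)              # off-diagonal coupling only
h = rng.normal(size=dz)               # constant drive
B = rng.normal(size=(dx, dz))         # observation matrix (e.g. to smoothed rates)

z = np.zeros(dz)
Z, X = [], []
for _ in range(T):
    z = A @ z + W @ np.maximum(z, 0.0) + h + 0.05 * rng.normal(size=dz)  # latent step
    Z.append(z.copy())
    X.append(B @ z + 0.1 * rng.normal(size=dx))                          # noisy readout
Z, X = np.array(Z), np.array(X)
```

    The piecewise-linear φ is what makes the latent dynamics analytically tractable region by region, which the paper exploits for its semi-analytical maximum-likelihood scheme.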

  12. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  13. Direct imaging of neural currents using ultra-low field magnetic resonance techniques

    DOEpatents

    Volegov, Petr L [Los Alamos, NM; Matlashov, Andrei N [Los Alamos, NM; Mosher, John C [Los Alamos, NM; Espy, Michelle A [Los Alamos, NM; Kraus, Jr., Robert H.

    2009-08-11

    Using resonant interactions to directly and tomographically image neural activity in the human brain using magnetic resonance imaging (MRI) techniques at ultra-low field (ULF), the present inventors have established an approach that is sensitive to magnetic field distributions local to the spin population in cortex at the Larmor frequency of the measurement field. Because the Larmor frequency can be readily manipulated (through varying B_m), one can also envision using ULF-DNI to image the frequency distribution of the local fields in cortex. Such information, taken together with simultaneous acquisition of MEG and ULF-NMR signals, enables non-invasive exploration of the correlation between local fields induced by neural activity in cortex and more 'distant' measures of brain activity such as MEG and EEG.

  14. An analog neural hardware implementation using charge-injection multipliers and neuron-specific gain control.

    PubMed

    Massengill, L W; Mundie, D B

    1992-01-01

    A neural network IC based on dynamic charge injection is described. The hardware design is space and power efficient, and achieves massive parallelism of analog inner products via charge-based multipliers and spatially distributed summing buses. Basic synaptic cells are constructed of exponential pulse-decay modulation (EPDM) dynamic injection multipliers operating sequentially on propagating signal vectors and locally stored analog weights. Individually adjustable gain controls on each neuron reduce the effects of limited weight dynamic range. A hardware simulator/trainer has been developed which incorporates the physical (nonideal) characteristics of actual circuit components into the training process, thus absorbing nonlinearities and parametric deviations into the macroscopic performance of the network. Results show that charge-based techniques may achieve a high degree of neural density and throughput using standard CMOS processes.

  15. At the interface: convergence of neural regeneration and neural prostheses for restoration of function.

    PubMed

    Grill, W M; McDonald, J W; Peckham, P H; Heetderks, W; Kocsis, J; Weinrich, M

    2001-01-01

    The rapid pace of recent advances in development and application of electrical stimulation of the nervous system and in neural regeneration has created opportunities to combine these two approaches to restoration of function. This paper relates the discussion on this topic from a workshop at the International Functional Electrical Stimulation Society. The goals of this workshop were to discuss the current state of interaction between the fields of neural regeneration and neural prostheses and to identify potential areas of future research that would have the greatest impact on achieving the common goal of restoring function after neurological damage. Identified areas include enhancement of axonal regeneration with applied electric fields, development of hybrid neural interfaces combining synthetic silicon and biologically derived elements, and investigation of the role of patterned neural activity in regulating various neuronal processes and neurorehabilitation. Increased communication and cooperation between the two communities and recognition by each field that the other has something to contribute to their efforts are needed to take advantage of these opportunities. In addition, creative grants combining the two approaches and more flexible funding mechanisms to support the convergence of their perspectives are necessary to achieve common objectives.

  16. Neural control of visual search by frontal eye field: effects of unexpected target displacement on visual selection and saccade preparation.

    PubMed

    Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G

    2009-05-01

    The dynamics of visual selection and saccade preparation by the frontal eye field was investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process, the target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within the target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation.

  17. Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech.

    PubMed

    Khalighinejad, Bahar; Cruzatto da Silva, Guilherme; Mesgarani, Nima

    2017-02-22

    Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. 
These findings provide compelling new evidence for dynamic processing of speech sounds in the auditory pathway. Copyright © 2017 Khalighinejad et al.

  18. Fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks.

    PubMed

    Li, Hongfei; Li, Chuandong; Huang, Tingwen; Zhang, Wanli

    2018-02-01

    This article is concerned with the fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks via two different controllers. By using a novel constructive approach based on some comparison techniques for differential inequalities, an improved theorem of fixed-time stability for impulsive dynamical systems is established. In addition, based on the fixed-time stability theorem for impulsive dynamical systems, two different control protocols are designed to ensure the fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks, which include and extend earlier works. Finally, two simulation examples are provided to illustrate the validity of the proposed theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
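
    As background on the notion used above: fixed-time stability strengthens finite-time stability by requiring the settling time T(x_0) to be bounded uniformly over all initial conditions. The standard Lyapunov sufficient condition (Polyakov's criterion, quoted here as general background rather than as this paper's comparison-based theorem for impulsive systems) reads:

```latex
\dot V(x) \le -a\,V(x)^{p} - b\,V(x)^{q},
\qquad a,b > 0,\quad 0 < p < 1 < q
\quad\Longrightarrow\quad
T(x_0) \le \frac{1}{a(1-p)} + \frac{1}{b(q-1)} .
```

    The exponent p < 1 term dominates near the origin (finite-time convergence) while the q > 1 term dominates far from it, which is what makes the settling-time bound independent of x_0.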

  19. Dynamic neural networking as a basis for plasticity in the control of heart rate.

    PubMed

    Kember, G; Armour, J A; Zamir, M

    2013-01-21

    A model is proposed in which the relationships between individual neurons within a neural network change dynamically, providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking permits an interplay among the three populations of neurons that can alter the order of control of heart rate among them. This interplay among the three levels of control allows different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions. As such, it has significant clinical implications, because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Estimating wetland methane emissions from the northern high latitudes from 1990 to 2009 using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zhu, Xudong; Zhuang, Qianlai; Qin, Zhangcai; Glagolev, Mikhail; Song, Lulu

    2013-04-01

    Methane (CH4) emissions from wetland ecosystems in northern high latitudes provide a potentially positive feedback to global climate warming. Large uncertainties still remain in estimating wetland CH4 emissions at regional scales. Here we develop a statistical model of CH4 emissions using an artificial neural network (ANN) approach and field observations of CH4 fluxes. Six explanatory variables (air temperature, precipitation, water table depth, soil organic carbon, soil total porosity, and soil pH) are included in the development of the ANN models, which are then extrapolated to the northern high latitudes to estimate monthly CH4 emissions from 1990 to 2009. We estimate that the annual wetland CH4 source from the northern high latitudes (north of 45°N) is 48.7 Tg CH4 yr^-1 (1 Tg = 10^12 g), with an uncertainty range of 44.0-53.7 Tg CH4 yr^-1. The estimated wetland CH4 emissions show a large spatial variability over the northern high latitudes, due to variations in hydrology, climate, and soil conditions. Significant interannual and seasonal variations of wetland CH4 emissions exist in the past two decades, and the emissions in this period are most sensitive to variations in water table position. To improve future assessment of wetland CH4 dynamics in this region, research priorities should be directed to better characterizing hydrological processes of wetlands, including temporal dynamics of water table position and spatial dynamics of wetland areas.
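
    As a rough illustration of the ANN regression approach, the sketch below fits a one-hidden-layer network to synthetic data standing in for the six explanatory variables (this is not the authors' trained model; all data and dimensions here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the six explanatory variables (air temperature,
# precipitation, water table depth, soil organic carbon, porosity, pH),
# standardized; the target is an arbitrary smooth function of them.
X = rng.standard_normal((500, 6))
y = np.tanh(X @ rng.standard_normal(6)) + 0.1 * rng.standard_normal(500)

# One-hidden-layer network trained by full-batch gradient descent on MSE.
W1 = 0.5 * rng.standard_normal((6, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8);      b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred0 = forward(X)
mse0 = np.mean((pred0 - y) ** 2)        # error of the untrained network

lr = 0.05
for _ in range(2000):
    H, pred = forward(X)
    err = pred - y
    gW2 = H.T @ err / len(y); gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / len(y);  gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
assert np.mean((pred - y) ** 2) < mse0 / 2
```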

  1. Adaptive Neural Output-Feedback Control for a Class of Nonlower Triangular Nonlinear Systems With Unmodeled Dynamics.

    PubMed

    Wang, Huanqing; Liu, Peter Xiaoping; Li, Shuai; Wang, Ding

    2017-08-29

    This paper presents the development of an adaptive neural controller for a class of nonlinear systems with unmodeled dynamics and unmeasured states. An observer is designed to estimate the system states. The structure consistency of virtual control signals and the variable partition technique are combined to overcome the difficulties arising from the nonlower triangular form. An adaptive neural output-feedback controller is developed based on the backstepping technique and the universal approximation property of radial basis function (RBF) neural networks. By using Lyapunov stability analysis, semiglobal uniform ultimate boundedness of all signals within the closed-loop system is guaranteed. The simulation results show that the controlled system converges quickly and that all signals are bounded. This paper is novel in at least two aspects: 1) an output-feedback control strategy is developed for a class of nonlower triangular nonlinear systems with unmodeled dynamics, and 2) the nonlinear disturbances and their bounds are functions of all states, which is a more general form than in existing results.

  2. Approximately adaptive neural cooperative control for nonlinear multiagent systems with performance guarantee

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Yang, Tianyu; Staskevich, Gennady; Abbe, Brian

    2017-04-01

    This paper studies the cooperative control problem for a class of multiagent dynamical systems with partially unknown nonlinear system dynamics. In particular, the control objective is to solve the state consensus problem for multiagent systems based on the minimisation of certain cost functions for individual agents. Under the assumption that there exist admissible cooperative controls for such class of multiagent systems, the formulated problem is solved through finding the optimal cooperative control using the approximate dynamic programming and reinforcement learning approach. With the aid of neural network parameterisation and online adaptive learning, our method renders a practically implementable approximately adaptive neural cooperative control for multiagent systems. Specifically, based on the Bellman's principle of optimality, the Hamilton-Jacobi-Bellman (HJB) equation for multiagent systems is first derived. We then propose an approximately adaptive policy iteration algorithm for multiagent cooperative control based on neural network approximation of the value functions. The convergence of the proposed algorithm is rigorously proved using the contraction mapping method. The simulation results are included to validate the effectiveness of the proposed algorithm.
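
    The policy-iteration idea — alternating policy evaluation with greedy improvement on a parameterized value function — can be sketched on the simplest possible case, a scalar linear-quadratic problem with a known Riccati fixed point (a toy stand-in for the multiagent HJB setting, not the paper's algorithm):

```python
# Scalar discrete-time LQR solved by policy iteration: x+ = a*x + b*u,
# cost sum(q*x^2 + r*u^2), linear policy u = -k*x, quadratic value P*x^2.
a, b, q, r = 1.2, 1.0, 1.0, 1.0

k = 1.0                      # initial stabilizing gain: |a - b*k| = 0.2 < 1
for _ in range(20):
    # Policy evaluation: P = q + r*k^2 + (a - b*k)^2 * P
    P = (q + r * k ** 2) / (1 - (a - b * k) ** 2)
    # Policy improvement (greedy with respect to the evaluated value)
    k = a * b * P / (r + b ** 2 * P)

# The fixed point satisfies the discrete algebraic Riccati equation.
dare = q + a ** 2 * P - (a * b * P) ** 2 / (r + b ** 2 * P)
assert abs(P - dare) < 1e-8
assert abs(a - b * k) < 1    # final policy is stabilizing
```

    Neural-network value approximation, as in the paper, replaces the closed-form quadratic value here with a learned parameterization; the evaluate/improve loop is the same.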

  3. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using Lyapunov stability criterion.

    PubMed

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

    In this paper, adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as the controller. In example 4, the FCRNN is additionally investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, and the results show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
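
    A minimal sketch of the diagonal-recurrence idea follows (illustrative forward pass only, with made-up dimensions; the paper's Lyapunov-based update rules are not reproduced here):

```python
import numpy as np

class DiagonalRNN:
    """Hidden layer with self-recurrent neurons only: each hidden unit
    feeds back its own previous activation through a scalar weight, so
    the recurrent weight matrix is diagonal (cheaper than an FCRNN while
    still able to capture plant dynamics)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.Wi = 0.5 * rng.standard_normal((n_hidden, n_in))
        self.wd = 0.2 * rng.standard_normal(n_hidden)  # diagonal recurrence
        self.Wo = 0.5 * rng.standard_normal(n_hidden)
        self.h = np.zeros(n_hidden)

    def step(self, x):
        self.h = np.tanh(self.Wi @ x + self.wd * self.h)
        return self.Wo @ self.h

net = DiagonalRNN(n_in=2, n_hidden=5)
ys = [net.step(np.array([np.sin(0.1 * t), 1.0])) for t in range(50)]
assert len(ys) == 50 and all(np.isfinite(v) for v in ys)
```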

  4. A Novel Experimental and Analytical Approach to the Multimodal Neural Decoding of Intent During Social Interaction in Freely-behaving Human Infants.

    PubMed

    Cruz-Garza, Jesus G; Hernandez, Zachery R; Tse, Teresa; Caducoy, Eunice; Abibullaev, Berdakh; Contreras-Vidal, Jose L

    2015-10-04

    Understanding typical and atypical development remains one of the fundamental questions in developmental human neuroscience. Traditionally, experimental paradigms and analysis tools have been limited to constrained laboratory tasks and contexts, due to technical limitations imposed by the available measuring and analysis techniques and the age of the subjects. These constraints severely restrict the study of developmental neural dynamics and the associated neural networks engaged in cognition, perception and action in infants performing "in action and in context". This protocol presents a novel approach to study infants and young children as they freely organize their own behavior, and its consequences, in a complex, partly unpredictable and highly dynamic environment. The proposed methodology integrates synchronized high-density active scalp electroencephalography (EEG), inertial measurement units (IMUs), video recording and behavioral analysis to capture brain activity and movement non-invasively in freely-behaving infants. This setup allows for the study of neural network dynamics in the developing brain, in action and context, as these networks are recruited during goal-oriented, exploration and social interaction tasks.

  5. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with hidden Markov models (HMMs) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, multilayer perceptron (MLP), and recurrent neural network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data, while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs, given sufficient training data.
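
    The simulation setup described — occupancy generated as an alternating renewal process, then predicted from learned state statistics — can be sketched as follows (a first-order Markov predictor with made-up holding times stands in for the HMM/MLP/RNN predictors compared in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Alternating renewal occupancy: exponential busy/idle holding times,
# sampled at unit intervals (memoryless, so effectively a discretized
# two-state Markov chain). State 1 = busy, 0 = idle.
def simulate(n, mean_busy=5.0, mean_idle=2.0):
    states, s, t_left = [], 0, rng.exponential(mean_idle)
    for _ in range(n):
        states.append(s)
        t_left -= 1.0
        if t_left <= 0:
            s = 1 - s
            t_left = rng.exponential(mean_busy if s else mean_idle)
    return np.array(states)

train, test = simulate(5000), simulate(2000)

# Estimate P(next state | current state) from training data and predict
# the likelier successor of each state.
counts = np.ones((2, 2))           # Laplace smoothing
for s0, s1 in zip(train[:-1], train[1:]):
    counts[s0, s1] += 1
predict = counts.argmax(axis=1)    # most likely successor of each state

acc = np.mean(predict[test[:-1]] == test[1:])
assert acc > 0.6                   # well above 50% chance level
```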

  6. Structure of receptive fields in a computational model of area 3b of primary sensory cortex.

    PubMed

    Detorakis, Georgios Is; Rougier, Nicolas P

    2014-01-01

    In a previous work, we introduced a computational model of area 3b which is built upon neural field theory and receives input from a simplified model of the index distal finger pad populated by a random set of touch receptors (Merkel cells). This model has been shown to be able to self-organize following random stimulation of the finger pad model and to cope, to some extent, with cortical or skin lesions. The main hypothesis of the model is that learning of skin representations occurs at the thalamo-cortical level, while cortico-cortical connections serve a stereotyped competition mechanism that shapes the receptive fields. To further assess this hypothesis and the validity of the model, we reproduced in this article the exact experimental protocol of DiCarlo et al. that has been used to examine the structure of receptive fields in area 3b of the primary somatosensory cortex. Using the same analysis toolset, the model yields consistent results, with most of the receptive fields containing a single region of excitation and one to several regions of inhibition. We then extended our study using a dynamic competition that deeply influences the formation of the receptive fields. We hypothesized this dynamic competition to correspond to some form of somatosensory attention that may help to precisely shape the receptive fields. To test this hypothesis, we designed a protocol where an arbitrary region of interest is delineated on the index distal finger pad and we either (1) instructed the model explicitly to attend to this region (simulating an attentional signal), (2) preferentially trained the model on this region, or (3) combined the two aforementioned protocols simultaneously. Results tend to confirm that dynamic competition leads to shrunken receptive fields, and that its joint interaction with intensive training promotes massive receptive field migration and shrinkage.
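
    The competition mechanism invoked here is typically modeled as a neural field with short-range excitation and longer-range inhibition. A minimal 1-D sketch (illustrative parameters, not the paper's model):

```python
import numpy as np

# 1-D neural field with difference-of-Gaussians lateral connectivity
# (short-range excitation, longer-range inhibition), Euler-integrated
# to its fixed point under a localized input.
n = 100
x = np.linspace(-1.0, 1.0, n)
d = np.abs(x[:, None] - x[None, :])
kernel = 1.5 * np.exp(-d ** 2 / 0.02) - 0.75 * np.exp(-d ** 2 / 0.2)

stim = np.exp(-(x - 0.3) ** 2 / 0.01)       # stimulus centered at x = 0.3
u = np.zeros(n)
dt, tau = 0.05, 1.0
for _ in range(400):
    f = np.maximum(u, 0.0)                   # rectified firing rate
    u += dt / tau * (-u + kernel @ f * (2.0 / n) + stim)

# Lateral competition sharpens activity into a bump at the stimulus.
assert abs(x[np.argmax(u)] - 0.3) < 0.05
```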

  7. Multi-layer neural networks for robot control

    NASA Technical Reports Server (NTRS)

    Pourboghrat, Farzad

    1989-01-01

    Two neural learning controller designs for manipulators are considered. The first design is based on a neural inverse-dynamics system. The second is the combination of the first one with a neural adaptive state feedback system. Both types of controllers enable the manipulator to perform any given task very well after a period of training and to do other untrained tasks satisfactorily. The second design also enables the manipulator to compensate for unpredictable perturbations.

  8. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  9. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    PubMed

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successfully reproduced some of the above-mentioned experimental results.
Therefore, our results suggest that a nonlinear, self-exciting system is a key element for qualitatively reproducing A1 population activity and to understand the underlying mechanisms. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Neural-adaptive control of single-master-multiple-slaves teleoperation for coordinated multiple mobile manipulators with time-varying communication delays and input uncertainties.

    PubMed

    Li, Zhijun; Su, Chun-Yi

    2013-09-01

    In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.

  11. Deep learning for computational chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of non-neural-network state-of-the-art models across disparate research topics, and deep neural network based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  12. Robust adaptive backstepping neural networks control for spacecraft rendezvous and docking with input saturation.

    PubMed

    Xia, Kewei; Huo, Wei

    2016-05-01

    This paper presents a robust adaptive neural network control strategy for spacecraft rendezvous and docking with coupled position and attitude dynamics under input saturation. The backstepping technique is applied to design a relative attitude controller and a relative position controller, respectively. The dynamics uncertainties are approximated by radial basis function neural networks (RBFNNs). A novel switching controller is constructed, consisting of an adaptive neural network controller dominating in its active region combined with an extra robust controller that prevents loss of stability when the system state leaves the neural active region, where the RBFNN approximation is no longer valid. An auxiliary signal is introduced to compensate for the input saturation with an anti-windup technique, and a command filter is employed to approximate the derivative of the virtual control in the backstepping procedure. Global uniform ultimate boundedness of the relative states is proved via Lyapunov theory. A simulation example demonstrates the effectiveness of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
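
    The role of the RBFNN — approximating an uncertain dynamics term inside its active region — can be illustrated in isolation (a least-squares fit with fixed centers and a made-up nonlinearity; the paper instead adapts the output weights online):

```python
import numpy as np

# Radial basis function network approximating an "uncertain dynamics"
# term f(x) = x*sin(x) from samples, via linear least squares on the
# output weights (centers and widths fixed on a grid).
x = np.linspace(-3.0, 3.0, 200)
f = x * np.sin(x)                      # stand-in unknown nonlinearity

centers = np.linspace(-3.0, 3.0, 15)
width = 0.5
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(Phi, f, rcond=None)
approx = Phi @ w
assert np.max(np.abs(approx - f)) < 0.1   # accurate inside the center grid
```

    Outside the span of the centers the approximation degrades quickly, which is precisely why the paper pairs the RBFNN with an extra robust controller outside the neural active region.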

  13. Robustness analysis of uncertain dynamical neural networks with multiple time delays.

    PubMed

    Senan, Sibel

    2015-10-01

    This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays, with respect to the class of slope-bounded activation functions and in the presence of uncertainties in the system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for this class of neural networks. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over previously reported corresponding results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Development and Flight Testing of a Neural Network Based Flight Control System on the NF-15B Aircraft

    NASA Technical Reports Server (NTRS)

    Bomben, Craig R.; Smolka, James W.; Bosworth, John T.; Williams-Hayes, Peggy S.; Burken, John J.; Larson, Richard R.; Buschbacher, Mark J.; Maliska, Heather A.

    2006-01-01

    The Intelligent Flight Control System (IFCS) project at the NASA Dryden Flight Research Center, Edwards AFB, CA, has been investigating the use of neural network based adaptive control on a unique NF-15B test aircraft. The IFCS neural network is a software processor that stores measured aircraft response information to dynamically alter flight control gains. In 2006, the neural network was engaged and allowed to learn in real time to dynamically alter the aircraft handling qualities characteristics in the presence of actual aerodynamic failure conditions injected into the aircraft through the flight control system. The use of neural network and similar adaptive technologies in the design of highly fault and damage tolerant flight control systems shows promise in making future aircraft far more survivable than current technology allows. This paper will present the results of the IFCS flight test program conducted at the NASA Dryden Flight Research Center in 2006, with emphasis on challenges encountered and lessons learned.

  15. Pattern recognition neural-net by spatial mapping of biology visual field

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Mori, Masahiko

    2000-05-01

    The method of spatial mapping found in the biological visual field is applied to artificial neural networks for pattern recognition. Through a coordinate transform known as the complex-logarithm mapping, followed by a Fourier transform, the input images are transformed into scale-, rotation-, and shift-invariant patterns, and then fed into a multilayer neural network for learning and recognition. The results of a computer simulation and an optical experimental system are described.
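
    The invariance argument can be illustrated in one dimension: sampling on a logarithmic axis converts scaling into a shift, and the FFT magnitude discards shifts (a toy sketch with a made-up test pattern, not the 2-D complex-logarithm mapping of the paper):

```python
import numpy as np

def descriptor(f, n=256):
    logt = np.linspace(-4.0, 4.0, n)        # logarithmic sampling axis
    return np.abs(np.fft.fft(f(np.exp(logt))))

f = lambda t: np.exp(-np.log(t) ** 2)       # a smooth test pattern
g = lambda t: f(2.0 * t)                    # the same pattern, scaled by 2

# Scaling became a shift on the log axis; |FFT| is shift-invariant,
# so the two descriptors agree (up to small windowing effects).
d1, d2 = descriptor(f), descriptor(g)
assert np.max(np.abs(d1 - d2)) < 1e-3 * np.max(d1)
```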

  16. Cortical and thalamic contributions to response dynamics across layers of the primary somatosensory cortex during tactile discrimination

    PubMed Central

    Pais-Vieira, Miguel; Kunicki, Carolina; Tseng, Po-He; Martin, Joel; Lebedev, Mikhail

    2015-01-01

    Tactile information processing in the rodent primary somatosensory cortex (S1) is layer specific and involves modulations from both thalamocortical and cortico-cortical loops. However, the extent to which these loops influence the dynamics of the primary somatosensory cortex while animals execute tactile discrimination remains largely unknown. Here, we describe the neural dynamics of S1 layers across the multiple epochs defining a tactile discrimination task. We observed that neuronal ensembles within different layers of the S1 cortex exhibited significantly distinct neurophysiological properties, which constantly changed across the behavioral states that defined the tactile discrimination task. Neural dynamics present in supragranular and granular layers generally matched the patterns observed in the ventral posterior medial nucleus of the thalamus (VPM), whereas the neural dynamics recorded from infragranular layers generally matched the patterns from the posterior nucleus of the thalamus (POM). Selective inactivation of contralateral S1 specifically switched infragranular neural dynamics from POM-like to those resembling VPM neurons. Meanwhile, ipsilateral M1 inactivation profoundly modulated the firing suppression observed in infragranular layers. This latter effect was counterbalanced by contralateral S1 block. Tactile stimulus encoding was layer specific and selectively affected by M1 or contralateral S1 inactivation. Lastly, causal information transfer occurred between all neurons in all S1 layers but was maximal from infragranular to the granular layer. These results suggest that tactile information processing in the S1 of awake behaving rodents is layer specific and state dependent and that its dynamics depend on the asynchronous convergence of modulations originating from ipsilateral M1 and contralateral S1. PMID:26180115

  17. Dynamic performance of accommodating intraocular lenses in a negative feedback control system: a simulation-based study.

    PubMed

    Schor, Clifton M; Bharadwaj, Shrikant R; Burns, Christopher D

    2007-07-01

    A dynamic model of ocular accommodation is used to simulate the stability and dynamic performance of accommodating intraocular lenses (A-IOLs), which replace the hardened natural ocular lens that is no longer able to change focus. Accommodation simulations of an older eye with A-IOL materials having the biomechanical properties of a younger eye illustrate overshoots and oscillations resulting from the decreased visco-elasticity of the A-IOL. Stable dynamics of an A-IOL are restored by adaptation of the phasic and tonic neural-control properties of accommodation. The simulations indicate that neural control must be recalibrated to avoid unstable dynamic accommodation with A-IOLs. An interactive web model of the A-IOL illustrating these properties is available at http://schorlab.berkeley.edu.

  18. Weakly connected neural nets

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1990-01-01

    A new neural network architecture is proposed based upon effects of non-Lipschitzian dynamics. The network is fully connected, but these connections are active only during vanishingly short time periods. The advantages of this architecture are discussed.

  19. Hopf bifurcation of an (n + 1) -neuron bidirectional associative memory neural network model with delays.

    PubMed

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.
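
    The qualitative effect of delay on oscillation onset can be sketched with the simplest delayed neural equation, a single neuron with delayed negative feedback (a scalar toy model with illustrative parameters, not the (n + 1)-neuron BAM network analyzed here):

```python
import numpy as np

# Scalar delayed negative feedback u'(t) = -u(t) + w*tanh(u(t - tau)).
# For w < -1 the equilibrium u = 0 loses stability in a Hopf bifurcation
# once the delay exceeds tau* = arccos(1/w) / sqrt(w^2 - 1).
def simulate(tau, w=-2.0, dt=0.01, T=200.0):
    n_hist = int(tau / dt)
    buf = list(0.1 * np.ones(n_hist + 1))   # constant initial history
    out = []
    for _ in range(int(T / dt)):
        u, u_del = buf[-1], buf[0]
        u = u + dt * (-u + w * np.tanh(u_del))
        buf.append(u); buf.pop(0)
        out.append(u)
    return np.array(out)

tau_crit = np.arccos(1 / -2.0) / np.sqrt(4.0 - 1.0)   # about 1.21
late = lambda v: v[-2000:]                            # last 20 time units
assert np.std(late(simulate(0.5 * tau_crit))) < 1e-3  # settles to rest
assert np.std(late(simulate(2.0 * tau_crit))) > 0.1   # sustained oscillation
```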

  20. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime.

    PubMed

    Kriener, Birgit; Helias, Moritz; Rotter, Stefan; Diesmann, Markus; Einevoll, Gaute T

    2013-01-01

    Pattern formation, i.e., the generation of an inhomogeneous spatial activity distribution in a dynamical system with translation-invariant structure, is a well-studied phenomenon in neuronal network dynamics, specifically in neural field models. These are population models that describe the spatio-temporal dynamics of large groups of neurons in terms of macroscopic variables such as population firing rates. Though neural field models are often deduced from and equipped with biophysically meaningful properties, a direct mapping to simulations of individual spiking neuron populations is rarely considered. Neurons have a distinct identity defined by their action on their postsynaptic targets. In its simplest form, they act either excitatorily or inhibitorily. When the distribution of neuron identities is assumed to be periodic, pattern formation can be observed, provided the coupling strength is supracritical, i.e., larger than a critical weight. We find that this critical weight depends strongly on the characteristics of the neuronal input, i.e., on whether neurons are mean- or fluctuation-driven, and different limits apply in linearizing the full nonlinear system to assess stability. In particular, if neurons are mean-driven, the linearization has a very simple form and becomes independent of both the fixed-point firing rate and the variance of the input current, while in the very strongly fluctuation-driven regime the fixed-point rate, as well as the input mean and variance, are important parameters in the determination of the critical weight. Interestingly, we demonstrate that even in "intermediate" regimes, when the system is technically fluctuation-driven, the simple linearization that neglects the variance of the input can yield the better prediction of the critical coupling strength. We moreover analyze the effects of structural randomness (rewiring individual synapses or redistributing weights), as well as of coarse-graining, on the formation of inhomogeneous activity patterns.
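In a linearized rate description of such a ring network, the critical weight can be read off from the Fourier spectrum of the coupling profile, since a circulant connectivity matrix has the DFT of its first row as its eigenvalues. A minimal sketch of that stability check (the coupling profile, gain, and leak values below are illustrative, not taken from the paper):

```python
import numpy as np

def critical_coupling(profile, gain=1.0, leak=1.0):
    # For a linearized rate model tau*dr/dt = -r + gain*w*W@r on a ring,
    # the eigenvalues of the circulant matrix W are the DFT of its first row;
    # the homogeneous state destabilizes when gain*w*max(Re eig) exceeds the leak.
    eig = np.fft.fft(profile)
    lam_max = eig.real.max()
    return leak / (gain * lam_max) if lam_max > 0 else np.inf

N = 128
x = 2 * np.pi * np.arange(N) / N
# cosine coupling profile: local excitation, broad inhibition (illustrative)
profile = (np.cos(x) - 0.2) / N

w_c = critical_coupling(profile)   # scaling weights above w_c yields pattern formation
```

The dominant Fourier mode of the profile sets both the threshold and the spatial wavelength of the emerging pattern.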

  1. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime

    PubMed Central

    Kriener, Birgit; Helias, Moritz; Rotter, Stefan; Diesmann, Markus; Einevoll, Gaute T.

    2014-01-01

    Pattern formation, i.e., the generation of an inhomogeneous spatial activity distribution in a dynamical system with translation-invariant structure, is a well-studied phenomenon in neuronal network dynamics, specifically in neural field models. These are population models that describe the spatio-temporal dynamics of large groups of neurons in terms of macroscopic variables such as population firing rates. Though neural field models are often deduced from and equipped with biophysically meaningful properties, a direct mapping to simulations of individual spiking neuron populations is rarely considered. Neurons have a distinct identity defined by their action on their postsynaptic targets. In its simplest form, they act either excitatorily or inhibitorily. When the distribution of neuron identities is assumed to be periodic, pattern formation can be observed, provided the coupling strength is supracritical, i.e., larger than a critical weight. We find that this critical weight depends strongly on the characteristics of the neuronal input, i.e., on whether neurons are mean- or fluctuation-driven, and different limits apply in linearizing the full nonlinear system to assess stability. In particular, if neurons are mean-driven, the linearization has a very simple form and becomes independent of both the fixed-point firing rate and the variance of the input current, while in the very strongly fluctuation-driven regime the fixed-point rate, as well as the input mean and variance, are important parameters in the determination of the critical weight. Interestingly, we demonstrate that even in “intermediate” regimes, when the system is technically fluctuation-driven, the simple linearization that neglects the variance of the input can yield the better prediction of the critical coupling strength. We moreover analyze the effects of structural randomness (rewiring individual synapses or redistributing weights), as well as of coarse-graining, on the formation of inhomogeneous activity patterns. PMID:24501591

  2. Brain Oscillations in Sport: Toward EEG Biomarkers of Performance.

    PubMed

    Cheron, Guy; Petit, Géraldine; Cheron, Julian; Leroy, Axelle; Cebolla, Anita; Cevallos, Carlos; Petieau, Mathieu; Hoellinger, Thomas; Zarka, David; Clarinval, Anne-Marie; Dan, Bernard

    2016-01-01

    Brain dynamics underlies top performance in sports. The search for neural biomarkers of performance remains a challenge in movement science and sport psychology. The non-invasive nature of high-density electroencephalography (EEG) recording has made it a most promising avenue for providing quantitative feedback to practitioners and coaches. Here, we review the current relevance of the main types of EEG oscillations in order to trace a perspective for future practical applications of EEG and event-related potentials (ERP) in sport. In this context, the hypotheses of unified brain rhythms and of continuity between wake and sleep states should provide a functional template for EEG biomarkers in sport. The oscillations in the thalamo-cortical and hippocampal circuitry, including the physiology of the place cells and the grid cells, provide a frame of reference for the analysis of delta, theta, beta, alpha (incl. mu), and gamma oscillations recorded in the space field of human performance. Based on recent neuronal models facilitating the distinction between the different dynamic regimes (selective gating and binding) in these different oscillations, we suggest an integrated approach that articulates the classical biomechanical factors (3D movements and EMG) with the high-density EEG and ERP signals, allowing finer mathematical analyses, such as microstates, coherency/directionality analysis and neural generators, to optimize sport performance.

  3. Brain Oscillations in Sport: Toward EEG Biomarkers of Performance

    PubMed Central

    Cheron, Guy; Petit, Géraldine; Cheron, Julian; Leroy, Axelle; Cebolla, Anita; Cevallos, Carlos; Petieau, Mathieu; Hoellinger, Thomas; Zarka, David; Clarinval, Anne-Marie; Dan, Bernard

    2016-01-01

    Brain dynamics underlies top performance in sports. The search for neural biomarkers of performance remains a challenge in movement science and sport psychology. The non-invasive nature of high-density electroencephalography (EEG) recording has made it a most promising avenue for providing quantitative feedback to practitioners and coaches. Here, we review the current relevance of the main types of EEG oscillations in order to trace a perspective for future practical applications of EEG and event-related potentials (ERP) in sport. In this context, the hypotheses of unified brain rhythms and of continuity between wake and sleep states should provide a functional template for EEG biomarkers in sport. The oscillations in the thalamo-cortical and hippocampal circuitry, including the physiology of the place cells and the grid cells, provide a frame of reference for the analysis of delta, theta, beta, alpha (incl. mu), and gamma oscillations recorded in the space field of human performance. Based on recent neuronal models facilitating the distinction between the different dynamic regimes (selective gating and binding) in these different oscillations, we suggest an integrated approach that articulates the classical biomechanical factors (3D movements and EMG) with the high-density EEG and ERP signals, allowing finer mathematical analyses, such as microstates, coherency/directionality analysis and neural generators, to optimize sport performance. PMID:26955362

  4. Statistical Frequency-Dependent Analysis of Trial-to-Trial Variability in Single Time Series by Recurrence Plots.

    PubMed

    Tošić, Tamara; Sellers, Kristin K; Fröhlich, Flavio; Fedotenkova, Mariia; Beim Graben, Peter; Hutt, Axel

    2015-01-01

    For decades, research in neuroscience has supported the hypothesis that brain dynamics exhibits recurrent metastable states connected by transients, which together encode fundamental neural information processing. To understand the system's dynamics it is important to detect such recurrence domains, but it is challenging to extract them from experimental neuroscience datasets due to the large trial-to-trial variability. The proposed methodology extracts recurrent metastable states in univariate time series by transforming datasets into their time-frequency representations and computing recurrence plots based on instantaneous spectral power values in various frequency bands. Additionally, a new statistical inference analysis compares different trial recurrence plots with corresponding surrogates to obtain statistically significant recurrent structures. This combination of methods is validated by applying it to two artificial datasets. In a final study of visually-evoked Local Field Potentials in partially anesthetized ferrets, the methodology is able to reveal recurrence structures of neural responses despite trial-to-trial variability. Focusing on different frequency bands, we find that δ-band activity is much less recurrent than α-band activity. Moreover, α-activity is sensitive to pre-stimulus activity, while δ-activity is much less so. This difference in recurrence structures in different frequency bands indicates diverse underlying information processing steps in the brain.
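The core construction can be illustrated in a few lines: compute instantaneous band power from short-time FFTs, then threshold pairwise power distances into a recurrence plot. Window size, band edges, threshold, and the toy signal below are illustrative; the paper's surrogate-based statistical inference is not reproduced here.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi, win=64, step=8):
    # Instantaneous spectral power in [f_lo, f_hi] Hz from short-time FFTs.
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    starts = range(0, len(x) - win + 1, step)
    return np.array([(np.abs(np.fft.rfft(x[s:s + win]))**2)[band].sum() for s in starts])

def recurrence_plot(p, eps):
    # R[i, j] = 1 where the power values at times i and j are within eps.
    return (np.abs(p[:, None] - p[None, :]) < eps).astype(int)

fs = 250.0
t = np.arange(0, 4, 1 / fs)
# toy signal whose alpha-band amplitude alternates between two metastable levels
x = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sign(np.sin(2 * np.pi * 0.5 * t)))
p = band_power(x, fs, 8, 14)
R = recurrence_plot(p, eps=0.5 * p.std())
```

Block structure along the diagonal of R then marks the metastable amplitude states of the band.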

  5. Statistical Frequency-Dependent Analysis of Trial-to-Trial Variability in Single Time Series by Recurrence Plots

    PubMed Central

    Tošić, Tamara; Sellers, Kristin K.; Fröhlich, Flavio; Fedotenkova, Mariia; beim Graben, Peter; Hutt, Axel

    2016-01-01

    For decades, research in neuroscience has supported the hypothesis that brain dynamics exhibits recurrent metastable states connected by transients, which together encode fundamental neural information processing. To understand the system's dynamics it is important to detect such recurrence domains, but it is challenging to extract them from experimental neuroscience datasets due to the large trial-to-trial variability. The proposed methodology extracts recurrent metastable states in univariate time series by transforming datasets into their time-frequency representations and computing recurrence plots based on instantaneous spectral power values in various frequency bands. Additionally, a new statistical inference analysis compares different trial recurrence plots with corresponding surrogates to obtain statistically significant recurrent structures. This combination of methods is validated by applying it to two artificial datasets. In a final study of visually-evoked Local Field Potentials in partially anesthetized ferrets, the methodology is able to reveal recurrence structures of neural responses despite trial-to-trial variability. Focusing on different frequency bands, we find that δ-band activity is much less recurrent than α-band activity. Moreover, α-activity is sensitive to pre-stimulus activity, while δ-activity is much less so. This difference in recurrence structures in different frequency bands indicates diverse underlying information processing steps in the brain. PMID:26834580

  6. Mechanisms of Long Non-Coding RNAs in the Assembly and Plasticity of Neural Circuitry.

    PubMed

    Wang, Andi; Wang, Junbao; Liu, Ying; Zhou, Yan

    2017-01-01

    The mechanisms underlying developmental processes and the functional dynamics of neural circuits are far from understood. Long non-coding RNAs (lncRNAs) have emerged as essential players in defining the identities of neural cells and in modulating neural activities. In this review, we summarize the latest advances concerning the roles and mechanisms of lncRNAs in the assembly, maintenance and plasticity of neural circuitry, as well as lncRNAs' implications in neurological disorders. We also discuss technical advances and challenges in studying the functions and mechanisms of lncRNAs in neural circuitry. Finally, we propose that lncRNA studies will advance our understanding of how neural circuits develop and function in physiological and disease conditions.

  7. Stimulation of neural differentiation in human bone marrow mesenchymal stem cells by extremely low-frequency electromagnetic fields incorporated with MNPs.

    PubMed

    Choi, Yun-Kyong; Lee, Dong Heon; Seo, Young-Kwon; Jung, Hyun; Park, Jung-Keug; Cho, Hyunjin

    2014-10-01

    Human bone marrow-derived mesenchymal stem cells (hBM-MSCs) have been investigated as a new cell-therapeutic solution due to their capacity to differentiate into neural-like cells. Extremely low-frequency electromagnetic field (ELF-EMF) therapy has emerged as a novel technique that uses a physical stimulus to differentiate hBM-MSCs and significantly enhance neuronal differentiation by affecting cellular and molecular reactions. Magnetic iron oxide (Fe3O4) nanoparticles (MNPs) have recently achieved widespread use in biomedical applications, and polyethylene glycol (PEG)-labeled nanoparticles are used to increase their circulation time, aqueous solubility, biocompatibility, and nonspecific cellular uptake, as well as to decrease immunogenicity. Many studies have used MNP-labeled cells for differentiation, but there have been no reports of MNP-labeled neural differentiation combined with EMFs. In this study, synthesized PEG-phospholipid-encapsulated magnetite (Fe3O4) nanoparticles were applied to hBM-MSCs to improve their intracellular uptake. Cells loaded with the PEGylated nanoparticles were exposed to 50 Hz EMFs to improve neural differentiation. First, we measured cell viability and intracellular iron content in hBM-MSCs after treatment with MNPs. Analyses were then conducted by RT-PCR and immunohistology, using neural cell type-specific genes and antibodies, after exposure to 50 Hz electromagnetic fields. These results suggest that electromagnetic fields enhance neural differentiation in hBM-MSCs incorporated with MNPs and would be an effective method for differentiating neural cells.

  8. Neural signal processing and closed-loop control algorithm design for an implanted neural recording and stimulation system.

    PubMed

    Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N

    2015-08-01

    A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate with behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods, then the LFP time-frequency analysis and the features derived from the two spike sorting methods. The first spike sorting method is a novel dictionary learning algorithm constructed in a Compressed Sensing (CS) framework, with a joint prediction scheme to determine the class of neural spikes; the second is a modified OSort algorithm implemented in a distributed system optimized for power efficiency. Sorted spikes and time-frequency analysis of LFP signals can then be used to generate derived features (including cross-frequency coupling and spike-field coupling). We show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current state as well as its historical values. The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state of the art in closed-loop neural control while also meeting the challenges of system power constraints and of concurrent development with ongoing scientific research that seeks to define brain network connectivity and neural network dynamics that vary at the individual patient level and over time.
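The detection stage that precedes either sorting method can be sketched with a standard robust-threshold detector. The MAD-based noise estimate and all parameter values here are common practice in spike sorting, not the paper's specific pipeline; the injected spike positions are illustrative.

```python
import numpy as np

def detect_spikes(x, thresh_sd=4.0, win=32):
    # Robust noise estimate via the median absolute deviation (a common choice),
    # then negative-going threshold crossings, then aligned snippet extraction
    # for a downstream sorter (e.g., dictionary learning or OSort).
    sigma = np.median(np.abs(x)) / 0.6745
    thr = -thresh_sd * sigma
    crossings = np.flatnonzero((x[1:] < thr) & (x[:-1] >= thr)) + 1
    snippets = np.array([x[c - win // 2:c + win // 2]
                         for c in crossings if win // 2 <= c < len(x) - win // 2])
    return crossings, snippets

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 30000)      # 1 s of unit-variance noise at 30 kHz
for c in (5000, 12000, 20000):           # inject three negative-going spikes
    trace[c:c + 5] -= 15.0
idx, snips = detect_spikes(trace)
```

The extracted snippets are what a sorter clusters into putative single units before feature derivation.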

  9. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems

    NASA Astrophysics Data System (ADS)

    Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert

    2017-08-01

    Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation and is thus subjected to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. 
Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.

  10. Rapid Postnatal Expansion of Neural Networks Occurs in an Environment of Altered Neurovascular and Neurometabolic Coupling.

    PubMed

    Kozberg, Mariel G; Ma, Ying; Shaik, Mohammed A; Kim, Sharon H; Hillman, Elizabeth M C

    2016-06-22

    In the adult brain, increases in neural activity lead to increases in local blood flow. However, many prior measurements of functional hemodynamics in the neonatal brain, including functional magnetic resonance imaging (fMRI) in human infants, have noted altered and even inverted hemodynamic responses to stimuli. Here, we demonstrate that localized neural activity in early postnatal mice does not evoke blood flow increases as in the adult brain, and elucidate the neural and metabolic correlates of these altered functional hemodynamics as a function of developmental age. Using wide-field GCaMP imaging, we visualize the development of neural responses to a somatosensory stimulus over the entire bilaterally exposed cortex. Neural responses are observed to progress from tightly localized, unilateral maps to bilateral responses as interhemispheric connectivity becomes established. Simultaneous hemodynamic imaging confirms that spatiotemporally coupled functional hyperemia is not present during these early stages of postnatal brain development, and develops gradually as cortical connectivity is established. Exploring the consequences of this lack of functional hyperemia, measurements of oxidative metabolism via flavoprotein fluorescence suggest that neural activity depletes local oxygen to below baseline levels at early developmental stages. Analysis of hemoglobin oxygenation dynamics at the same age confirms oxygen depletion for both stimulus-evoked and resting-state neural activity. This state of unmet metabolic demand during neural network development poses new questions about the mechanisms of neurovascular development and its role in both normal and abnormal brain development. These results also provide important insights for the interpretation of fMRI studies of the developing brain.
This work demonstrates that the postnatal development of neuronal connectivity is accompanied by development of the mechanisms that regulate local blood flow in response to neural activity. Novel in vivo imaging reveals that, in the developing mouse brain, strong and localized GCaMP neural responses to stimulus fail to evoke local blood flow increases, leading to a state in which oxygen levels become locally depleted. These results demonstrate that the development of cortical connectivity occurs in an environment of altered energy availability that itself may play a role in shaping normal brain development. These findings have important implications for understanding the pathophysiology of abnormal developmental trajectories, and for the interpretation of functional magnetic resonance imaging data acquired in the developing brain.

  11. A modular architecture for transparent computation in recurrent neural networks.

    PubMed

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, which we refer to as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments.
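Gödelization of this kind can be made concrete: a one-sided symbol sequence maps to a base-g expansion in [0, 1), and the shift on sequences becomes multiplication by g modulo 1. A minimal sketch (the alphabet and sequence are illustrative, not from the paper):

```python
from fractions import Fraction

def goedelize(seq, alphabet):
    # Map the one-sided symbol sequence s_1 s_2 ... to a point in [0, 1)
    # via the base-g expansion x = sum_k index(s_k) * g**(-k), g = |alphabet|.
    g = len(alphabet)
    idx = {a: i for i, a in enumerate(alphabet)}
    return sum(Fraction(idx[s], g**(k + 1)) for k, s in enumerate(seq))

x = goedelize("abba", "ab")   # binary expansion 0.0110 = 3/8
shifted = (2 * x) % 1         # the symbol shift becomes x -> g*x mod 1
```

Exact rationals make the conjugacy between the symbolic shift and the interval map visible without floating-point error.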

  12. Predictive Coding of Dynamical Variables in Balanced Spiking Networks

    PubMed Central

    Boerlin, Martin; Machens, Christian K.; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated. PMID:24244113
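The second assumption translates into a greedy spike rule: a neuron fires only when adding its readout kernel reduces the representation error, i.e. when w*(x - xhat) exceeds w^2/2. A one-dimensional sketch with identical neurons and illustrative parameter values (not the paper's full derivation, which couples many heterogeneous neurons):

```python
import numpy as np

def run_net(x, dt=1e-3, lam=10.0, w=0.1):
    # Greedy efficient-coding rule: fire iff a spike reduces the readout
    # error, i.e. iff w*(x - xhat) > w**2 / 2; the readout decays at rate lam.
    xhat, out, n_spikes = 0.0, [], 0
    for xt in x:
        if w * (xt - xhat) > w * w / 2:
            xhat += w
            n_spikes += 1
        xhat -= dt * lam * xhat
        out.append(xhat)
    return np.array(out), n_spikes

t = np.arange(0, 1, 1e-3)
x = 1.0 - np.exp(-3 * t)          # target trajectory to be represented
xhat, n_spikes = run_net(x)
err = np.abs(x - xhat).mean()     # tracking error stays on the order of w/2
```

Each spike is emitted exactly when it improves the readout, so the error is bounded by the kernel size rather than accumulating as noise.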

  13. Self-organization in leaky threshold systems: The influence of near-mean field dynamics and its implications for earthquakes, neurobiology, and forecasting

    PubMed Central

    Rundle, J. B.; Tiampo, K. F.; Klein, W.; Sá Martins, J. S.

    2002-01-01

    Threshold systems are known to be some of the most important nonlinear self-organizing systems in nature, including networks of earthquake faults, neural networks, superconductors and semiconductors, and the World Wide Web, as well as political, social, and ecological systems. All of these systems have dynamics that are strongly correlated in space and time, and all typically display a multiplicity of spatial and temporal scales. Here we discuss the physics of self-organization in earthquake threshold systems at two distinct scales: (i) The “microscopic” laboratory scale, in which consideration of results from simulations leads to dynamical equations that can be used to derive the results obtained from sliding friction experiments, and (ii) the “macroscopic” earthquake fault-system scale, in which the physics of strongly correlated earthquake fault systems can be understood by using time-dependent state vectors defined in a Hilbert space of eigenstates, similar in many respects to the mathematics of quantum mechanics. In all of these systems, long-range interactions induce the existence of locally ergodic dynamics. The existence of dissipative effects leads to the appearance of a “leaky threshold” dynamics, equivalent to a new scaling field that controls the size of nucleation events relative to the size of background fluctuations. At the macroscopic earthquake fault-system scale, these ideas show considerable promise as a means of forecasting future earthquake activity. PMID:11875204

  14. Frequency modulation entrains slow neural oscillations and optimizes human listening behavior

    PubMed Central

    Henry, Molly J.; Obleser, Jonas

    2012-01-01

    The human ability to continuously track dynamic environmental stimuli, in particular speech, is proposed to profit from “entrainment” of endogenous neural oscillations, which involves phase reorganization such that “optimal” phase comes into line with temporally expected critical events, resulting in improved processing. The current experiment goes beyond previous work in this domain by addressing two thus far unanswered questions. First, how general is neural entrainment to environmental rhythms: Can neural oscillations be entrained by temporal dynamics of ongoing rhythmic stimuli without abrupt onsets? Second, does neural entrainment optimize performance of the perceptual system: Does human auditory perception benefit from neural phase reorganization? In a human electroencephalography study, listeners detected short gaps distributed uniformly with respect to the phase angle of a 3-Hz frequency-modulated stimulus. Listeners’ ability to detect gaps in the frequency-modulated sound was not uniformly distributed in time, but clustered in certain preferred phases of the modulation. Moreover, the optimal stimulus phase was individually determined by the neural delta oscillation entrained by the stimulus. Finally, delta phase predicted behavior better than stimulus phase or the event-related potential after the gap. This study demonstrates behavioral benefits of phase realignment in response to frequency-modulated auditory stimuli, overall suggesting that frequency fluctuations in natural environmental input provide a pacing signal for endogenous neural oscillations, thereby influencing perceptual processing. PMID:23151506
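Relating gap-detection performance to entrained phase requires the instantaneous phase of the slow oscillation, which can be sketched with an FFT-based analytic signal (the same construction `scipy.signal.hilbert` uses). The 3-Hz modulator matches the stimulus rate in the study, but the signal and gap positions below are illustrative, not the study's data.

```python
import numpy as np

def analytic_phase(x):
    # Instantaneous phase from the FFT-based analytic signal:
    # zero out negative frequencies, double positive ones, inverse-transform.
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(np.fft.fft(x) * h))

fs = 500.0
t = np.arange(0, 2, 1 / fs)
mod = np.sin(2 * np.pi * 3 * t)          # 3-Hz modulator, as in the experiment
phase = analytic_phase(mod)

gap_samples = np.array([100, 350, 600])  # hypothetical gap positions
gap_phases = phase[gap_samples]          # phase angle at which each gap occurred
```

Binning hit rates by these phase angles is what reveals the preferred (optimal) phase of the entrained oscillation.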

  15. The impact of neurotechnology on rehabilitation.

    PubMed

    Berger, Theodore W; Gerhardt, Greg; Liker, Mark A; Soussou, Walid

    2008-01-01

    This paper present results of a multi-disciplinary project that is developing a microchip-based neural prosthesis for the hippocampus, a region of the brain responsible for the formation of long-term memories. Damage to the hippocampus is frequently associated with epilepsy, stroke, and dementia (Alzheimer's disease) and is considered to underlie the memory deficits related to these neurological conditions. The essential goals of the multi-laboratory effort include: (1) experimental study of neuron and neural network function--how does the hippocampus encode information? (2) formulation of biologically realistic models of neural system dynamics--can that encoding process be described mathematically to realize a predictive model of how the hippocampus responds to any event? (3) microchip implementation of neural system models--can the mathematical model be realized as a set of electronic circuits to achieve parallel processing, rapid computational speed, and miniaturization? and (4) creation of hybrid neuron-silicon interfaces-can structural and functional connections between electronic devices and neural tissue be achieved for long-term, bi-directional communication with the brain? By integrating solutions to these component problems, we are realizing a microchip-based model of hippocampal nonlinear dynamics that can perform the same function as part of the hippocampus. Through bi-directional communication with other neural tissue that normally provides the inputs and outputs to/from a damaged hippocampal area, the biomimetic model could serve as a neural prosthesis. A proof-of-concept will be presented in which the CA3 region of the hippocampal slice is surgically removed and is replaced by a microchip model of CA3 nonlinear dynamics--the "hybrid" hippocampal circuit displays normal physiological properties. How the work in brain slices is being extended to behaving animals also will be described.

  16. Coordinated within-trial dynamics of low-frequency neural rhythms controls evidence accumulation.

    PubMed

    Werkle-Bergner, Markus; Grandy, Thomas H; Chicherio, Christian; Schmiedek, Florian; Lövdén, Martin; Lindenberger, Ulman

    2014-06-18

    Higher cognitive functions, such as human perceptual decision making, require information processing and transmission across wide-spread cortical networks. Temporally synchronized neural firing patterns are advantageous for efficiently representing and transmitting information within and between assemblies. Computational, empirical, and conceptual considerations all lead to the expectation that the informational redundancy of neural firing rates is positively related to their synchronization. Recent theorizing and initial evidence also suggest that the coding of stimulus characteristics and their integration with behavioral goal states require neural interactions across a hierarchy of timescales. However, most studies thus far have focused on neural activity in a single frequency range or on a restricted set of brain regions. Here we provide evidence for cooperative spatiotemporal dynamics of slow and fast EEG signals during perceptual decision making at the single-trial level. Participants performed three masked two-choice decision tasks, one each with numerical, verbal, or figural content. Decrements in posterior α power (8-14 Hz) were paralleled by increments in high-frequency (>30 Hz) signal entropy in trials demanding active sensory processing. Simultaneously, frontocentral θ power (4-7 Hz) increased, indicating evidence integration. The coordinated α/θ dynamics were tightly linked to decision speed and remarkably similar across tasks, suggesting a domain-general mechanism. In sum, we demonstrate an inverse association between decision-related changes in widespread low-frequency power and local high-frequency entropy. The cooperation among mechanisms captured by these changes enhances the informational density of neural response patterns and qualifies as a neural coding system in the service of perceptual decision making.
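The two single-trial measures used here, band-limited power and high-frequency spectral entropy, can both be sketched directly from the FFT. Band edges follow the abstract; the synthetic alpha-dominated trial is illustrative, not the study's EEG data.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    # Total FFT power within [lo, hi] Hz.
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x))**2
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

def spectral_entropy(x, fs, lo=30.0):
    # Shannon entropy of the normalized power spectrum above `lo` Hz.
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x))**2
    p = psd[freqs >= lo]
    p = p / p.sum()
    return float(-(p * np.log2(p + 1e-12)).sum())

fs = 250.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
trial = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)  # alpha-dominated trial
alpha = band_power(trial, fs, 8, 14)
theta = band_power(trial, fs, 4, 7)
H = spectral_entropy(trial, fs)
```

Comparing such power and entropy values across task conditions is the kind of single-trial analysis the abstract describes.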

  17. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    PubMed

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time-consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. Accomplishing these tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment Netlogo (educational software that simplifies the study of, and experimentation with, complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional SNN model is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
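
    The engine described above simplifies integrate-and-fire dynamics. A minimal leaky I&F update of the kind such engines build on can be sketched as follows; the time constant, thresholds, and input current are illustrative values, not SpikingLab's actual parameters.

```python
def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new membrane potential, True if a spike was emitted)."""
    v = v + dt * ((v_rest - v) / tau + i_in)
    if v >= v_thresh:
        return v_reset, True   # fire and reset
    return v, False

# Drive a neuron with a constant suprathreshold input and count spikes.
v, spikes = -65.0, 0
for _ in range(200):
    v, fired = lif_step(v, i_in=1.2)
    spikes += fired
```

A full engine would add synaptic delays and an STDP weight-update rule on top of this per-neuron update.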

  18. Dynamic and Differential Regulation of Stem Cell Factor FoxD3 in the Neural Crest Is Encrypted in the Genome

    PubMed Central

    Tan-Cabugao, Joanne; Sauka-Spengler, Tatjana; Bronner, Marianne E.

    2012-01-01

    The critical stem cell transcription factor FoxD3 is expressed by the premigratory and migrating neural crest, an embryonic stem cell population that forms diverse derivatives. Despite its important role in development and stem cell biology, little is known about what mediates FoxD3 activity in these cells. We have uncovered two FoxD3 enhancers, NC1 and NC2, that drive reporter expression in spatially and temporally distinct manners. Whereas NC1 activity recapitulates initial FoxD3 expression in the cranial neural crest, NC2 activity recapitulates initial FoxD3 expression at vagal/trunk levels while appearing only later in migrating cranial crest. Detailed mutational analysis, in vivo chromatin immunoprecipitation, and morpholino knockdowns reveal that transcription factors Pax7 and Msx1/2 cooperate with the neural crest specifier gene, Ets1, to bind to the cranial NC1 regulatory element. However, at vagal/trunk levels, they function together with the neural plate border gene, Zic1, which directly binds to the NC2 enhancer. These results reveal dynamic and differential regulation of FoxD3 in distinct neural crest subpopulations, suggesting that heterogeneity is encrypted at the regulatory level. Isolation of neural crest enhancers not only allows establishment of direct regulatory connections underlying neural crest formation, but also provides valuable tools for tissue-specific manipulation and investigation of neural crest cell identity in amniotes. PMID:23284303

  19. Optimization of return electrodes in neurostimulating arrays

    NASA Astrophysics Data System (ADS)

    Flores, Thomas; Goetz, Georges; Lei, Xin; Palanker, Daniel

    2016-06-01

    Objective. High resolution visual prostheses require dense stimulating arrays with localized inputs of individual electrodes. We study the electric field produced by multielectrode arrays in electrolyte to determine an optimal configuration of return electrodes and activation sequence. Approach. To determine the boundary conditions for computation of the electric field in electrolyte, we assessed current dynamics using an equivalent circuit of a multielectrode array with interleaved return electrodes. The electric field modeled with two different boundary conditions derived from the equivalent circuit was then compared to measurements of electric potential in electrolyte. To assess the effect of return electrode configuration on retinal stimulation, we transformed the computed electric fields into retinal response using a model of neural network-mediated stimulation. Main results. Electric currents at the capacitive electrode-electrolyte interface redistribute over time, so that boundary conditions transition from equipotential surfaces at the beginning of the pulse to uniform current density in steady state. Experimental measurements confirmed that, in steady state, the boundary condition corresponds to a uniform current density on electrode surfaces. Arrays with local return electrodes exhibit improved field confinement and can elicit a stronger network-mediated retinal response than those with a common remote return. Connecting local return electrodes enhances the field penetration depth and allows the return electrode area to be reduced. Sequential activation of the pixels in large monopolar arrays reduces electrical cross-talk and improves the contrast in pattern stimulation. Significance. Accurate modeling of multielectrode arrays helps optimize the electrode configuration to maximize the spatial resolution, contrast and dynamic range of retinal prostheses.
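
    The current redistribution at the capacitive interface can be reproduced with a toy equivalent circuit: two electrode sites, each an access resistance in series with an interface capacitance, driven by a shared current source. At pulse onset the low-resistance site carries more current (equipotential surface); as the capacitances charge, the branch currents equalize (uniform current density). All component values here are illustrative, not taken from the paper's model.

```python
import numpy as np

def simulate_two_site_electrode(i_total=1.0, r=(1.0, 3.0), c=1.0, dt=1e-3, steps=10000):
    """Constant-current drive of two electrode sites, each an access
    resistance in series with an interface capacitance.
    Returns the two branch currents at every time step."""
    v = np.zeros(2)                      # capacitor (interface) voltages
    r = np.asarray(r, dtype=float)
    history = []
    for _ in range(steps):
        # Both branches share one node voltage: v[k] + i[k]*r[k] = v_node,
        # with i[0] + i[1] = i_total  ->  solve for v_node, then the currents.
        v_node = (i_total + (v / r).sum()) / (1.0 / r).sum()
        i_branch = (v_node - v) / r
        v += dt * i_branch / c           # charge the interface capacitances
        history.append(i_branch.copy())
    return np.array(history)

currents = simulate_two_site_electrode()
```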

  20. Optimization of matrix tablets controlled drug release using Elman dynamic neural networks and decision trees.

    PubMed

    Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica

    2012-05-30

    The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid), using formulation composition, the compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablet formulations indicate that Elman dynamic neural networks, as well as decision trees, are capable of accurate prediction of dissolution profiles for both hydrophilic and lipid matrix tablets. The Elman networks were compared to the most frequently used static network, the multilayer perceptron, and the superiority of the Elman networks was demonstrated. The developed methods allow simple yet precise prediction of drug release for both hydrophilic and lipid matrix tablets with controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.
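
    The difference (f1) and similarity (f2) factors used above to compare predicted and observed dissolution profiles are the standard profile-comparison metrics; a direct implementation follows (the example profiles in the test are made up, not the study's data).

```python
import math

def f1_difference(reference, test):
    """f1 difference factor: percent average absolute deviation between two
    dissolution profiles; values below about 15 conventionally indicate similarity."""
    num = sum(abs(r - t) for r, t in zip(reference, test))
    return 100.0 * num / sum(reference)

def f2_similarity(reference, test):
    """f2 similarity factor: 100 for identical profiles; values of 50 or more
    are the usual acceptance criterion for profile similarity."""
    n = len(reference)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mse))
```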
