Sample records for recurrent network activity

  1. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    PubMed

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency (a measure of network interconnectedness) decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
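The core idea above can be illustrated with a minimal sketch (hypothetical parameters, not the authors' trained model): a recurrent matrix with distinct excitatory (E) and inhibitory (I) units can embed a functionally feedforward chain, so that a brief cue to the first unit triggers a transient wave of sequential activation that encodes elapsed time.

```python
import numpy as np

# Minimal rate-model sketch: a recurrent weight matrix whose E->E entries form
# a chain, plus a small inhibitory population, produces a functionally
# feedforward trajectory in response to a brief cue.
rng = np.random.default_rng(0)
nE, nI = 20, 5
N = nE + nI
W = np.zeros((N, N))
for i in range(nE - 1):
    W[i + 1, i] = 2.5            # E->E chain: unit i excites unit i+1
W[nE:, :nE] = 0.1                # E -> I
W[:nE, nE:] = -0.1               # I -> E (inhibitory; Dale's law respected)

def phi(x):                      # rectified, saturating rate function
    return np.tanh(np.maximum(x, 0.0))

dt, tau = 1.0, 10.0
r = np.zeros(N)
rates = []
for t in range(400):
    inp = np.zeros(N)
    if t < 20:
        inp[0] = 2.0             # brief cue pulse to the start of the chain
    r = r + dt / tau * (-r + phi(W @ r + inp))
    rates.append(r[:nE].copy())
rates = np.array(rates)
peak_times = rates.argmax(axis=0)   # each E unit peaks later than the last
```

Because each excitatory unit peaks at a distinct, ordered time, a downstream readout could decode elapsed time from the identity of the currently active unit; the paper's point is that such chains can coexist with, and be shaped by, the recurrent E/I balance.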

  2. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    PubMed

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.
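A compact NumPy sketch of the data flow this abstract describes (an illustration with made-up shapes and untrained random weights, not the authors' implementation): a 1-D convolution extracts local features from a multichannel sensor window, and an LSTM summarizes their temporal evolution before a softmax classifier.

```python
import numpy as np

# Sketch of the Conv + LSTM pipeline for wearable HAR: conv over time,
# then an LSTM over the resulting feature sequence, then class probabilities.
rng = np.random.default_rng(1)
T, C, F, H, K = 64, 3, 8, 16, 5   # time steps, channels, filters, LSTM units, kernel

x = rng.standard_normal((T, C))                  # one window of sensor data
Wc = rng.standard_normal((F, C, K)) * 0.1        # conv kernels

# valid 1-D convolution over time, all channels summed per filter
feat = np.array([[np.sum(Wc[f] * x[t:t + K].T) for f in range(F)]
                 for t in range(T - K + 1)])     # shape (T-K+1, F)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# single LSTM layer with random (untrained) weights, to show the data flow
Wx = rng.standard_normal((4 * H, F)) * 0.1
Wh = rng.standard_normal((4 * H, H)) * 0.1
h = np.zeros(H); c = np.zeros(H)
for f_t in feat:
    z = Wx @ f_t + Wh @ h
    i, fgt, o, g = np.split(z, 4)
    c = sigmoid(fgt) * c + sigmoid(i) * np.tanh(g)  # gated cell update
    h = sigmoid(o) * np.tanh(c)

n_classes = 5
logits = (rng.standard_normal((n_classes, H)) * 0.1) @ h
probs = np.exp(logits - logits.max()); probs /= probs.sum()
```

In practice one would stack several conv layers and two LSTM layers and train end to end; the sketch only makes the shapes and the conv-to-recurrent hand-off concrete.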

  3. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    PubMed

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proven to converge in finite time. In addition, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed networks have a better convergence property (i.e., a lower upper bound on the convergence time), so accurate solutions of general time-varying LMEs can be obtained in less time. Finally, a variety of situations with different coefficient matrices of general time-varying LMEs are considered, and extensive computer simulations (including an application to robot manipulators) validate the superior finite-time convergence of the two proposed networks.
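The design principle behind such networks can be sketched for a scalar time-varying equation a(t)x(t) = b(t) (the paper treats general matrix equations; the coefficient functions and the "sign-bi-power" activation below are illustrative assumptions): force the error e = ax - b to obey de/dt = -γφ(e) with a nonlinear φ, which is the standard route to finite-time rather than merely exponential convergence.

```python
import numpy as np

# Zeroing / recurrent neural network sketch for a(t) x(t) = b(t).
def a(t):  return 2.0 + np.sin(t)
def b(t):  return np.cos(t)
def da(t): return np.cos(t)
def db(t): return -np.sin(t)

def phi(e, r=0.5):
    # sign-bi-power activation (an assumption for illustration): strong push
    # for large |e|, non-Lipschitz push near 0, giving finite-time convergence
    return np.sign(e) * (np.abs(e) ** r + np.abs(e) ** (1.0 / r))

gamma, dt = 10.0, 1e-4
x = 5.0                          # deliberately poor initial state
for k in range(int(2.0 / dt)):   # integrate for 2 s with forward Euler
    t = k * dt
    e = a(t) * x - b(t)
    # from d/dt (a x - b) = -gamma phi(e):  a x' = -gamma phi(e) - a' x + b'
    x += dt * (-gamma * phi(e) - da(t) * x + db(t)) / a(t)

residual = abs(a(2.0) * x - b(2.0))   # should be driven near zero
```

With a linear φ(e) = e the same design only converges exponentially; the nonlinear activation is what buys the finite-time bound the paper analyzes.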

  4. Homeostatic Scaling of Excitability in Recurrent Neural Networks

    PubMed Central

    Remme, Michiel W. H.; Wadman, Wytse J.

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604
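A mean-field toy version of this setup (an illustration with invented parameters, not the paper's exact model): excitatory and inhibitory populations each scale an intrinsic gain toward a target firing rate, with homeostasis much slower than the rate dynamics. After a persistent change in external drive, the adapting network settles back to the target.

```python
import numpy as np

# Homeostatic scaling of excitability (HSE) in an E-I mean-field sketch:
# each population's gain integrates the deviation of its rate from a target.
def phi(x):
    return np.tanh(np.maximum(x, 0.0))

wEE, wEI, wIE = 1.2, 1.0, 1.0
rE = rI = 0.1
gE = gI = 1.0                              # intrinsic excitability gains
target = 0.3
tau_r, tau_E, tau_I = 1.0, 200.0, 100.0    # homeostasis slower than rates
dt = 0.1
drive = 0.5
for step in range(60000):
    if step == 20000:
        drive = 1.0                        # persistent increase in drive
    rE += dt / tau_r * (-rE + phi(gE * (wEE * rE - wEI * rI + drive)))
    rI += dt / tau_r * (-rI + phi(gI * (wIE * rE + 0.2 * drive)))
    gE += dt / tau_E * (target - rE)       # homeostatic scaling of excitability
    gI += dt / tau_I * (target - rI)       # same rule for the I population
```

Here the inhibitory time scale (tau_I) is faster than the excitatory one (tau_E), the regime the paper identifies as stable; making tau_I much slower than tau_E is the kind of mismatch that can destabilize the adapting network.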

  5. Persistence and storage of activity patterns in spiking recurrent cortical networks: modulation of sigmoid signals by after-hyperpolarization currents and acetylcholine

    PubMed Central

    Palma, Jesse; Grossberg, Stephen; Versace, Massimiliano

    2012-01-01

    Many cortical networks contain recurrent architectures that transform input patterns before storing them in short-term memory (STM). Theorems from the 1970s showed how feedback signal functions in rate-based recurrent on-center off-surround networks control this process. A sigmoid signal function induces a quenching threshold below which inputs are suppressed as noise and above which they are contrast-enhanced before pattern storage. This article describes how changes in feedback signaling, neuromodulation, and recurrent connectivity may alter pattern processing in recurrent on-center off-surround networks of spiking neurons. In spiking neurons, fast, medium, and slow after-hyperpolarization (AHP) currents control sigmoid signal threshold and slope. Modulation of AHP currents by acetylcholine (ACh) can change sigmoid shape and, with it, network dynamics. For example, decreasing signal function threshold and increasing slope can lengthen the persistence of a partially contrast-enhanced pattern, increase the number of active cells stored in STM, or, if connectivity is distance-dependent, cause cell activities to cluster. These results clarify how cholinergic modulation by the basal forebrain may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract features, as predicted by Adaptive Resonance Theory. The analysis includes global, distance-dependent, and interneuron-mediated circuits. With an appropriate degree of recurrent excitation and inhibition, spiking networks maintain a partially contrast-enhanced pattern for 800 ms or longer after stimulus offset, then resolve to no stored pattern, or to winner-take-all (WTA) stored patterns with one or multiple winners. Strengthening inhibition prolongs a partially contrast-enhanced pattern by slowing the transition to stability, while strengthening excitation causes more winners when the network stabilizes. PMID:22754524
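The quenching-threshold behavior of a sigmoid feedback signal can be shown with a toy normalized iteration (a sketch with hypothetical threshold and slope values, not the paper's spiking model): activities below the sigmoid's threshold die out as noise, while those above it are contrast-enhanced and stored.

```python
import numpy as np

# Sigmoid feedback plus shunting-style normalization: a quenching threshold.
def f(x, threshold=0.25, slope=12.0):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

x = np.array([0.05, 0.10, 0.30, 0.40])   # pattern at input offset
for _ in range(50):
    s = f(x)
    x = s / s.sum()                      # normalized recurrent competition
```

The two sub-threshold activities (0.05, 0.10) are suppressed toward a small residual, while the two supra-threshold ones (0.30, 0.40) are enhanced and stored; lowering the threshold or changing the slope, as AHP/ACh modulation does in the article, changes how many activities survive.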

  6. Recurrent Network models of sequence generation and memory

    PubMed Central

    Rajan, Kanaka; Harvey, Christopher D; Tank, David W

    2016-01-01

    Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here, we demonstrate that starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network training (PINning), to model and match cellular-resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced choice task [Harvey, Coen and Tank, 2012]. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together, our results suggest that neural sequences may emerge through learning from largely unstructured network architectures. PMID:26971945
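A toy sketch of the PINning idea: only a small random fraction of recurrent weights is plastic, yet the network learns to reproduce a target sequence of population activity. (The paper uses recursive least squares; a plain delta rule with teacher forcing is substituted here to keep the sketch short, and all sizes and learning rates are illustrative.)

```python
import numpy as np

# Partial In-Network training, caricatured: train ~10% of recurrent entries.
rng = np.random.default_rng(2)
N, T, p_plastic = 50, 40, 0.1
W = rng.standard_normal((N, N)) / np.sqrt(N)
mask = rng.random((N, N)) < p_plastic            # the plastic minority

# target: a bump of activity sweeping across the population, i.e., a
# neural sequence like those imaged in posterior parietal cortex
t_idx = np.arange(T)[:, None]; n_idx = np.arange(N)[None, :]
target = np.exp(-0.5 * ((n_idx - N * t_idx / T) / 3.0) ** 2)

def seq_error(W):
    err = 0.0
    for t in range(1, T):
        y = np.tanh(W @ target[t - 1])           # teacher-forced one-step prediction
        err += np.mean((y - target[t]) ** 2)
    return err / (T - 1)

err_init = seq_error(W)
eta = 0.05
for epoch in range(300):
    for t in range(1, T):
        x = target[t - 1]
        y = np.tanh(W @ x)
        dW = np.outer((y - target[t]) * (1 - y ** 2), x)  # delta rule
        W -= eta * (dW * mask)                   # update plastic entries only
err_final = seq_error(W)
```

The untouched majority of random connections still shapes the dynamics, which is the paper's point: sequences emerge from largely unstructured architectures with only sparse, targeted modification.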

  7. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  8. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV: structuring synaptic pathways among recurrent connections.

    PubMed

    Gilson, Matthieu; Burkitt, Anthony N; Grayden, David B; Thomas, Doreen A; van Hemmen, J Leo

    2009-12-01

    In neuronal networks, the changes of synaptic strength (or weight) performed by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this phenomenon occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyze the interplay between the neuronal activity (firing rates and the spike-time correlations) and the learning dynamics, when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead to both a stabilization of all the neuron firing rates (homeostatic equilibrium) and a robust weight specialization. The pattern of specialization for the recurrent weights is determined by a relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters and the synaptic response properties. We find conditions for feed-forward pathways or areas with strengthened self-feedback to emerge in an initially homogeneous recurrent network.
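The pair-based additive STDP rule underlying this kind of analysis can be written in a few lines (the generic textbook form; the paper's contribution is the Poisson-neuron theory built on top of a rule like this, and the amplitudes and time constant below are illustrative):

```python
import numpy as np

# Pair-based additive STDP: the weight change depends on the pairing
# interval dt = t_post - t_pre (times in ms).
def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    if dt > 0:     # pre before post: potentiation
        return a_plus * np.exp(-dt / tau)
    elif dt < 0:   # post before pre: depression
        return -a_minus * np.exp(dt / tau)
    return 0.0

def total_change(pre_spikes, post_spikes):
    # sum the rule over all pre/post spike pairs
    return sum(stdp(tp - tq) for tq in pre_spikes for tp in post_spikes)

pre = np.arange(0.0, 500.0, 50.0)   # 10 presynaptic spikes
post_causal = pre + 5.0             # post always fires 5 ms after pre
post_anti = pre - 5.0               # post always fires 5 ms before pre

dw_causal = total_change(pre, post_causal)   # net potentiation
dw_anti = total_change(pre, post_anti)       # net depression
```

Consistently causal pairings strengthen a connection and anti-causal ones weaken it; in a recurrent network this spike-time sensitivity is what lets the input correlation structure carve out feed-forward pathways or self-feedback, as the article analyzes.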

  9. Memory replay in balanced recurrent networks

    PubMed Central

    Chenkov, Nikolay; Sprekeler, Henning; Kempter, Richard

    2017-01-01

    Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously by activity fluctuations. Global—potentially neuromodulatory—alterations of neuronal excitability can switch between network states that favor retrieval and consolidation. PMID:28135266

  10. The relevance of network micro-structure for neural dynamics.

    PubMed

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm, based on a quasi-fractal probability measure, to construct networks that are much more variable than commonly used network models and therefore promise to sample the space of recurrent networks more exhaustively than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.

  11. The default mode network and recurrent depression: a neurobiological model of cognitive risk factors.

    PubMed

    Marchetti, Igor; Koster, Ernst H W; Sonuga-Barke, Edmund J; De Raedt, Rudi

    2012-09-01

    A neurobiological account of cognitive vulnerability for recurrent depression is presented based on recent developments of resting state neural networks. We propose that alterations in the interplay between task positive (TP) and task negative (TN) elements of the Default Mode Network (DMN) act as a neurobiological risk factor for recurrent depression mediated by cognitive mechanisms. In the framework, depression is characterized by an imbalance between TN-TP components leading to an overpowering of TP by TN activity. The TN-TP imbalance is associated with a dysfunctional internally-focused cognitive style as well as a failure to attenuate TN activity in the transition from rest to task. Thus we propose the TN-TP imbalance as an overarching neural mechanism involved in crucial cognitive risk factors for recurrent depression, namely rumination, impaired attentional control, and cognitive reactivity. During remission the TN-TP imbalance persists, predisposing individuals to recurrent depression. Empirical data supporting this model are reviewed. Finally, we specify how this framework can guide future research efforts.

  12. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks

    PubMed Central

    Miconi, Thomas

    2017-01-01

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
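The flavor of such a rule can be sketched as reward-modulated node perturbation (a deliberately simplified caricature, not the paper's exact rule): exploratory noise is injected into each unit, an eligibility trace accumulates noise/input correlations during the trial, and a single delayed scalar reward at the trial's end gates the weight update against a running baseline.

```python
import numpy as np

# Reward-gated perturbation learning in a small recurrent network.
rng = np.random.default_rng(3)
N = 30
W = rng.standard_normal((N, N)) * 0.5 / np.sqrt(N)
w_out = rng.standard_normal(N) / np.sqrt(N)      # fixed readout
target = 0.8
eta, sigma = 0.05, 0.1
r_bar = None
rewards = []
for trial in range(400):
    x = np.zeros(N)
    elig = np.zeros((N, N))
    for t in range(30):
        xi = sigma * rng.standard_normal(N)      # exploratory perturbation
        x_new = np.tanh(W @ x + xi)
        elig += np.outer(xi, x)                  # noise/input eligibility trace
        x = x_new
    y = w_out @ x
    R = -(y - target) ** 2                       # delayed, phasic scalar reward
    if r_bar is None:
        r_bar = R
    W += eta * (R - r_bar) * elig                # update gated by reward signal
    r_bar += 0.05 * (R - r_bar)                  # running reward baseline
    rewards.append(R)
```

No continuous error signal ever reaches the synapses; only the end-of-trial scalar (R - r_bar) does, which is what makes this family of rules biologically plausible in the sense the abstract intends.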

  13. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    PubMed

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.

  14. Prefrontal cortical network activity: Opposite effects of psychedelic hallucinogens and D1/D5 dopamine receptor activation

    PubMed Central

    Lambe, Evelyn K.; Aghajanian, George K.

    2007-01-01

    The fine-tuning of network activity provides a modulating influence on how information is processed and interpreted in the brain. Here, we use brain slices of rat prefrontal cortex to study how recurrent network activity is affected by neuromodulators known to alter normal cortical function. We previously determined that glutamate spillover and stimulation of extrasynaptic NMDA receptors are required to support hallucinogen-induced cortical network activity. Since microdialysis studies suggest that psychedelic hallucinogens and dopamine D1/D5 receptor agonists have opposite effects on extracellular glutamate in prefrontal cortex, we hypothesized that these two families of psychoactive drugs would have opposite effects on cortical network activity. We found that network activity can be enhanced by DOI (a psychedelic hallucinogen that is a partial agonist of serotonin 5-HT2A/2C receptors) and suppressed by the selective D1/D5 agonist SKF 38393. This suppression could be mimicked by direct activation of adenylyl cyclase with forskolin or by addition of a cAMP analog. These findings are consistent with previous work showing that activation of adenylyl cyclase can upregulate neuronal glutamate transporters, thereby decreasing synaptic spillover of glutamate. Consistent with this hypothesis, a low concentration of the glutamate transporter inhibitor TBOA restored electrically-evoked recurrent activity in the presence of a selective D1/D5 agonist, whereas recurrent activity in the presence of a low level of the GABAA antagonist bicuculline was not resistant to suppression by the D1/D5 agonist. The tempering of network UP states by D1/D5 receptor activation may have implications for the proposed use of D1/D5 agonists in the treatment of schizophrenia. PMID:17293055

  15. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs.

    PubMed

    Kerr, Robert R; Burkitt, Anthony N; Thomas, Doreen A; Gilson, Matthieu; Grayden, David B

    2013-01-01

    Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.

  16. Delay Selection by Spike-Timing-Dependent Plasticity in Recurrent Networks of Spiking Neurons Receiving Oscillatory Inputs

    PubMed Central

    Kerr, Robert R.; Burkitt, Anthony N.; Thomas, Doreen A.; Gilson, Matthieu; Grayden, David B.

    2013-01-01

    Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem. PMID:23408878

  17. Iterative free-energy optimization for recurrent neural networks (INFERNO).

    PubMed

    Pitti, Alexandre; Gaussier, Philippe; Quoy, Mathias

    2017-01-01

    The intra-parietal lobe, coupled with the basal ganglia, forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to treat the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network toward a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or replaced to search for another solution. This vector can then be learned by an associative memory, modeling the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capability of the dual system to generate very long and precise spatio-temporal sequences of more than two hundred iterations. These features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss the framework's relevance for modeling the cortico-basal working memory that initiates flexible, goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines, and the Free-Energy Principle.

  18. How Travel Demand Affects Detection of Non-Recurrent Traffic Congestion on Urban Road Networks

    NASA Astrophysics Data System (ADS)

    Anbaroglu, B.; Heydecker, B.; Cheng, T.

    2016-06-01

    Occurrence of non-recurrent traffic congestion hinders the economic activity of a city, as travellers could miss appointments or be late for work or important meetings. Similarly, for shippers, unexpected delays may disrupt just-in-time delivery and manufacturing processes, which could cost them payments. Consequently, research on non-recurrent congestion detection on urban road networks has recently gained attention. By analysing large amounts of traffic data collected on a daily basis, traffic operation centres can improve their methods to detect non-recurrent congestion rapidly and then revise their existing plans to mitigate its effects. Space-time clusters of high link journey time estimates correspond to non-recurrent congestion events. Existing research, however, has not considered the effect of travel demand on the effectiveness of non-recurrent congestion detection methods. Therefore, this paper investigates how travel demand affects the detection of non-recurrent traffic congestion on urban road networks. Travel demand has been classified into three categories as low, normal and high. The experiments are carried out on London's urban road network, and the results demonstrate the necessity to adjust the relative importance of the component evaluation criteria depending on the travel demand level.

  19. A generalized LSTM-like training algorithm for second-order recurrent neural networks

    PubMed Central

    Monner, Derek; Reggia, James A.

    2011-01-01

    The Long Short-Term Memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM’s original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the Generalized Long Short-Term Memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks. PMID:21803542
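To make the "second-order" structure discussed above concrete, here is a standard peephole-LSTM forward pass in plain NumPy (the textbook cell, with small random weights for illustration; this is not the LSTM-g algorithm itself). The second-order character is visible in the multiplicative gate terms, and the peepholes let the gates see the cell state directly.

```python
import numpy as np

# One LSTM cell with peephole connections, forward pass only.
rng = np.random.default_rng(4)
n_in, n_hid = 4, 6

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# weights for input, forget, output gates and candidate; peepholes pi/pf/po
Wi, Wf, Wo, Wg = (rng.standard_normal((n_hid, n_in + n_hid)) * 0.2
                  for _ in range(4))
pi, pf, po = (rng.standard_normal(n_hid) * 0.2 for _ in range(3))

def step(x, h, c):
    z = np.concatenate([x, h])
    i = sigmoid(Wi @ z + pi * c)        # peephole: gate sees the cell state
    f = sigmoid(Wf @ z + pf * c)
    g = np.tanh(Wg @ z)
    c_new = f * c + i * g               # gated (second-order) cell update
    o = sigmoid(Wo @ z + po * c_new)
    return o * np.tanh(c_new), c_new

h = np.zeros(n_hid); c = np.zeros(n_hid)
for t in range(5):
    h, c = step(rng.standard_normal(n_in), h, c)
```

The peephole terms (pi, pf, po) are exactly the connections through which LSTM-g recovers an additional source of back-propagated error relative to the original training algorithm.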

  20. Pathological effect of homeostatic synaptic scaling on network dynamics in diseases of the cortex.

    PubMed

    Fröhlich, Flavio; Bazhenov, Maxim; Sejnowski, Terrence J

    2008-02-13

    Slow periodic EEG discharges are common in CNS disorders. The pathophysiology of this aberrant rhythmic activity is poorly understood. We used a computational model of a neocortical network with a dynamic homeostatic scaling rule to show that loss of input (partial deafferentation) can trigger network reorganization that results in pathological periodic discharges. The decrease in average firing rate in the network by deafferentation was compensated by homeostatic synaptic scaling of recurrent excitation among pyramidal cells. Synaptic scaling succeeded in recovering the network target firing rate for all degrees of deafferentation (fraction of deafferented cells), but there was a critical degree of deafferentation for pathological network reorganization. For deafferentation degrees below this value, homeostatic upregulation of recurrent excitation had minimal effect on the macroscopic network dynamics. For deafferentation above this threshold, however, a slow periodic oscillation appeared, patterns of activity were less sparse, and bursting occurred in individual neurons. Also, comparison of spike-triggered afferent and recurrent excitatory conductances revealed that information transmission was strongly impaired. These results suggest that homeostatic plasticity can lead to secondary functional impairment in case of cortical disorders associated with cell loss.
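The core feedback loop can be sketched in a toy rate model (hypothetical parameters; the paper uses a spiking neocortical network): after deafferentation, recurrent excitatory weights are multiplicatively scaled up until the network's mean firing rate returns to its pre-lesion target.

```python
import numpy as np

# Homeostatic synaptic scaling after partial deafferentation, rate-model toy.
def phi(x):
    return np.tanh(np.maximum(x, 0.0))

rng = np.random.default_rng(5)
N = 50
W = np.abs(rng.standard_normal((N, N))) * 0.5 / N    # recurrent excitation
ext_full = np.full(N, 0.6)                           # intact afferent drive
ext_lesion = ext_full.copy()
ext_lesion[:20] = 0.0                                # deafferent 40% of cells

def steady_rates(scale, ext, steps=300, dt=0.1):
    r = np.zeros(N)
    for _ in range(steps):
        r += dt * (-r + phi(scale * (W @ r) + ext))
    return r

target = steady_rates(1.0, ext_full).mean()          # pre-lesion target rate
scale = 1.0
for _ in range(200):                                 # slow homeostatic loop
    rate = steady_rates(scale, ext_lesion).mean()
    scale += 0.5 * (target - rate)                   # upscale recurrent excitation

recovered = steady_rates(scale, ext_lesion).mean()
```

The scaling factor ends well above 1: the target rate is recovered, but at the price of much stronger recurrent excitation. The paper's point is that beyond a critical degree of deafferentation this compensatory upregulation reorganizes the dynamics pathologically (slow periodic discharges, bursting) even though the mean rate looks normal.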

  1. Reward-based training of recurrent neural networks for cognitive and value-based tasks

    PubMed Central

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-01

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task. DOI: http://dx.doi.org/10.7554/eLife.21492.001 PMID:28084991
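
    The reward-based training idea can be illustrated at its smallest scale: REINFORCE with a learned scalar baseline standing in for the value network, applied to a two-armed bandit (the payoffs, learning rates, and the bandit task itself are assumptions of this sketch, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
payoff = np.array([0.2, 1.0])             # arm 1 pays more (illustrative)
logits = np.zeros(2)                      # policy ("decision") parameters
baseline = 0.0                            # running reward baseline: a scalar
                                          # stand-in for the value network
for _ in range(3000):
    p = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(2, p=p)                # sample an action
    r = payoff[a]
    grad = -p
    grad[a] += 1.0                        # d log pi(a) / d logits
    logits += 0.1 * (r - baseline) * grad # REINFORCE with baseline
    baseline += 0.05 * (r - baseline)
p_final = np.exp(logits) / np.exp(logits).sum()
```

    The baseline plays the role the abstract assigns to the value network: it converts raw rewards into advantages, so learning slows down once expected reward is predicted well.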

  2. Once upon a (slow) time in the land of recurrent neuronal networks….

    PubMed

    Huang, Chengcheng; Doiron, Brent

    2017-10-01

    The brain must both react quickly to new inputs and store a memory of past activity. This requires biology that operates over a vast range of time scales. Fast time scales are determined by the kinetics of synaptic conductances and ionic channels; however, the mechanics of slow time scales are more complicated. In this opinion article we review two distinct network-based mechanisms that impart slow time scales in recurrently coupled neuronal networks. The first is in strongly coupled networks where the time scale of the internally generated fluctuations diverges at the transition between stable and chaotic firing rate activity. The second is in networks with finitely many members where noise-induced transitions between metastable states appear as a slow time scale in the ongoing network firing activity. We discuss these mechanisms with an emphasis on their similarities and differences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. From fuzzy recurrence plots to scalable recurrence networks of time series

    NASA Astrophysics Data System (ADS)

    Pham, Tuan D.

    2017-04-01

    Recurrence networks, which are derived from recurrence plots of nonlinear time series, enable the extraction of hidden features of complex dynamical systems. Because fuzzy recurrence plots are represented as grayscale images, this paper presents a variety of texture features that can be extracted from fuzzy recurrence plots. Based on the notion of fuzzy recurrence plots, defuzzified, undirected, and unweighted recurrence networks are introduced. Network measures can be computed for defuzzified recurrence networks that are scalable to meet the demand for the network-based analysis of big data.
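
    A compact sketch of the pipeline above: build a fuzzy recurrence plot from a time series via a graded membership function (Gaussian, by assumption of this sketch), then defuzzify with an alpha-cut to obtain an undirected, unweighted recurrence network:

```python
import numpy as np

x = np.sin(0.3 * np.arange(60))                     # example time series
d = np.abs(x[:, None] - x[None, :])                 # pairwise distances

# Fuzzy recurrence plot: graded membership in [0, 1] instead of the hard
# threshold of an ordinary recurrence plot.
fuzzy_rp = np.exp(-d**2 / (2 * 0.1**2))

# Defuzzify with an alpha-cut to get a defuzzified, undirected, unweighted
# recurrence network; self-loops are removed.
alpha = 0.5
A = (fuzzy_rp >= alpha).astype(int)
np.fill_diagonal(A, 0)
degree = A.sum(axis=1)                              # a basic network measure
```

    The grayscale image mentioned in the abstract is exactly `fuzzy_rp`; texture features are computed from it, while network measures such as `degree` come from the defuzzified adjacency matrix.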

  4. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    PubMed

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.

  5. Spatiotemporal discrimination in neural networks with short-term synaptic plasticity

    NASA Astrophysics Data System (ADS)

    Shlaer, Benjamin; Miller, Paul

    2015-03-01

    Cells in recurrently connected neural networks exhibit bistability, which allows for stimulus information to persist in a circuit even after stimulus offset, i.e. short-term memory. However, such a system does not have enough hysteresis to encode temporal information about the stimuli. The biophysically described phenomenon of synaptic depression decreases synaptic transmission strengths due to increased presynaptic activity. This short-term reduction in synaptic strengths can destabilize attractor states in excitatory recurrent neural networks, causing the network to move along stimulus-dependent dynamical trajectories. Such a network can successfully separate amplitudes and durations of stimuli from the number of successive stimuli (see "Stimulus number, duration and intensity encoding in randomly connected attractor networks with synaptic depression," Front. Comput. Neurosci. 7:59), and so provides a strong candidate network for the encoding of spatiotemporal information. Here we explicitly demonstrate the capability of a recurrent neural network with short-term synaptic depression to discriminate between the temporal sequences in which spatial stimuli are presented.
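
    The resource-depletion mechanism invoked above can be sketched with a standard mean-field model of short-term synaptic depression (the parameters U and tau_rec below are illustrative):

```python
import numpy as np

def depressing_synapse(spike_rate, T=2.0, dt=0.001, U=0.3, tau_rec=0.5):
    """Fraction x of available synaptic resources under a steady presynaptic
    rate, following dx/dt = (1 - x)/tau_rec - U * x * rate (mean-field
    sketch): each presynaptic spike consumes a fraction U of resources, which
    recover with time constant tau_rec."""
    x = 1.0
    for _ in range(int(T / dt)):
        x += dt * ((1.0 - x) / tau_rec - U * x * spike_rate)
    return x

x_low = depressing_synapse(spike_rate=1.0)    # low presynaptic activity
x_high = depressing_synapse(spike_rate=20.0)  # high presynaptic activity
```

    High presynaptic rates deplete the resource and weaken transmission, which is what destabilizes the attractor states and pushes the network along stimulus-dependent trajectories.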

  6. Complete stability of delayed recurrent neural networks with Gaussian activation functions.

    PubMed

    Liu, Peng; Zeng, Zhigang; Wang, Jun

    2017-01-01

    This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and algebraic properties of the nonsingular M-matrix, some sufficient conditions are obtained to ensure that for an n-neuron neural network, there are exactly 3^k equilibrium points with 0 ≤ k ≤ n, among which 2^k and 3^k - 2^k equilibrium points are locally exponentially stable and unstable, respectively. Moreover, it concludes that all the states converge to one of the equilibrium points; i.e., the neural networks are completely stable. The derived conditions herein can be easily tested. Finally, a numerical example is given to illustrate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
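
    The flavor of the 3^k equilibrium count can be seen in the simplest case, n = 1: with a Gaussian activation and suitable parameters, the single neuron's equilibrium equation has exactly three roots (the parameters below are chosen for illustration; the paper's actual conditions involve M-matrix tests, not a grid search):

```python
import numpy as np

# One neuron with a Gaussian activation: dx/dt = -x + w * exp(-(x - b)**2).
# Equilibria are the roots of h(x) = w * exp(-(x - b)**2) - x.
w, b = 3.0, 2.0                              # illustrative parameters
xs = np.linspace(-2.0, 6.0, 8001)
h = w * np.exp(-(xs - b) ** 2) - xs
signs = np.sign(h)
n_equilibria = int(np.sum(signs[:-1] * signs[1:] < 0))   # sign changes of h
```

    Because the Gaussian is a bump rather than a sigmoid, the line y = x can cut it up to three times, giving 3^k equilibria when k of the n neurons satisfy the multi-equilibrium condition.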

  7. Neuronal network model of interictal and recurrent ictal activity

    NASA Astrophysics Data System (ADS)

    Lopes, M. A.; Lee, K.-E.; Goltsev, A. V.

    2017-12-01

    We propose a neuronal network model which undergoes a saddle node on an invariant circle bifurcation as the mechanism of the transition from the interictal to the ictal (seizure) state. In the vicinity of this transition, the model captures important dynamical features of both interictal and ictal states. We study the nature of interictal spikes and early warnings of the transition predicted by this model. We further demonstrate that recurrent seizures emerge due to the interaction between two networks.
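
    The saddle node on an invariant circle (SNIC) bifurcation itself is easy to exhibit in its normal form, the theta neuron, where a rest state (the interictal analogue) exists only below the bifurcation at I = 0 (this is the canonical single-neuron form, not the paper's network model):

```python
import numpy as np

def has_fixed_point(I, n=10000):
    """Theta neuron: dtheta/dt = 1 - cos(theta) + (1 + cos(theta)) * I.
    A fixed point on the circle exists iff dtheta/dt changes sign."""
    theta = np.linspace(-np.pi, np.pi, n)
    dtheta = 1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * I
    return bool(dtheta.min() < 0.0 < dtheta.max())

interictal = has_fixed_point(I=-0.1)   # below threshold: rest state exists
ictal = has_fixed_point(I=+0.1)        # above threshold: continual rotation
```

    At the bifurcation the stable and unstable fixed points merge and vanish, leaving only rotation around the circle, the dynamical analogue of the transition to sustained seizure activity.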

  8. Effects of Calcium Spikes in the Layer 5 Pyramidal Neuron on Coincidence Detection and Activity Propagation

    PubMed Central

    Chua, Yansong; Morrison, Abigail

    2016-01-01

    The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. 
Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal to noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics on the single neuron and network level. PMID:27499740

  9. Effects of Calcium Spikes in the Layer 5 Pyramidal Neuron on Coincidence Detection and Activity Propagation.

    PubMed

    Chua, Yansong; Morrison, Abigail

    2016-01-01

    The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. 
Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal to noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics on the single neuron and network level.

  10. Delay-slope-dependent stability results of recurrent neural networks.

    PubMed

    Li, Tao; Zheng, Wei Xing; Lin, Chong

    2011-12-01

    By using the fact that the neuron activation functions are sector bounded and nondecreasing, this brief presents a new method, named the delay-slope-dependent method, for stability analysis of a class of recurrent neural networks with time-varying delays. This method includes more information on the slope of neuron activation functions and fewer matrix variables in the constructed Lyapunov-Krasovskii functional. Then some improved delay-dependent stability criteria with less computational burden and conservatism are obtained. Numerical examples are given to illustrate the effectiveness and the benefits of the proposed method.

  11. Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks

    DTIC Science & Technology

    1992-05-30


  12. Distributed multisensory integration in a recurrent network model through supervised learning

    NASA Astrophysics Data System (ADS)

    Wang, He; Wong, K. Y. Michael

    Sensory integration between different modalities has been extensively studied. It is suggested that the brain integrates signals from different modalities in a Bayesian optimal way. However, how the Bayesian rule is implemented in a neural network remains under debate. In this work we propose a biologically plausible recurrent network model, which can perform Bayesian multisensory integration after being trained by supervised learning. Our model is composed of two modules, each for one modality. We assume that each module is a recurrent network, whose activity represents the posterior distribution of each stimulus. The feedforward input on each module is the likelihood of each modality. The two modules are integrated through cross-links, which are feedforward connections from the other modality, and reciprocal connections, which are recurrent connections between different modules. By stochastic gradient descent, we successfully trained the feedforward and recurrent coupling matrices simultaneously, both of which resemble the Mexican hat. We also find that there is more than one set of coupling matrices that can approximate Bayes' theorem well. Specifically, reciprocal connections and cross-links will compensate for each other if one of them is removed. Even though trained with two inputs, the network's performance with only one input is in good accordance with what is predicted by Bayes' theorem.
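
    The Bayes-optimal target that such a trained network approximates is the standard reliability-weighted fusion of Gaussian cues, which is easy to state directly (the cue values below are made up):

```python
def combine(mu1, var1, mu2, var2):
    """Bayes-optimal fusion of two independent Gaussian cues: each cue is
    weighted by its reliability (inverse variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# A reliable visual cue at 10 deg and an unreliable auditory cue at 20 deg:
mu, var = combine(10.0, 1.0, 20.0, 4.0)
```

    The fused estimate lands closer to the more reliable cue, and its variance is smaller than either cue's alone, the two signatures of Bayes-optimal integration tested in the multisensory literature.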

  13. Where’s the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network

    PubMed Central

    Hartmann, Christoph; Lazar, Andreea; Nessler, Bernhard; Triesch, Jochen

    2015-01-01

    Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms. PMID:26714277

  14. Intrinsically-generated fluctuating activity in excitatory-inhibitory networks.

    PubMed

    Mastrogiuseppe, Francesca; Ostojic, Srdjan

    2017-04-01

    Recurrent networks of non-linear units display a variety of dynamical regimes depending on the structure of their synaptic connectivity. A particularly remarkable phenomenon is the appearance of strongly fluctuating, chaotic activity in networks of deterministic, but randomly connected rate units. How this type of intrinsically generated fluctuations appears in more realistic networks of spiking neurons has been a long-standing question. To ease the comparison between rate and spiking networks, recent works investigated the dynamical regimes of randomly-connected rate networks with segregated excitatory and inhibitory populations, and firing rates constrained to be positive. These works derived general dynamical mean field (DMF) equations describing the fluctuating dynamics, but solved these equations only in the case of purely inhibitory networks. Using a simplified excitatory-inhibitory architecture in which DMF equations are more easily tractable, here we show that the presence of excitation qualitatively modifies the fluctuating activity compared to purely inhibitory networks. In the presence of excitation, intrinsically generated fluctuations induce a strong increase in mean firing rates, a phenomenon that is much weaker in purely inhibitory networks. Excitation moreover induces two different fluctuating regimes: for moderate overall coupling, recurrent inhibition is sufficient to stabilize fluctuations; for strong coupling, firing rates are stabilized solely by the upper bound imposed on activity, even if inhibition is stronger than excitation. These results extend to more general network architectures, and to rate networks receiving noisy inputs mimicking spiking activity. Finally, we show that signatures of the second dynamical regime appear in networks of integrate-and-fire neurons.

  15. Intrinsically-generated fluctuating activity in excitatory-inhibitory networks

    PubMed Central

    Mastrogiuseppe, Francesca; Ostojic, Srdjan

    2017-01-01

    Recurrent networks of non-linear units display a variety of dynamical regimes depending on the structure of their synaptic connectivity. A particularly remarkable phenomenon is the appearance of strongly fluctuating, chaotic activity in networks of deterministic, but randomly connected rate units. How this type of intrinsically generated fluctuations appears in more realistic networks of spiking neurons has been a long-standing question. To ease the comparison between rate and spiking networks, recent works investigated the dynamical regimes of randomly-connected rate networks with segregated excitatory and inhibitory populations, and firing rates constrained to be positive. These works derived general dynamical mean field (DMF) equations describing the fluctuating dynamics, but solved these equations only in the case of purely inhibitory networks. Using a simplified excitatory-inhibitory architecture in which DMF equations are more easily tractable, here we show that the presence of excitation qualitatively modifies the fluctuating activity compared to purely inhibitory networks. In the presence of excitation, intrinsically generated fluctuations induce a strong increase in mean firing rates, a phenomenon that is much weaker in purely inhibitory networks. Excitation moreover induces two different fluctuating regimes: for moderate overall coupling, recurrent inhibition is sufficient to stabilize fluctuations; for strong coupling, firing rates are stabilized solely by the upper bound imposed on activity, even if inhibition is stronger than excitation. These results extend to more general network architectures, and to rate networks receiving noisy inputs mimicking spiking activity. Finally, we show that signatures of the second dynamical regime appear in networks of integrate-and-fire neurons. PMID:28437436

  16. Decreased Resting-State Activity in the Precuneus Is Associated With Depressive Episodes in Recurrent Depression.

    PubMed

    Liu, Chun-Hong; Ma, Xin; Yuan, Zhen; Song, Lu-Ping; Jing, Bing; Lu, Hong-Yu; Tang, Li-Rong; Fan, Jin; Walter, Martin; Liu, Cun-Zhi; Wang, Lihong; Wang, Chuan-Yue

    2017-04-01

    To investigate alterations in resting-state spontaneous brain activity in patients with major depressive disorder (MDD) experiencing multiple episodes. Between May 2007 and September 2014, 24 recurrent and 22 remitted patients diagnosed with MDD using the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I), and 69 healthy controls matched for age, sex, and educational level participated in this study. Among them, 1 healthy control was excluded due to excessive head motion. The fractional amplitude of low-frequency fluctuation (fALFF) was assessed for all recruited subjects during the completion of resting-state functional magnetic resonance imaging. Relationships between fALFF and clinical measurements, including number of depressive episodes and illness duration, were examined. Compared to patients with remitted MDD and to healthy controls, patients with recurrent MDD exhibited decreased fALFF in the right posterior insula and right precuneus and increased fALFF in the left ventral anterior cingulate cortex. Decreased fALFF in the right precuneus and increased fALFF in the right middle insula were correlated with the number of depressive episodes in the recurrent MDD groups (r = -0.75, P < .01 and r = 0.78, P < .01, respectively) and remitted MDD groups (r = -0.63, P < .01 and r = 0.41, P = .03, respectively). In addition to regions in the default mode network (DMN) and salience network, altered resting-state activity in the middle temporal and visual cortices was also identified. Altered resting-state activity was observed across several neural networks in patients with recurrent MDD. Consistent with the emerging theory that altered DMN activity is a risk factor for depression relapses, the association between reduced fALFF in the right precuneus and number of depressive episodes supports the role of the DMN in the pathology of recurrent depression. © Copyright 2017 Physicians Postgraduate Press, Inc.
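
    For reference, fALFF reduces to a simple spectral ratio; a sketch of one common definition, using the study's 0.01-0.08 Hz band (the real preprocessing pipeline involves many more steps):

```python
import numpy as np

def falff(signal, tr, low=0.01, high=0.08):
    """Fractional ALFF: summed spectral amplitude in the low-frequency band
    divided by the amplitude summed over the whole spectrum (one common
    definition; a given study's pipeline may differ in detail)."""
    amp = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=tr)
    band = (freqs >= low) & (freqs <= high)
    return amp[band].sum() / amp[1:].sum()   # skip the DC bin

tr = 2.0                                      # repetition time in seconds
t = np.arange(240) * tr                       # an 8-minute resting-state run
slow = np.sin(2 * np.pi * 0.05 * t)           # fluctuation inside 0.01-0.08 Hz
fast = np.sin(2 * np.pi * 0.2 * t)            # fluctuation outside the band
```

    A signal dominated by slow fluctuations scores near 1, a fast one near 0; "decreased fALFF" thus means a smaller share of a voxel's spontaneous power in the slow band.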

  17. Cross over of recurrence networks to random graphs and random geometric graphs

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2017-02-01

    Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems where the connection between two nodes is limited by the recurrence threshold. This condition makes the topology of every recurrence network unique, with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how the recurrence networks are different from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding a sufficient amount of noise to the time series, and to classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing the small changes in the network characteristics.
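
    The second crossover can be demonstrated directly: as the recurrence threshold epsilon grows toward the attractor's diameter, the recurrence network fills in toward a complete graph (a 1-D uniform random series stands in for a chaotic attractor in this sketch):

```python
import numpy as np

def recurrence_network(x, eps):
    """Adjacency of an eps-recurrence network for a scalar series
    (1-D embedding for simplicity): nodes i, j are linked when the states
    are within eps of each other; self-loops are removed."""
    d = np.abs(x[:, None] - x[None, :])
    A = (d <= eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

rng = np.random.default_rng(2)
x = rng.random(100)                      # stand-in for a chaotic series
small = recurrence_network(x, eps=0.05)
large = recurrence_network(x, eps=1.0)   # range ~ system size: complete graph
density_small = small.sum() / (100 * 99)
density_large = large.sum() / (100 * 99)
```

    With the interaction range at the system size every pair recurs, so the topology loses all attractor-specific structure, which is the crossover to the classical random-graph limit described above.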

  18. Integrative Analysis of Many Weighted Co-Expression Networks Using Tensor Computation

    PubMed Central

    Li, Wenyuan; Liu, Chun-Chi; Zhang, Tong; Li, Haifeng; Waterman, Michael S.; Zhou, Xianghong Jasmine

    2011-01-01

    The rapid accumulation of biological networks poses new challenges and calls for powerful integrative analysis tools. Most existing methods capable of simultaneously analyzing a large number of networks were primarily designed for unweighted networks, and cannot easily be extended to weighted networks. However, it is known that transforming weighted into unweighted networks by dichotomizing the edges of weighted networks with a threshold generally leads to information loss. We have developed a novel, tensor-based computational framework for mining recurrent heavy subgraphs in a large set of massive weighted networks. Specifically, we formulate the recurrent heavy subgraph identification problem as a heavy 3D subtensor discovery problem with sparse constraints. We describe an effective approach to solving this problem by designing a multi-stage, convex relaxation protocol, and a non-uniform edge sampling technique. We applied our method to 130 co-expression networks, and identified 11,394 recurrent heavy subgraphs, grouped into 2,810 families. We demonstrated that the identified subgraphs represent meaningful biological modules by validating against a large set of compiled biological knowledge bases. We also showed that the likelihood for a heavy subgraph to be meaningful increases significantly with its recurrence in multiple networks, highlighting the importance of the integrative approach to biological network analysis. Moreover, our approach based on weighted graphs detects many patterns that would be overlooked using unweighted graphs. In addition, we identified a large number of modules that occur predominately under specific phenotypes. This analysis resulted in a genome-wide mapping of gene network modules onto the phenome. Finally, by comparing module activities across many datasets, we discovered high-order dynamic cooperativeness in protein complex networks and transcriptional regulatory networks. PMID:21698123

  19. Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

    NASA Astrophysics Data System (ADS)

    Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2016-04-01

    High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. 
Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.

  20. Distribution of Orientation Selectivity in Recurrent Networks of Spiking Neurons with Different Random Topologies

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704
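
    The "grade of orientation selectivity" analyzed above is typically quantified with a circular-variance-style index; a sketch with synthetic tuning curves (the von Mises-like curve below is an assumption, not the paper's data):

```python
import numpy as np

def osi(rates, thetas):
    """Orientation selectivity index |sum r(theta) exp(2i*theta)| / sum r:
    0 for an untuned neuron, approaching 1 for a sharply tuned one."""
    z = np.sum(rates * np.exp(2j * thetas))
    return np.abs(z) / np.sum(rates)

thetas = np.linspace(0, np.pi, 8, endpoint=False)   # stimulus orientations
flat = np.ones(8)                                    # untuned neuron
tuned = np.exp(4 * np.cos(2 * (thetas - 0.5)))       # sharply tuned neuron
```

    A distribution of such indices across a recurrent network's neurons is exactly the quantity the linear theory in the abstract aims to predict.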

  1. A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints.

    PubMed

    Liang, X B; Wang, J

    2000-01-01

This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special case that can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even one outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
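A projection-type network in the same family as the model described above can be sketched as dx/dt = P(x - a∇f(x)) - x, where P clips the state to the feasible box. This is a standard textbook dynamics, not the paper's exact system; the quadratic objective and bounds below are illustrative:

```python
import numpy as np

Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # strictly convex quadratic objective
b = np.array([4.0, 1.0])                 # f(x) = 0.5 x^T Q x - b^T x
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def project(z):
    return np.clip(z, lo, hi)            # projection onto the feasible box

x = np.array([5.0, -5.0])                # initial state outside the box
a, dt = 0.2, 0.05
for _ in range(5000):                    # Euler integration of the dynamics
    x = x + dt * (project(x - a * (Q @ x - b)) - x)

print(x)   # converges to the constrained minimizer [1, 0]
```

Note the attractivity property in action: the trajectory starts well outside the box yet settles on the constrained optimum.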

  2. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically convergent to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model structure, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
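The Kalman-filtering stage of such a pipeline can be sketched for an AR(2) "speech" signal in white noise. In the paper the AR parameters are estimated by the recurrent network; here they are simply assumed known, and all signal constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2, sig_w, sig_v = 1.6, -0.8, 0.5, 1.0    # AR(2) coefficients (assumed known)
T = 2000
s = np.zeros(T)
for t in range(2, T):                          # clean signal: stable AR(2) process
    s[t] = a1 * s[t-1] + a2 * s[t-2] + sig_w * rng.normal()
y = s + sig_v * rng.normal(size=T)             # noisy observation

# State-space form x_t = [s_t, s_{t-1}]; standard Kalman recursion.
F = np.array([[a1, a2], [1.0, 0.0]])
H = np.array([[1.0, 0.0]])
Qn = np.array([[sig_w**2, 0.0], [0.0, 0.0]])
Rn = np.array([[sig_v**2]])

x = np.zeros(2); P = np.eye(2); s_hat = np.zeros(T)
for t in range(T):
    x = F @ x; P = F @ P @ F.T + Qn                   # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)     # Kalman gain
    x = x + K @ (y[t] - H @ x)                        # update
    P = (np.eye(2) - K @ H) @ P
    s_hat[t] = x[0]

print(np.mean((y - s)**2), np.mean((s_hat - s)**2))   # filtering reduces MSE
```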

  3. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application.

    PubMed

    Li, Shuai; Li, Yangming; Wang, Zheng

    2013-03-01

This paper presents a class of recurrent neural networks to solve quadratic programming problems. Unlike most existing recurrent neural networks for quadratic programming, the proposed neural network model converges in finite time, and the activation function is not required to be hard-limiting to achieve this finite convergence time. The stability, finite-time convergence, and optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winners-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem. Copyright © 2012 Elsevier Ltd. All rights reserved.
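The k-WTA application can be illustrated by casting selection as the QP min -v^T x s.t. sum(x) = k, 0 <= x <= 1, and letting a single dual variable act as an adaptive threshold. This is a simplified illustrative dynamics, not the paper's finite-time model; the activation width and step size are assumptions:

```python
import numpy as np

def kwta(v, k, width=0.01, dt=0.002, steps=20000):
    g = lambda u: np.clip(u / width, 0.0, 1.0)   # saturating activation
    y = 0.0                                      # dual variable (threshold)
    for _ in range(steps):
        y += dt * (g(v - y).sum() - k)           # raise threshold while > k active
    return g(v - y)

v = np.array([0.3, 0.9, 0.1, 0.7, 0.5])
x = kwta(v, k=2)
print(x)   # selects the two largest inputs (indices 1 and 3)
```

The threshold rises until exactly k units remain above it, at which point the drift term vanishes and the dynamics halt.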

  4. Modeling of cortical signals using echo state networks

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Wang, Yongji; Huang, Jiangshuai

    2009-10-01

Diverse modeling frameworks have been utilized with the ultimate goal of translating brain cortical signals into predictions of visible behavior. The inputs to these models are usually multidimensional neural recordings collected from relevant regions of a monkey's brain, while the outputs are the associated behavior, typically the 2-D or 3-D hand position of a primate. Our task here is to build a model that recovers movement trajectories from the neural signals collected simultaneously during the experiment. In this paper, we propose to use Echo State Networks (ESN) to map neural firing activities to hand positions. The ESN is a recently developed recurrent neural network (RNN) model. Besides the dynamic properties and short-term memory shared with other recurrent neural networks, it has a special echo state property that makes it a powerful model of nonlinear dynamic systems. What distinguishes it most from traditional recurrent neural networks is its learning method. In this paper we train the network with a refined version of its typical training method and obtain a better model.
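A minimal ESN of the kind described above consists of a fixed random reservoir and a linear readout trained by ridge regression. This is a generic sketch, not the paper's refined training method; the reservoir size, spectral radius, and toy one-step prediction task are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, washout = 100, 1000, 100
u = np.sin(0.2 * np.arange(T + 1))           # toy input: predict next sample

W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo state property: spectral radius < 1

x = np.zeros(N); X = np.zeros((T, N))
for t in range(T):                           # drive the fixed reservoir
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Only the linear readout is trained (ridge regression), the ESN hallmark.
Y = u[1:T + 1]
A = X[washout:]; b = Y[washout:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)

pred = X[washout:] @ w_out
print(np.mean((pred - b) ** 2))              # small one-step prediction error
```

The key design choice, and what distinguishes ESNs from conventionally trained RNNs, is that the recurrent weights are never adapted; only the readout is fit, which makes training a single linear solve.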

  5. Spike timing analysis in neural networks with unsupervised synaptic plasticity

    NASA Astrophysics Data System (ADS)

    Mizusaki, B. E. P.; Agnes, E. J.; Brunnet, L. G.; Erichsen, R., Jr.

    2013-01-01

The synaptic plasticity rules that sculpt a neural network architecture are key elements for understanding cortical processing, as they may explain the emergence of stable, functional activity while avoiding runaway excitation. For an associative memory framework, they should be built in such a way as to enable the network to reproduce a robust spatio-temporal trajectory in response to an external stimulus. Still, how these rules may be implemented in recurrent networks, and how they relate to the networks' capacity for pattern recognition, remains unclear. We studied the effects of three phenomenological unsupervised rules in sparsely connected recurrent networks for associative memory: spike-timing-dependent plasticity, short-term plasticity, and homeostatic scaling. The stability of the system is monitored during the learning process as the mean firing rate converges to a value determined by the homeostatic scaling. Afterwards, it is possible to measure the recovery efficiency of the activity following each initial stimulus. This is evaluated by a measure of correlation between spike timings, and we analyze the full memory separation capacity and limitations of this system.
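The first of the three rules, pair-based STDP, can be written down in a few lines: causal (pre-before-post) spike pairings potentiate the synapse, anti-causal pairings depress it. The amplitudes and time constant below are generic illustrative values, not those used in the study:

```python
import numpy as np

A_plus, A_minus, tau = 0.01, 0.012, 20.0      # amplitudes, time constant in ms (assumed)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:
        return A_plus * np.exp(-dt / tau)     # pre before post: potentiation
    return -A_minus * np.exp(dt / tau)        # post before pre: depression

# Net weight change accumulated over all spike pairs of two short trains.
pre, post = [5.0, 30.0], [12.0, 28.0]
total = sum(stdp_dw(tp, tq) for tp in pre for tq in post)
print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0), total)
```

With A_minus slightly larger than A_plus, the rule is depression-dominated, one common way to keep recurrent excitation from running away.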

  6. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    PubMed

    Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent

    2015-08-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

  7. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses

    PubMed Central

    Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent

    2015-01-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure. PMID:26291697

  8. A theory of cerebellar cortex and adaptive motor control based on two types of universal function approximation capability.

    PubMed

    Fujita, Masahiko

    2016-03-01

    Lesions of the cerebellum result in large errors in movements. The cerebellum adaptively controls the strength and timing of motor command signals depending on the internal and external environments of movements. The present theory describes how the cerebellar cortex can control signals for accurate and timed movements. A model network of the cerebellar Golgi and granule cells is shown to be equivalent to a multiple-input (from mossy fibers) hierarchical neural network with a single hidden layer of threshold units (granule cells) that receive a common recurrent inhibition (from a Golgi cell). The weighted sum of the hidden unit signals (Purkinje cell output) is theoretically analyzed regarding the capability of the network to perform two types of universal function approximation. The hidden units begin firing as the excitatory inputs exceed the recurrent inhibition. This simple threshold feature leads to the first approximation theory, and the network final output can be any continuous function of the multiple inputs. When the input is constant, this output becomes stationary. However, when the recurrent unit activity is triggered to decrease or the recurrent inhibition is triggered to increase through a certain mechanism (metabotropic modulation or extrasynaptic spillover), the network can generate any continuous signals for a prolonged period of change in the activity of recurrent signals, as the second approximation theory shows. By incorporating the cerebellar capability of two such types of approximations to a motor system, in which learning proceeds through repeated movement trials with accompanying corrections, accurate and timed responses for reaching the target can be adaptively acquired. Simple models of motor control can solve the motor error vs. sensory error problem, as well as the structural aspects of credit (or error) assignment problem. 
Two physiological experiments are proposed for examining the delay and trace conditioning of eyelid responses, as well as saccade adaptation, to investigate this novel idea of cerebellar processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Embedding recurrent neural networks into predator-prey models.

    PubMed

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models, also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.

  10. Region stability analysis and tracking control of memristive recurrent neural network.

    PubMed

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Firstly, simulations show that the memristive recurrent neural network has richer dynamics than the traditional recurrent neural network. Then it is derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. Finally, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A Recurrent Network Model of Somatosensory Parametric Working Memory in the Prefrontal Cortex

    PubMed Central

    Miller, Paul; Brody, Carlos D; Romo, Ranulfo; Wang, Xiao-Jing

    2015-01-01

    A parametric working memory network stores the information of an analog stimulus in the form of persistent neural activity that is monotonically tuned to the stimulus. The family of persistent firing patterns with a continuous range of firing rates must all be realizable under exactly the same external conditions (during the delay when the transient stimulus is withdrawn). How this can be accomplished by neural mechanisms remains an unresolved question. Here we present a recurrent cortical network model of irregularly spiking neurons that was designed to simulate a somatosensory working memory experiment with behaving monkeys. Our model reproduces the observed positively and negatively monotonic persistent activity, and heterogeneous tuning curves of memory activity. We show that fine-tuning mathematically corresponds to a precise alignment of cusps in the bifurcation diagram of the network. Moreover, we show that the fine-tuned network can integrate stimulus inputs over several seconds. Assuming that such time integration occurs in neural populations downstream from a tonically persistent neural population, our model is able to account for the slow ramping-up and ramping-down behaviors of neurons observed in prefrontal cortex. PMID:14576212
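The fine-tuning issue discussed above can be seen in the simplest possible setting: a single linear rate unit with recurrent gain w, tau dr/dt = -r + w r + I(t). At the fine-tuned point w = 1 the unit integrates its input and holds a persistent rate after the stimulus is withdrawn; away from that point the memory decays. This is a toy caricature of the paper's mechanism, with illustrative parameters:

```python
import numpy as np

def run(w, tau=100.0, dt=1.0, T=3000, stim=(0, 500), amp=0.01):
    r, trace = 0.0, []
    for t in range(T):
        I = amp if stim[0] <= t < stim[1] else 0.0   # transient stimulus
        r += dt / tau * (-r + w * r + I)             # leaky rate dynamics
        trace.append(r)
    return np.array(trace)

tuned, detuned = run(1.0), run(0.95)
print(tuned[-1], detuned[-1])   # tuned rate persists; detuned rate decays
```

The exquisite sensitivity to w is exactly why the paper's network requires a precise alignment of cusps in its bifurcation diagram to store a continuum of rates.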

  12. Spatial Learning and Action Planning in a Prefrontal Cortical Network Model

    PubMed Central

    Martinet, Louis-Emmanuel; Sheynikhovich, Denis; Benchenane, Karim; Arleo, Angelo

    2011-01-01

    The interplay between hippocampus and prefrontal cortex (PFC) is fundamental to spatial cognition. Complementing hippocampal place coding, prefrontal representations provide more abstract and hierarchically organized memories suitable for decision making. We model a prefrontal network mediating distributed information processing for spatial learning and action planning. Specific connectivity and synaptic adaptation principles shape the recurrent dynamics of the network arranged in cortical minicolumns. We show how the PFC columnar organization is suitable for learning sparse topological-metrical representations from redundant hippocampal inputs. The recurrent nature of the network supports multilevel spatial processing, allowing structural features of the environment to be encoded. An activation diffusion mechanism spreads the neural activity through the column population leading to trajectory planning. The model provides a functional framework for interpreting the activity of PFC neurons recorded during navigation tasks. We illustrate the link from single unit activity to behavioral responses. The results suggest plausible neural mechanisms subserving the cognitive “insight” capability originally attributed to rodents by Tolman & Honzik. Our time course analysis of neural responses shows how the interaction between hippocampus and PFC can yield the encoding of manifold information pertinent to spatial planning, including prospective coding and distance-to-goal correlates. PMID:21625569

  13. Locking of correlated neural activity to ongoing oscillations

    PubMed Central

    Helias, Moritz

    2017-01-01

    Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of signal processing information in the brain. A salient question is therefore if and how oscillations interact with spike synchrony and in how far these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically-driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis. PMID:28604771

  14. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    PubMed

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.

We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure with a reduced number of points that preserves the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach on time series from both the logistic map and experimental fluid flows, showing that our method distinguishes different dynamics as well as traditional recurrence analysis does. Therefore, the proposed methodology characterizes the recurrence matrix adequately, while using a reduced set of points from the original recurrence plots.
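The underlying recurrence-plot-to-network step can be sketched without the RDE point reduction: the recurrence matrix R_ij = 1 (states i and j closer than eps) is read directly as the adjacency matrix of an undirected graph. The threshold and series length below are assumptions:

```python
import numpy as np

def logistic(r, x0, n):
    x = np.empty(n); x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i-1] * (1 - x[i-1])
    return x

x = logistic(4.0, 0.3, 300)              # logistic map in the chaotic regime

# Recurrence matrix as adjacency matrix (1-d states, threshold assumed).
eps = 0.05
A = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
np.fill_diagonal(A, 0)

# Simple network-theoretic measures of the recurrence network.
degree = A.sum(axis=1)
density = A.sum() / (len(x) * (len(x) - 1))
print(degree.mean(), density)
```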

  16. The frequency preference of neurons and synapses in a recurrent oscillatory network.

    PubMed

    Tseng, Hua-an; Martinez, Diana; Nadim, Farzan

    2014-09-17

    A variety of neurons and synapses shows a maximal response at a preferred frequency, generally considered to be important in shaping network activity. We are interested in whether all neurons and synapses in a recurrent oscillatory network can have preferred frequencies and, if so, whether these frequencies are the same or correlated, and whether they influence the network activity. We address this question using identified neurons in the pyloric network of the crab Cancer borealis. Previous work has shown that the pyloric pacemaker neurons exhibit membrane potential resonance whose resonance frequency is correlated with the network frequency. The follower lateral pyloric (LP) neuron makes reciprocally inhibitory synapses with the pacemakers. We find that LP shows resonance at a higher frequency than the pacemakers and the network frequency falls between the two. We also find that the reciprocal synapses between the pacemakers and LP have preferred frequencies but at significantly lower values. The preferred frequency of the LP to pacemaker synapse is correlated with the presynaptic preferred frequency, which is most pronounced when the peak voltage of the LP waveform is within the dynamic range of the synaptic activation curve and a shift in the activation curve by the modulatory neuropeptide proctolin shifts the frequency preference. Proctolin also changes the power of the LP neuron resonance without significantly changing the resonance frequency. These results indicate that different neuron types and synapses in a network may have distinct preferred frequencies, which are subject to neuromodulation and may interact to shape network oscillations. Copyright © 2014 the authors 0270-6474/14/3412933-13$15.00/0.
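Membrane potential resonance of the kind measured above is often modeled by a linear membrane with a slow restorative current, which behaves like an RLC circuit: the impedance magnitude |Z(f)| peaks at a nonzero frequency. The component values below are illustrative orders of magnitude, not fits to pyloric neurons:

```python
import numpy as np

R, C = 100e6, 100e-12          # leak resistance (ohm), capacitance (farad)
r_L, L = 50e6, 5e6             # slow-current branch: resistance, "inductance"

f = np.linspace(0.1, 20, 500)  # frequency sweep in Hz
w = 2 * np.pi * f
# Parallel combination of leak, capacitance, and the inductive-like branch.
Z = 1.0 / (1.0 / R + 1j * w * C + 1.0 / (r_L + 1j * w * L))
f_res = f[np.argmax(np.abs(Z))]
print(f_res)                   # preferred (resonance) frequency, in Hz
```

Shifting r_L or L moves the peak, a crude analogue of how a neuromodulator such as proctolin could shift a neuron's or synapse's frequency preference.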

  17. Functional Resistance to Recurrent Spatially Heterogeneous Disturbances Is Facilitated by Increased Activity of Surviving Bacteria in a Virtual Ecosystem

    PubMed Central

    König, Sara; Worrich, Anja; Banitz, Thomas; Harms, Hauke; Kästner, Matthias; Miltner, Anja; Wick, Lukas Y.; Frank, Karin; Thullner, Martin; Centler, Florian

    2018-01-01

    Bacterial degradation of organic compounds is an important ecosystem function with relevance to, e.g., the cycling of elements or the degradation of organic contaminants. It remains an open question, however, to which extent ecosystems are able to maintain such biodegradation function under recurrent disturbances (functional resistance) and how this is related to the bacterial biomass abundance. In this paper, we use a numerical simulation approach to systematically analyze the dynamic response of a microbial population to recurrent disturbances of different spatial distribution. The spatially explicit model considers microbial degradation, growth, dispersal, and spatial networks that facilitate bacterial dispersal mimicking effects of mycelial networks in nature. We find: (i) There is a certain capacity for high resistance of biodegradation performance to recurrent disturbances. (ii) If this resistance capacity is exceeded, spatial zones of different biodegradation performance develop, ranging from no or reduced to even increased performance. (iii) Bacterial biomass and biodegradation dynamics respond inversely to the spatial fragmentation of disturbances: overall biodegradation performance improves with increasing fragmentation, but bacterial biomass declines. (iv) Bacterial dispersal networks can enhance functional resistance against recurrent disturbances, mainly by reactivating zones in the core of disturbed areas, even though this leads to an overall reduction of bacterial biomass. PMID:29696013

  18. Recurrent interactions between the input and output of a songbird cortico-basal ganglia pathway are implicated in vocal sequence variability

    PubMed Central

    Hamaguchi, Kosuke; Mooney, Richard

    2012-01-01

    Complex brain functions, such as the capacity to learn and modulate vocal sequences, depend on activity propagation in highly distributed neural networks. To explore the synaptic basis of activity propagation in such networks, we made dual in vivo intracellular recordings in anesthetized zebra finches from the input (nucleus HVC) and output (lateral magnocellular nucleus of the anterior nidopallium (LMAN)) neurons of a songbird cortico-basal ganglia (BG) pathway necessary to the learning and modulation of vocal motor sequences. These recordings reveal evidence of bidirectional interactions, rather than only feedforward propagation of activity from HVC to LMAN, as had been previously supposed. A combination of dual and triple recording configurations and pharmacological manipulations was used to map out circuitry by which activity propagates from LMAN to HVC. These experiments indicate that activity travels to HVC through at least two independent ipsilateral pathways, one of which involves fast signaling through a midbrain dopaminergic cell group, reminiscent of recurrent mesocortical loops described in mammals. We then used in vivo pharmacological manipulations to establish that augmented LMAN activity is sufficient to restore high levels of sequence variability in adult birds, suggesting that recurrent interactions through highly distributed forebrain – midbrain pathways can modulate learned vocal sequences. PMID:22915110

  19. Character recognition from trajectory by recurrent spiking neural networks.

    PubMed

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven efficient on time series data. However, how to use recurrence to improve the performance of spiking neural networks remains an open problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed in which varying time ranges of the input streams are used in different recurrent layers. This improves the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from the University of Edinburgh. The results show that our method achieves a higher average recognition accuracy than existing methods.

  20. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
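The constrained least absolute deviation application mentioned above has a one-dimensional caricature: fitting a constant c to data by minimizing the piecewise-linear objective sum |y_i - c| with the hard-limiting flow dc/dt = sum sign(y_i - c), which drives c to the median. This scalar sketch illustrates the nonsmooth-dynamics idea only, not the paper's network or its finite-time guarantee:

```python
import numpy as np

y = np.array([1.0, 2.0, 2.5, 3.0, 7.0, 9.0, 10.0])
c, dt = 0.0, 1e-3
for _ in range(20000):
    c += dt * np.sign(y - c).sum()   # hard-limiting (sign) activation
print(c)   # chatters in a small band around the median, 3.0
```

The sign activation makes the flow move at constant speed until the number of points above and below c balances, which is why such dynamics can reach the solution in finite time rather than only asymptotically.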

  1. Attractor neural networks with resource-efficient synaptic connectivity

    NASA Astrophysics Data System (ADS)

    Pehlevan, Cengiz; Sengupta, Anirvan

Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with memory storage function to shape the synaptic connectivity of attractor networks. We propose that, given a set of memories in the form of population activity patterns, the neural circuit chooses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles: 1) connectivity is sparse, because synapses are costly; 2) bidirectional connections are overrepresented; and 3) bidirectional connections are stronger, because attractor states need strong recurrence.

  2. Self-organized topology of recurrence-based complex networks

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Liu, Gang

    2013-12-01

With rapid technological advancement, networks are almost everywhere in our daily life. Network theory provides a new way to investigate the dynamics of complex systems. As a result, many methods have been proposed to construct a network from a nonlinear time series, including the partition of state space, visibility graphs, nearest neighbors, and recurrence approaches. However, most previous works focus on deriving the adjacency matrix to represent the complex network and extract new network-theoretic measures. Although the adjacency matrix provides connectivity information about nodes and edges, the network geometry can take variable forms. The research objective of this article is to develop a self-organizing approach to derive the steady geometric structure of a network from its adjacency matrix. We simulate the recurrence network as a physical system by treating the edges as springs and the nodes as electrically charged particles. Then, force-directed algorithms are developed to automatically organize the network geometry by minimizing the system energy. Further, a set of experiments was designed to investigate important factors (i.e., dynamical systems, network construction methods, force-model parameters, nonhomogeneous distribution) affecting this self-organizing process. Interestingly, experimental results show that the self-organized geometry recovers the attractor of the dynamical system that produced the adjacency matrix. This research addresses the question "what is the self-organizing geometry of a recurrence network?" and provides a new way to reproduce the attractor or time series from the recurrence plot. As a result, novel network-theoretic measures (e.g., average path length and proximity ratio) can be computed based on actual node-to-node distances in the self-organized network topology. The paper brings physical models into recurrence analysis and discloses the spatial geometry of recurrence networks.
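The springs-and-charges picture can be sketched with a toy force-directed layout applied to the recurrence network of a short sine series. This is an illustrative re-implementation of the idea, not the paper's algorithm; all constants (threshold, force coefficients, step clipping) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sin(np.linspace(0, 4 * np.pi, 60))
A = (np.abs(x[:, None] - x[None, :]) < 0.1).astype(float)   # recurrence adjacency
np.fill_diagonal(A, 0)

pos = rng.normal(size=(60, 2))           # random initial 2-D embedding
for _ in range(500):
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    rep = 0.05 * (diff / dist[..., None] ** 3).sum(axis=1)  # charge-like repulsion
    att = -0.01 * (A[..., None] * diff).sum(axis=1)         # spring attraction on edges
    pos += np.clip(rep + att, -0.1, 0.1)                    # clipped (damped) update

print(pos.std())   # layout settles into a finite, spread-out geometry
```

Connected (recurrent) states are pulled together while all nodes repel, so the layout's node-to-node distances, rather than the raw adjacency matrix, carry the geometric information the paper exploits.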

  3. Self-organized topology of recurrence-based complex networks.

    PubMed

    Yang, Hui; Liu, Gang

    2013-12-01

    With the rapid technological advancement, networks are almost everywhere in our daily life. Network theory leads to a new way to investigate the dynamics of complex systems. As a result, many methods have been proposed to construct a network from nonlinear time series, including the partition of state space, visibility graph, nearest neighbors, and recurrence approaches. However, most previous works focus on deriving the adjacency matrix to represent the complex network and on extracting new network-theoretic measures. Although the adjacency matrix provides connectivity information of nodes and edges, the network geometry can take variable forms. The research objective of this article is to develop a self-organizing approach to derive the steady geometric structure of a network from the adjacency matrix. We simulate the recurrence network as a physical system by treating the edges as springs and the nodes as electrically charged particles. Then, force-directed algorithms are developed to automatically organize the network geometry by minimizing the system energy. Further, a set of experiments was designed to investigate important factors (i.e., dynamical systems, network construction methods, force-model parameter, nonhomogeneous distribution) affecting this self-organizing process. Interestingly, experimental results show that the self-organized geometry recovers the attractor of the dynamical system that produced the adjacency matrix. This research addresses the question "what is the self-organizing geometry of a recurrence network?" and provides a new way to reproduce the attractor or time series from the recurrence plot. As a result, novel network-theoretic measures (e.g., average path length and proximity ratio) can be achieved based on actual node-to-node distances in the self-organized network topology. The paper brings physical models into recurrence analysis and discloses the spatial geometry of recurrence networks.
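
The spring-and-charge relaxation described above can be sketched in a few lines: edges pull connected nodes together like springs, while every pair of nodes repels like charged particles, and positions are nudged along the net force until the layout settles. This is an illustrative sketch with made-up constants, not the authors' implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def force_directed_layout(adj, dim=2, iters=500, k_spring=0.1,
                          k_repel=0.01, step=0.05, seed=0):
    """Relax node positions: edges act as springs, all node pairs repel
    like charged particles (illustrative constants, not the paper's)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    pos = rng.standard_normal((n, dim))
    for _ in range(iters):
        diff = pos[:, None, :] - pos[None, :, :]        # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1) + 1e-12
        np.fill_diagonal(dist, np.inf)                  # no self-interaction
        repel = k_repel * diff / dist[..., None] ** 3   # Coulomb-like repulsion
        attract = -k_spring * adj[..., None] * diff     # spring pull along edges
        pos += step * (repel.sum(axis=1) + attract.sum(axis=1))
    return pos
```

On a ring-shaped adjacency matrix this settles into a roughly circular layout, so connected nodes end up closer together than unconnected ones, which is the geometric recovery the abstract describes.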

  4. Self-organized topology of recurrence-based complex networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Hui, E-mail: huiyang@usf.edu; Liu, Gang

    With the rapid technological advancement, networks are almost everywhere in our daily life. Network theory leads to a new way to investigate the dynamics of complex systems. As a result, many methods have been proposed to construct a network from nonlinear time series, including the partition of state space, visibility graph, nearest neighbors, and recurrence approaches. However, most previous works focus on deriving the adjacency matrix to represent the complex network and on extracting new network-theoretic measures. Although the adjacency matrix provides connectivity information of nodes and edges, the network geometry can take variable forms. The research objective of this article is to develop a self-organizing approach to derive the steady geometric structure of a network from the adjacency matrix. We simulate the recurrence network as a physical system by treating the edges as springs and the nodes as electrically charged particles. Then, force-directed algorithms are developed to automatically organize the network geometry by minimizing the system energy. Further, a set of experiments was designed to investigate important factors (i.e., dynamical systems, network construction methods, force-model parameter, nonhomogeneous distribution) affecting this self-organizing process. Interestingly, experimental results show that the self-organized geometry recovers the attractor of the dynamical system that produced the adjacency matrix. This research addresses the question “what is the self-organizing geometry of a recurrence network?” and provides a new way to reproduce the attractor or time series from the recurrence plot. As a result, novel network-theoretic measures (e.g., average path length and proximity ratio) can be achieved based on actual node-to-node distances in the self-organized network topology. The paper brings physical models into recurrence analysis and discloses the spatial geometry of recurrence networks.

  5. Charting epilepsy by searching for intelligence in network space with the help of evolving autonomous agents.

    PubMed

    Ohayon, Elan L; Kalitzin, Stiliyan; Suffczynski, Piotr; Jin, Frank Y; Tsang, Paul W; Borrett, Donald S; Burnham, W McIntyre; Kwan, Hon C

    2004-01-01

    The problem of demarcating neural network space is formidable. A simple fully connected recurrent network of five units (binary activations, synaptic weight resolution of 10) has 3.2 × 10^26 possible initial states. The problem increases drastically with scaling. Here we consider three complementary approaches to help direct the exploration to distinguish epileptic from healthy networks. [1] First, we perform a gross mapping of the space of five-unit continuous recurrent networks using randomized weights and initial activations. The majority of weight patterns (>70%) were found to result in neural assemblies exhibiting periodic limit-cycle oscillatory behavior. [2] Next we examine the activation space of non-periodic networks, demonstrating that the emergence of paroxysmal activity does not require changes in connectivity. [3] The next challenge is to focus the search of network space to identify networks with more complex dynamics. Here we rely on a major available indicator critical to clinical assessment but largely ignored by epilepsy modelers, namely behavioral states. To this end, we connected the above network layout to an external robot in which interactive states were evolved. The first random generation showed a distribution in line with approach [1]: the predominant phenotypes were fixed-point or oscillatory, with seizure-like motor output. As evolution progressed, the profile changed markedly. Within 20 generations the entire population was able to navigate a simple environment, with all individuals exhibiting multiply-stable behaviors and no cases of default locked limit-cycle oscillatory motor behavior. The resultant population may thus afford us a view of the architectural principles demarcating healthy biological networks from the pathological. The approach has an advantage over other epilepsy modeling techniques in providing a way to clarify whether observed dynamics or suggested therapies are pointing to computational viability or dead space.
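
The state count quoted in this record can be reproduced directly, assuming the five fully connected units include self-connections (5 × 5 = 25 weights, each at one of 10 resolvable values, times 2^5 binary activation patterns):

```python
units = 5
activation_states = 2 ** units      # binary activations: 2^5 = 32 patterns
n_weights = units * units           # fully connected incl. self-loops: 25 weights
weight_states = 10 ** n_weights     # weight resolution of 10 values per synapse
total = activation_states * weight_states
print(f"{total:.1e}")               # 3.2e+26 possible initial states
```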

  6. Human activities recognition by head movement using partial recurrent neural network

    NASA Astrophysics Data System (ADS)

    Tan, Henry C. C.; Jia, Kui; De Silva, Liyanage C.

    2003-06-01

    Traditionally, human activities recognition has been achieved mainly by statistical pattern recognition methods or the Hidden Markov Model (HMM). In this paper, we propose a novel use of the connectionist approach for the recognition of ten simple human activities: walking, sitting down, getting up, squatting down, and standing up, in both lateral and frontal views, in an office environment. By tracking the head movement of the subjects over consecutive frames from a database of color image sequences, and incorporating the Elman model of the partial recurrent neural network (RNN), which learns the sequential patterns of relative change of the head location in the images, the proposed system is able to robustly classify all ten activities performed by unseen subjects of both sexes and of different race and physique, with a recognition rate as high as 92.5%. This demonstrates the potential of employing partial RNNs to recognize complex activities in the increasingly popular human-activities-based applications.
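
The Elman idea can be conveyed with a minimal forward pass: the hidden state is copied back as context at every frame, so the readout at the end of the sequence reflects the whole history of head displacements. Layer sizes, the ten-way softmax readout, and all names are illustrative assumptions, not the authors' trained system.

```python
import numpy as np

def elman_forward(xs, W_in, W_rec, W_out, b_h, b_o):
    """Elman network: the hidden state feeds back as 'context' units,
    letting the final readout see the sequence of head displacements."""
    h = np.zeros(W_rec.shape[0])
    for x in xs:                                  # one (dx, dy) head step per frame
        h = np.tanh(W_in @ x + W_rec @ h + b_h)   # hidden + context update
    logits = W_out @ h + b_o                      # class scores after the sequence
    e = np.exp(logits - logits.max())
    return e / e.sum()                            # softmax over activity classes
```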

  7. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    PubMed Central

    Vesperini, Fabio; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies have introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases. PMID:28182121
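
The detection principle here (score each frame by its reconstruction error under a model of "normal" audio, and flag large errors as novel) can be sketched with a PCA reconstruction standing in for the trained LSTM denoising autoencoder. This is a deliberate simplification and all names are hypothetical.

```python
import numpy as np

def fit_normal_model(frames, k=4):
    """Stand-in for the trained autoencoder: a PCA basis fitted on
    'normal' spectral frames (illustrative substitute for the LSTM)."""
    mu = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mu, full_matrices=False)
    return mu, vt[:k]

def novelty_scores(frames, mu, basis):
    """Reconstruction error per frame; a high error marks a novel event."""
    centered = frames - mu
    recon = centered @ basis.T @ basis
    return np.linalg.norm(centered - recon, axis=1)
```

A threshold such as the mean plus three standard deviations of the training scores then turns these scores into novelty decisions.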

  8. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks

    PubMed Central

    Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto

    2014-01-01

    Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have so far been studied mainly at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and its spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model. PMID:24634645

  9. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks.

    PubMed

    Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto

    2014-01-01

    Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have so far been studied mainly at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and its spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model.
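
The modeling difference at the heart of these two records fits in one Euler step: a current-based synapse injects a drive that ignores the membrane potential, while a conductance-based synapse is multiplied by the driving force and therefore shrinks as the neuron depolarizes toward the reversal potential. Constants and units are illustrative rather than the paper's parameterization, and spiking/reset is omitted.

```python
import numpy as np

def lif_step(v, syn, dt=0.1, tau=20.0, v_rest=-70.0, e_exc=0.0,
             mode="conductance"):
    """One Euler step of a LIF membrane (mV, ms; illustrative scaling).
    Current-based: synaptic drive independent of v.
    Conductance-based: the same variable scales with (E_exc - v)."""
    if mode == "current":
        i_syn = syn                    # fixed current, no v dependence
    else:
        i_syn = syn * (e_exc - v)      # shrinks as v approaches E_exc
    dv = (-(v - v_rest) + i_syn) * (dt / tau)
    return v + dv
```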

  10. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    PubMed

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  11. Solving differential equations with unknown constitutive relations as recurrent neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
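
The construction can be sketched as follows: Euler-discretizing dx/dt = -k(x), with the unknown sink k represented by a small differentiable function, makes each integration step one cell of a recurrent network whose hidden state is the physical variable itself. Training is omitted here (the paper trains the sink by backpropagation in TensorFlow); the one-hidden-layer form and all names are assumptions.

```python
import numpy as np

def sink(x, params):
    """Small neural approximation of the unknown reaction-rate term
    (hypothetical one-hidden-layer form, not the paper's architecture)."""
    w1, b1, w2, b2 = params
    return w2 @ np.tanh(w1 * x + b1) + b2

def unroll(x0, params, dt=0.01, steps=100):
    """Euler-discretized ODE dx/dt = -sink(x): each step is one cell
    of a recurrent network whose state is the physical variable x."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - dt * float(sink(xs[-1], params)))
    return np.array(xs)
```

With parameters chosen so that the sink is approximately linear, the unrolled network reproduces exponential decay, which is the kind of behavior a trained sink would be fitted to match.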

  12. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    PubMed Central

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  13. Contributions of diverse excitatory and inhibitory neurons to recurrent network activity in cerebral cortex.

    PubMed

    Neske, Garrett T; Patrick, Saundra L; Connors, Barry W

    2015-01-21

    The recurrent synaptic architecture of neocortex allows for self-generated network activity. One form of such activity is the Up state, in which neurons transiently receive barrages of excitatory and inhibitory synaptic inputs that depolarize many neurons to spike threshold before returning to a relatively quiescent Down state. The extent to which different cell types participate in Up states is still unclear. Inhibitory interneurons have particularly diverse intrinsic properties and synaptic connections with the local network, suggesting that different interneurons might play different roles in activated network states. We have studied the firing, subthreshold behavior, and synaptic conductances of identified cell types during Up and Down states in layers 5 and 2/3 in mouse barrel cortex in vitro. We recorded from pyramidal cells and interneurons expressing parvalbumin (PV), somatostatin (SOM), vasoactive intestinal peptide (VIP), or neuropeptide Y. PV cells were the most active interneuron subtype during the Up state, yet the other subtypes also received substantial synaptic conductances and often generated spikes. In all cell types except PV cells, the beginning of the Up state was dominated by synaptic inhibition, which decreased thereafter; excitation was more persistent, suggesting that inhibition is not the dominant force in terminating Up states. Compared with barrel cortex, SOM and VIP cells were much less active in entorhinal cortex during Up states. Our results provide a measure of functional connectivity of various neuron types in barrel cortex and suggest differential roles for interneuron types in the generation and control of persistent network activity. Copyright © 2015 the authors 0270-6474/15/351089-17$15.00/0.

  14. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  15. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    NASA Astrophysics Data System (ADS)

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo

    2015-10-01

    Neuromorphic chips embed the computational principles of the nervous system into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.

  16. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems.

    PubMed

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; del Giudice, Paolo

    2015-10-14

    Neuromorphic chips embed the computational principles of the nervous system into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
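
A minimal software analogue of the attractor-retrieval primitive in these two records is the classic Hopfield construction: Hebbian weights store ±1 patterns as point attractors, and relaxing from a corrupted stimulus falls back into the stored prototype's basin. This is a textbook sketch, not the spiking neuromorphic implementation described above.

```python
import numpy as np

def store(patterns):
    """Hebbian weights: each +/-1 row pattern becomes a point attractor."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def relax(w, state, steps=20):
    """Deterministic relaxation: the stimulus sets the initial state,
    and the dynamics fall into the nearest stored attractor."""
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1.0      # break exact ties deterministically
    return state
```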

  17. Modeling and control of magnetorheological fluid dampers using neural networks

    NASA Astrophysics Data System (ADS)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects of utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling of MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper online, based on the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the developed neural network models are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately, and that the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in semi-active mode.

  18. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.

    PubMed

    Li, Shuai; Li, Yangming

    2013-10-28

    The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, the computation burden increases intensively as the sampling period decreases, and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recently proposed recurrent neural network by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. The advancements in complex-valued neural networks make it possible to extend the existing real-valued ZNN for solving the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated, and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
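
The design recipe can be illustrated on a scalar time-varying equation a(t)x(t) = b(t): define the error e = ax - b, impose the ZNN formula de/dt = -γψ(e) with the sign-bi-power activation ψ(e) = |e|^r sgn(e) + |e|^(1/r) sgn(e), and solve for dx/dt. This scalar sketch only shows the recipe; the paper treats the complex-valued matrix Sylvester case, and all constants here are illustrative.

```python
import numpy as np

def sign_bi_power(e, r=0.5):
    """Sign-bi-power activation: |e|^r sgn(e) + |e|^(1/r) sgn(e);
    the |e|^r term (r < 1) is what yields finite-time convergence."""
    s = np.sign(e)
    return s * np.abs(e) ** r + s * np.abs(e) ** (1.0 / r)

def znn_scalar(a, da, b, db, x0, gamma=10.0, dt=1e-3, t_end=2.0):
    """Scalar ZNN for the time-varying equation a(t) x(t) = b(t):
    the design formula de/dt = -gamma * psi(e) is solved for dx/dt."""
    x, t = x0, 0.0
    while t < t_end:
        e = a(t) * x - b(t)
        x += dt * (-gamma * sign_bi_power(e) - da(t) * x + db(t)) / a(t)
        t += dt
    return x
```

For a(t) = 2 + sin(t) and b(t) = cos(t), the state tracks the exact time-varying solution x(t) = b(t)/a(t).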

  19. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    PubMed

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.

  20. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    NASA Astrophysics Data System (ADS)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

    The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomena currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that long learning periods are important in order to improve the network's learning capacity, and we discuss this ability in the presence of distinct inhibitory currents.

  1. Learning State Space Dynamics in Recurrent Networks

    NASA Astrophysics Data System (ADS)

    Simard, Patrice Yvon

    Fully recurrent (asymmetrical) networks can be used to learn temporal trajectories. The network is unfolded in time, and backpropagation is used to train the weights. The presence of recurrent connections creates internal states in the system which vary as a function of time. The resulting dynamics can provide interesting additional computing power, but learning is made more difficult by the existence of internal memories. This study first exhibits the properties of recurrent networks in terms of convergence when the internal states of the system are unknown. A new energy functional is provided to change the weights of the units in order to control the stability of the fixed points of the network's dynamics. The power of the resulting algorithm is illustrated with the simulation of a content-addressable memory. Next, the more general case of time trajectories on a recurrent network is studied. An application is proposed in which trajectories are generated to draw letters as a function of an input. In another application of recurrent systems, a neural network models certain temporal properties observed in human callosally sectioned brains. Finally, the proposed algorithm for stabilizing dynamics around fixed points is extended to one for stabilizing dynamics around time trajectories. Its effects are illustrated on a network which generates Lissajous curves.
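
The unfold-and-backpropagate step mentioned above can be made concrete for a one-unit network h_t = tanh(w h_{t-1} + x_t): the forward pass stores the unrolled states, and the backward pass walks the chain in reverse, accumulating each time step's contribution to dL/dw. A minimal hypothetical sketch:

```python
import numpy as np

def bptt_grad(w, xs, target):
    """Unfold h_t = tanh(w*h_{t-1} + x_t) in time, then backpropagate
    through the unrolled chain to get dL/dw for L = (h_T - target)^2 / 2."""
    hs = [0.0]
    for x in xs:                              # forward pass through time
        hs.append(np.tanh(w * hs[-1] + x))
    grad, delta = 0.0, hs[-1] - target        # delta = dL/dh_T
    for t in range(len(xs), 0, -1):           # backward pass through time
        pre = 1.0 - hs[t] ** 2                # tanh'(a_t) = 1 - h_t^2
        grad += delta * pre * hs[t - 1]       # step t's contribution to dL/dw
        delta = delta * pre * w               # propagate error to h_{t-1}
    return grad
```

Checking the result against a central finite difference of the loss is the standard sanity test for such hand-written gradients.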

  2. Ultra-Rapid serial visual presentation reveals dynamics of feedforward and feedback processes in the ventral visual pathway.

    PubMed

    Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios

    2018-06-21

    Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.

  3. Synthesis of recurrent neural networks for dynamical system simulation.

    PubMed

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
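    The core idea above can be sketched minimally in numpy (illustrative only; the toy system, network size, and training hyperparameters are assumptions): fit a one-hidden-layer feedforward net to the vector field of a known dynamical system, then iterate it in closed loop as a recurrent simulator via Euler integration.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):  # true vector field: harmonic oscillator dx/dt = (x1, -x0)
    return np.stack([x[..., 1], -x[..., 0]], axis=-1)

X = rng.uniform(-1, 1, (500, 2))     # sampled states
Y = f(X)                             # vector-field targets

# one-hidden-layer feedforward approximator, trained by batch gradient descent
H = 32
W1 = rng.normal(0, 1.0, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 2)); b2 = np.zeros(2)
for _ in range(3000):
    A = np.tanh(X @ W1 + b1)
    E = (A @ W2 + b2) - Y                         # residual (d MSE / d prediction)
    dA = (E @ W2.T) * (1 - A**2)
    W2 -= 0.05 * A.T @ E / len(X); b2 -= 0.05 * E.mean(0)
    W1 -= 0.05 * X.T @ dA / len(X); b1 -= 0.05 * dA.mean(0)

def net(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# recast as a recurrent, continuous-time system: x <- x + dt * net(x)
dt = 0.01
x_net = x_true = np.array([1.0, 0.0])
for _ in range(100):
    x_net = x_net + dt * net(x_net)
    x_true = x_true + dt * f(x_true)
print(np.abs(x_net - x_true).max())
```

Feeding the network's output back into its input is exactly the "recasting" step: the trained feedforward map becomes the update rule of a recurrent system that shadows the original dynamics.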

  4. Deep Recurrent Neural Networks for Human Activity Recognition

    PubMed Central

    Murad, Abdulmajid

    2017-01-01

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103
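    The key architectural point, that an LSTM recognizer handles variable-length sensor windows with a single set of weights, can be shown with a minimal numpy forward pass. The weights here are random and untrained, and the dimensions (3 accelerometer channels, 5 activity classes) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, C = 3, 16, 5                   # input dim (e.g. accel xyz), hidden, classes
Wx = rng.normal(0, 0.1, (D, 4*H))    # input weights for the 4 LSTM gates
Wh = rng.normal(0, 0.1, (H, 4*H))    # recurrent weights
b  = np.zeros(4*H)
Wy = rng.normal(0, 0.1, (H, C))      # classification head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_classify(seq):
    """Run an LSTM over a (T, D) sequence of sensor samples; T may vary."""
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        z = x @ Wx + h @ Wh + b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g            # cell state carries long-range context
        h = o * np.tanh(c)
    logits = h @ Wy                  # classify from the final hidden state
    e = np.exp(logits - logits.max())
    return e / e.sum()               # softmax class probabilities

p1 = lstm_classify(rng.normal(0, 1, (40, D)))   # 40-sample window
p2 = lstm_classify(rng.normal(0, 1, (73, D)))   # different length, same model
print(p1.shape, np.isclose(p1.sum(), 1.0))
```

Unlike a CNN with fixed-length input windows, the same parameters process both the 40-sample and 73-sample windows.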

  5. Deep Recurrent Neural Networks for Human Activity Recognition.

    PubMed

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  6. A model of metastable dynamics during ongoing and evoked cortical activity

    NASA Astrophysics Data System (ADS)

    La Camera, Giancarlo

    The dynamics of simultaneously recorded spike trains in alert animals often evolve through temporal sequences of metastable states. Little is known about the network mechanisms responsible for the genesis of such sequences, or their potential role in neural coding. In the gustatory cortex of alert rats, state sequences can also be observed in the absence of overt sensory stimulation, and thus form the basis of the so-called `ongoing activity'. This activity is characterized by a partial degree of coordination among neurons, sharp transitions among states, and multi-stability of single neurons' firing rates. A recurrent spiking network model with clustered topology can account for both the spontaneous generation of state sequences and the (network-generated) multi-stability. In the model, each network state results from the activation of specific neural clusters with potentiated intra-cluster connections. A mean field solution of the model shows a large number of stable states, each characterized by a subset of simultaneously active clusters. The firing rate in each cluster during ongoing activity depends on the number of active clusters, so that the same neuron can have different firing rates depending on the state of the network. Because of dense intra-cluster connectivity and recurrent inhibition, in finite networks the stable states lose stability due to finite size effects. Simulations of the dynamics show that the model ensemble activity continuously hops among the different states, reproducing the ongoing dynamics observed in the data. Moreover, when probed with external stimuli, the model correctly predicts the quenching of single neuron multi-stability into bi-stability, the reduction of dimensionality of the population activity, the reduction of trial-to-trial variability, and a potential role for metastable states in the anticipation of expected events.
Altogether, these results provide a unified mechanistic model of ongoing and evoked cortical dynamics. NSF IIS-1161852, NIDCD K25-DC013557, NIDCD R01-DC010389.
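    The clustered topology at the heart of such models can be constructed in a few lines (all connection probabilities and weights below are arbitrary assumptions for illustration): neurons are grouped into clusters, and intra-cluster connections are both denser and potentiated relative to inter-cluster ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, per_cluster = 5, 20
N = n_clusters * per_cluster
labels = np.repeat(np.arange(n_clusters), per_cluster)
same = labels[:, None] == labels[None, :]     # same-cluster mask

p_in, p_out = 0.5, 0.2    # connection probabilities (intra vs inter cluster)
w_in, w_out = 0.4, 0.1    # potentiated intra-cluster synaptic weights

conn = rng.random((N, N)) < np.where(same, p_in, p_out)
W = conn * np.where(same, w_in, w_out)        # weighted connectivity matrix
np.fill_diagonal(W, 0.0)                      # no self-connections

intra = W[same & (W > 0)].mean()
inter = W[~same & (W > 0)].mean()
print(intra > inter)                          # intra-cluster synapses stronger
```

In a spiking simulation, each attractor state would correspond to a subset of these clusters being active, with recurrent inhibition and finite-size fluctuations driving the hopping between states.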

  7. Advanced Aeroservoelastic Testing and Data Analysis (Les Essais Aeroservoelastiques et l’Analyse des Donnees).

    DTIC Science & Technology

    1995-11-01

    [Abstract not cleanly recoverable from the scanned report. Surviving fragments concern neural-network-based AFS concepts, artificial-neural-network methods for estimating the unknown parameters of a postulated state-space model, and a comparison of (i) feedforward and (ii) recurrent neural network approaches [117-119].]

  8. Real-time parallel processing of grammatical structure in the fronto-striatal system: a recurrent network simulation study using reservoir computing.

    PubMed

    Hinaut, Xavier; Dominey, Peter Ford

    2013-01-01

    Sentence processing takes place in real-time. Previous words in the sentence can influence the processing of the current word in the timescale of hundreds of milliseconds. Recent neurophysiological studies in humans suggest that the fronto-striatal system (frontal cortex, and striatum--the major input locus of the basal ganglia) plays a crucial role in this process. The current research provides a possible explanation of how certain aspects of this real-time processing can occur, based on the dynamics of recurrent cortical networks, and plasticity in the cortico-striatal system. We simulate prefrontal area BA47 as a recurrent network that receives on-line input about word categories during sentence processing, with plastic connections between cortex and striatum. We exploit the homology between the cortico-striatal system and reservoir computing, where recurrent frontal cortical networks are the reservoir, and plastic cortico-striatal synapses are the readout. The system is trained on sentence-meaning pairs, where meaning is coded as activation in the striatum corresponding to the roles that different nouns and verbs play in the sentences. The model learns an extended set of grammatical constructions, and demonstrates the ability to generalize to novel constructions. It demonstrates how early in the sentence, a parallel set of predictions are made concerning the meaning, which are then confirmed or updated as the processing of the input sentence proceeds. It demonstrates how on-line responses to words are influenced by previous words in the sentence, and by previous sentences in the discourse, providing new insight into the neurophysiology of the P600 ERP scalp response to grammatical complexity. This demonstrates that a recurrent neural network can decode grammatical structure from sentences in real-time in order to generate a predictive representation of the meaning of the sentences. 
This can provide insight into the underlying mechanisms of human cortico-striatal function in sentence processing.
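    The reservoir/readout division of labor described above maps naturally onto an echo state network sketch (illustrative; the task and all parameters are assumptions): a fixed random recurrent "cortex" supplies rich dynamics, and only the linear "cortico-striatal" readout is trained, here by ridge regression on a next-step prediction task.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, washout = 100, 500, 50
Win = rng.uniform(-0.5, 0.5, N)                  # fixed input weights
W = rng.normal(0, 1.0, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # scale spectral radius to 0.9

u = np.sin(0.1 * np.arange(T + 1))               # input stream
x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W @ x + Win * u[t])              # fixed recurrent dynamics
    states.append(x.copy())
X = np.array(states)[washout:]                   # discard initial transient
y = u[washout + 1:T + 1]                         # target: next input value

# the readout is the only trained part (cf. plastic cortico-striatal synapses)
Wout = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
mse = np.mean((X @ Wout - y)**2)
print(mse)
```

Keeping the recurrent weights fixed while training only the readout is what makes reservoir computing a plausible analogue of fixed cortical dynamics read out by plastic cortico-striatal synapses.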

  9. Real-Time Parallel Processing of Grammatical Structure in the Fronto-Striatal System: A Recurrent Network Simulation Study Using Reservoir Computing

    PubMed Central

    Hinaut, Xavier; Dominey, Peter Ford

    2013-01-01

    Sentence processing takes place in real-time. Previous words in the sentence can influence the processing of the current word in the timescale of hundreds of milliseconds. Recent neurophysiological studies in humans suggest that the fronto-striatal system (frontal cortex, and striatum – the major input locus of the basal ganglia) plays a crucial role in this process. The current research provides a possible explanation of how certain aspects of this real-time processing can occur, based on the dynamics of recurrent cortical networks, and plasticity in the cortico-striatal system. We simulate prefrontal area BA47 as a recurrent network that receives on-line input about word categories during sentence processing, with plastic connections between cortex and striatum. We exploit the homology between the cortico-striatal system and reservoir computing, where recurrent frontal cortical networks are the reservoir, and plastic cortico-striatal synapses are the readout. The system is trained on sentence-meaning pairs, where meaning is coded as activation in the striatum corresponding to the roles that different nouns and verbs play in the sentences. The model learns an extended set of grammatical constructions, and demonstrates the ability to generalize to novel constructions. It demonstrates how early in the sentence, a parallel set of predictions are made concerning the meaning, which are then confirmed or updated as the processing of the input sentence proceeds. It demonstrates how on-line responses to words are influenced by previous words in the sentence, and by previous sentences in the discourse, providing new insight into the neurophysiology of the P600 ERP scalp response to grammatical complexity. This demonstrates that a recurrent neural network can decode grammatical structure from sentences in real-time in order to generate a predictive representation of the meaning of the sentences. 
This can provide insight into the underlying mechanisms of human cortico-striatal function in sentence processing. PMID:23383296

  10. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    PubMed Central

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; del Giudice, Paolo

    2015-01-01

    Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. PMID:26463272

  11. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation

    PubMed Central

    Bayati, Mehdi; Valizadeh, Alireza; Abbassian, Abdolhossein; Cheng, Sen

    2015-01-01

    Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity propagates synchronously through the network. This activity originates from the site of the local excitation and propagates through the network. The synchronous activity propagation persists, even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence. PMID:26089794

  12. Multistability and instability analysis of recurrent neural networks with time-varying delays.

    PubMed

    Zhang, Fanghai; Zeng, Zhigang

    2018-01-01

    This paper provides new theoretical results on the multistability and instability analysis of recurrent neural networks with time-varying delays. It is shown that such n-neuronal recurrent neural networks have exactly [Formula: see text] equilibria, [Formula: see text] of which are locally exponentially stable and the others unstable, where k0 is a nonnegative integer such that k0 ≤ n. By using the combination method of two different divisions, recurrent neural networks can possess more dynamic properties. This method improves and extends existing results in the literature. Finally, a numerical example is provided to show the superiority and effectiveness of the presented results. Copyright © 2017 Elsevier Ltd. All rights reserved.
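    As a toy illustration of coexisting equilibria in a recurrent unit (not the paper's construction, and far from its exact equilibrium count): a single neuron with strong enough self-excitation has two stable fixed points separated by an unstable one at the origin.

```python
import numpy as np

w = 3.0                                  # self-excitation gain (> 1)

def settle(x, steps=2000, dt=0.05):
    """Euler-integrate x' = -x + tanh(w*x) until it reaches an attractor."""
    for _ in range(steps):
        x += dt * (-x + np.tanh(w * x))
    return x

lo, hi = settle(-0.1), settle(0.1)       # start in the two basins of attraction
print(lo < -0.9 and hi > 0.9)            # two distinct stable equilibria
```

The origin (x = 0) is also a fixed point, but an unstable one: any perturbation drives the state into one of the two stable basins, which is the elementary mechanism behind multistability in larger recurrent networks.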

  13. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks

    PubMed Central

    Panda, Priyadarshini; Roy, Kaushik

    2017-01-01

    Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations. PMID:29311774
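    The combination of Hebbian STDP with a non-Hebbian decay term can be sketched minimally as follows. This simplifies the paper's adaptive decay to a constant-rate decay, and all amplitudes, time constants, and spike times are assumptions for illustration.

```python
import numpy as np

tau_plus = tau_minus = 20.0          # STDP time constants (ms)
A_plus, A_minus = 0.01, 0.012        # potentiation / depression amplitudes
decay_rate = 0.001                   # non-Hebbian decay (per ms)

def stdp_dw(dt):
    """Pair-based Hebbian STDP; dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_plus)    # pre before post: potentiate
    return -A_minus * np.exp(dt / tau_minus)      # post before pre: depress

w = 0.5
pre_spikes, post_spikes = [10.0, 50.0, 90.0], [12.0, 48.0, 95.0]
for tp in pre_spikes:                # all-pairs Hebbian contribution
    for tq in post_spikes:
        w += stdp_dw(tq - tp)
w_hebbian = w
w -= decay_rate * w * 100.0          # non-Hebbian decay over the 100 ms window
print(0.0 < w < w_hebbian)
```

The decay term pulls every weight toward zero regardless of spike correlations, which is what counteracts the runaway strengthening of feedback loops that otherwise produces strong attractor states.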

  14. Cortical travelling waves: mechanisms and computational principles

    PubMed Central

    Muller, Lyle; Chavane, Frédéric; Reynolds, John

    2018-01-01

    Multichannel recording technologies have revealed travelling waves of neural activity in multiple sensory, motor and cognitive systems. These waves can be spontaneously generated by recurrent circuits or evoked by external stimuli. They travel along brain networks at multiple scales, transiently modulating spiking and excitability as they pass. Here, we review recent experimental findings that have found evidence for travelling waves at single-area (mesoscopic) and whole-brain (macroscopic) scales. We place these findings in the context of the current theoretical understanding of wave generation and propagation in recurrent networks. During the large low-frequency rhythms of sleep or the relatively desynchronized state of the awake cortex, travelling waves may serve a variety of functions, from long-term memory consolidation to processing of dynamic visual stimuli. We explore new avenues for experimental and computational understanding of the role of spatiotemporal activity patterns in the cortex. PMID:29563572

  15. Lactate rescues neuronal sodium homeostasis during impaired energy metabolism.

    PubMed

    Karus, Claudia; Ziemens, Daniel; Rose, Christine R

    2015-01-01

    Recently, we established that recurrent activity evokes network sodium oscillations in neurons and astrocytes in hippocampal tissue slices. Interestingly, metabolic integrity of astrocytes was essential for the neurons' capacity to maintain low sodium and to recover from sodium loads, indicating an intimate metabolic coupling between the 2 cell types. Here, we studied if lactate can support neuronal sodium homeostasis during impaired energy metabolism by analyzing whether glucose removal, pharmacological inhibition of glycolysis and/or addition of lactate affect cellular sodium regulation. Furthermore, we studied the effect of lactate on sodium regulation during recurrent network activity and upon inhibition of the glial Krebs cycle by sodium-fluoroacetate. Our results indicate that lactate is preferentially used by neurons. They demonstrate that lactate supports neuronal sodium homeostasis and rescues the effects of glial poisoning by sodium-fluoroacetate. Altogether, they are in line with the proposed transfer of lactate from astrocytes to neurons, the so-called astrocyte-neuron-lactate shuttle.

  16. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    PubMed

    Casey, M

    1996-08-15

    Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that has not been explicitly designed, e.g., systems that have evolved or learned their internal structure.

  17. Lactate rescues neuronal sodium homeostasis during impaired energy metabolism

    PubMed Central

    Karus, Claudia; Ziemens, Daniel; Rose, Christine R

    2015-01-01

    Recently, we established that recurrent activity evokes network sodium oscillations in neurons and astrocytes in hippocampal tissue slices. Interestingly, metabolic integrity of astrocytes was essential for the neurons' capacity to maintain low sodium and to recover from sodium loads, indicating an intimate metabolic coupling between the 2 cell types. Here, we studied if lactate can support neuronal sodium homeostasis during impaired energy metabolism by analyzing whether glucose removal, pharmacological inhibition of glycolysis and/or addition of lactate affect cellular sodium regulation. Furthermore, we studied the effect of lactate on sodium regulation during recurrent network activity and upon inhibition of the glial Krebs cycle by sodium-fluoroacetate. Our results indicate that lactate is preferentially used by neurons. They demonstrate that lactate supports neuronal sodium homeostasis and rescues the effects of glial poisoning by sodium-fluoroacetate. Altogether, they are in line with the proposed transfer of lactate from astrocytes to neurons, the so-called astrocyte-neuron-lactate shuttle. PMID:26039160

  18. Characterization of chaotic attractors under noise: A recurrence network perspective

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2016-12-01

    We undertake a detailed numerical investigation to understand how the addition of white and colored noise to a chaotic time series changes the topology and the structure of the underlying attractor reconstructed from the time series. We use the methods and measures of recurrence plots and recurrence networks generated from the time series for this analysis. We explicitly show that the addition of noise obscures the property of recurrence of trajectory points in the phase space, which is the hallmark of every dynamical system. However, the structure of the attractor is found to be robust even up to high noise levels of 50%. An advantage of recurrence network measures over conventional nonlinear measures is that they can be applied to short and non-stationary time series data. Using the results obtained from the above analysis, we go on to analyse the light curves from a dominant black hole system and show that the recurrence network measures are capable of identifying the nature of noise contamination in a time series.
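    A recurrence network of the kind used here can be built from a scalar time series in a few lines. This toy uses the logistic map plus weak noise in place of the astrophysical light curves, and the embedding and recurrence threshold are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# chaotic time series from the logistic map, plus weak measurement noise
x = np.empty(600); x[0] = 0.4
for t in range(599):
    x[t+1] = 4.0 * x[t] * (1.0 - x[t])
x += 0.01 * rng.normal(size=600)

# delay embedding (dimension 2, delay 1) reconstructs the attractor
pts = np.stack([x[:-1], x[1:]], axis=1)

# recurrence network: nodes = state points, edges where points are close
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
eps = 0.1                           # recurrence threshold
A = (d < eps).astype(int)           # adjacency matrix (the recurrence plot)
np.fill_diagonal(A, 0)

degree = A.sum(axis=1)              # node degree, a basic network measure
print(0 < A.mean() < 1)             # recurrence rate strictly between 0 and 1
```

Network measures such as the degree distribution can then be compared across noise levels; adding stronger noise smears the distances `d` and degrades the recurrence structure the adjacency matrix captures.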

  19. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    PubMed Central

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. 
Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II). PMID:23133368
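    The core decorrelation effect can be reproduced in a two-unit linear rate sketch (not the paper's spiking model; all parameters are assumptions): both units share part of their input, and recurrent inhibitory feedback selectively damps the common mode, lowering the output correlation below the feedforward value.

```python
import numpy as np

def output_correlation(g, T=50000, dt=0.1, seed=0):
    """Two leaky rate units with shared input; g = inhibitory feedback gain."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    xs = np.empty((T, 2))
    for t in range(T):
        shared = rng.normal()             # input common to both units
        private = rng.normal(size=2)      # independent inputs
        x += dt * (-x - g * x.sum()) + np.sqrt(dt) * (shared + private)
        xs[t] = x
    return np.corrcoef(xs[2000:].T)[0, 1]

rho_ff = output_correlation(g=0.0)   # feedforward: shared input correlates pair
rho_fb = output_correlation(g=2.0)   # inhibitory feedback suppresses common mode
print(rho_fb < rho_ff)
```

The feedback term `-g * x.sum()` acts only on the summed (common-mode) activity, so it damps population-rate fluctuations while leaving the difference mode untouched, which is precisely the negative feedback loop in the compound dynamics described above.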

  20. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare several of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  1. Convolutional neural networks for prostate cancer recurrence prediction

    NASA Astrophysics Data System (ADS)

    Kumar, Neeraj; Verma, Ruchika; Arora, Ashish; Kumar, Abhay; Gupta, Sanchit; Sethi, Amit; Gann, Peter H.

    2017-03-01

    Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow-up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, a tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs) - one to detect individual nuclei even in crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts the distance transform of the underlying (but unknown) multi-nuclear map from the input H&E image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from image(s) of a patient yields the probability of recurrence for the patient. The proposed approach gave 0.81 AUC for a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-control pairs. If validated further, such an approach might help in choosing between a combination of treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.
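    The final voting step, aggregating patch-level classifier outputs into a per-patient recurrence probability, amounts to simple probability averaging. The patient identifiers and probabilities below are hypothetical, standing in for the second CNN's per-patch outputs.

```python
import numpy as np

# hypothetical per-patch recurrence probabilities from the classifier CNN,
# grouped by patient (patches come from up to four tissue cores per patient)
patch_probs = {
    "patient_a": [0.9, 0.7, 0.85, 0.6],
    "patient_b": [0.2, 0.1, 0.35],
}

def patient_probability(probs):
    """Soft voting: average the patch-level recurrence probabilities."""
    return float(np.mean(probs))

calls = {p: patient_probability(v) for p, v in patch_probs.items()}
print(calls["patient_a"] > 0.5 > calls["patient_b"])
```

Averaging rather than majority voting keeps the output a calibrated-looking score, so the AUC over patients can be computed directly from these per-patient probabilities.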

  2. Object class segmentation of RGB-D video using recurrent convolutional neural networks.

    PubMed

    Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven

    2017-04-01

    Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.

    PubMed

    Gilra, Aditya; Gerstner, Wulfram

    2017-11-27

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
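    The local form of the FOLLOW update, weight change proportional to presynaptic activity times the fed-back error, can be shown in a drastically reduced setting: a single linear readout of fixed random activities. This omits the recurrent spiking machinery entirely and is only meant to exhibit the locality of the rule; all sizes and rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, eta = 50, 3000, 0.02
r = np.tanh(rng.normal(0, 1, (T, N)))    # presynaptic activities over time
w_true = rng.normal(0, 1, N)             # defines the desired output
y = r @ w_true

w = np.zeros(N)
for t in range(T):
    err = y[t] - r[t] @ w                # error, fed back to the neuron
    w += eta * err * r[t]                # local: presynaptic rate x error
final_mse = np.mean((r @ w - y)**2)
print(final_mse)
```

Each weight update uses only its own presynaptic rate and the single error signal projected onto the postsynaptic neuron, which is the sense in which the rule is "local" despite being supervised.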

  4. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network

    PubMed Central

    Gerstner, Wulfram

    2017-01-01

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically. PMID:29173280

  5. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFEs) model the activity of neurons in the brain; they are derived starting from the single-neuron 'integrate-and-fire' model. For numerical studies the neural continuum is spatially discretized, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. The technique combines the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
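    The finite-differences-plus-energy-minimization idea can be sketched on a scalar toy ODE (an illustrative assumption: du/dt = -u stands in for the discretized NFE system, and plain gradient descent on the quadratic energy stands in for the analog Hopfield relaxation):

```python
import numpy as np

# Toy instance of the finite-differences + energy-minimization scheme:
# solve du/dt = -u, u(0) = 1 on [0, 1] by minimizing the energy
#   E(u) = sum_n (u[n+1] - u[n] - h*f(u[n]))^2,   with f(u) = -u.
h, T = 0.05, 1.0
n = int(T / h)
u = np.zeros(n + 1)
u[0] = 1.0                                 # initial condition, kept clamped

for _ in range(10000):
    r = u[1:] - (1 - h) * u[:-1]           # residuals of the Euler discretization
    grad = np.zeros_like(u)
    grad[1:] += 2 * r                      # d r[k-1] / d u[k] = 1
    grad[:-1] -= 2 * (1 - h) * r           # d r[k]   / d u[k] = -(1 - h)
    grad[0] = 0.0                          # do not move the boundary value
    u -= 0.2 * grad                        # relax toward the energy minimum

t = np.linspace(0.0, T, n + 1)             # u now approximates exp(-t)
```

    Driving the energy to its minimum recovers the finite-difference solution; a nonlinear f only changes the residual and its derivative in the gradient.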

  6. Convergent evolution of gene networks by single-gene duplications in higher eukaryotes.

    PubMed

    Amoutzias, Gregory D; Robertson, David L; Oliver, Stephen G; Bornberg-Bauer, Erich

    2004-03-01

    By combining phylogenetic, proteomic and structural information, we have elucidated the evolutionary driving forces for the gene-regulatory interaction networks of basic helix-loop-helix transcription factors. We infer that recurrent events of single-gene duplication and domain rearrangement repeatedly gave rise to distinct networks with almost identical hub-based topologies, and multiple activators and repressors. We thus provide the first empirical evidence for scale-free protein networks emerging through single-gene duplications, the dominant importance of molecular modularity in the bottom-up construction of complex biological entities, and the convergent evolution of networks.

  7. A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-12-01

    To enhance the performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple, small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can run simultaneously in pipelined parallel fashion, which leads to a significant improvement in total computational efficiency. Moreover, since each module of the PSOVRNN is a SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN outperforms the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the nonlinear expansion introduced in each module.

  8. An evolutionary algorithm that constructs recurrent neural networks.

    PubMed

    Angeline, P J; Saunders, G M; Pollack, J B

    1994-01-01

    Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.

  9. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we resolve these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially indicated for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
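    The event-driven idea can be sketched for the noise-free special case of constant positive drift (the paper's algorithm additionally handles the stochastic part, which this toy omits; network size, drifts, and synapses below are illustrative). Threshold-crossing times are computed exactly, and per-neuron version counters invalidate crossings that a later synaptic delivery has made stale:

```python
import heapq

def simulate(drift, theta, syn, t_end):
    """Event-driven simulation of perfect integrate-and-fire neurons with
    constant positive drift and delayed Dirac synaptic interactions.
    syn maps neuron i to a list of (target, weight, delay)."""
    n = len(drift)
    V = [0.0] * n         # membrane potential at the time of last update
    last = [0.0] * n      # time of last update per neuron
    version = [0] * n     # bumps invalidate stale threshold-crossing events
    heap, spikes = [], []

    def schedule_crossing(i, now):
        # exact threshold-crossing time under constant drift
        heapq.heappush(heap, (now + (theta - V[i]) / drift[i], 1, i, version[i]))

    def fire(i, t):
        spikes.append((t, i))
        V[i] = 0.0        # reset to rest
        for j, w, d in syn.get(i, []):
            heapq.heappush(heap, (t + d, 0, j, w))   # delayed delivery event

    for i in range(n):
        schedule_crossing(i, 0.0)
    while heap:
        t, kind, i, payload = heapq.heappop(heap)
        if t > t_end:
            break
        V[i] += drift[i] * (t - last[i])   # exact integration up to the event
        last[i] = t
        if kind == 1:                      # scheduled threshold crossing
            if payload != version[i]:
                continue                   # stale: a delivery changed the state
            fire(i, t)
        else:                              # synaptic delivery
            V[i] += payload
            if V[i] >= theta:
                fire(i, t)
        version[i] += 1
        schedule_crossing(i, t)
    return spikes
```

    Because every interaction is placed in a single time-ordered queue, the causal order of spikes is preserved exactly, with no discretization grid.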

  10. Retinal co-mediator acetylcholine evokes muscarinic inhibition of recurrent excitation in frog tectum column.

    PubMed

    Baginskas, Armantas; Kuras, Antanas

    2016-08-26

    Acetylcholine receptors contribute to the control of neuronal and neuronal network activity from insects to humans. We have investigated the action of acetylcholine receptors in the optic tectum of Rana temporaria (common frog). Our previous studies have demonstrated that acetylcholine, when released into the frog optic tectum as a co-mediator during firing of a single retinal ganglion cell, activates presynaptic nicotinic receptors and causes: a) potentiation of retinotectal synaptic transmission, and b) facilitation of the transition of the tectum column to a higher level of activity. In the present study we have shown that endogenous acetylcholine also activates muscarinic receptors, leading to a delayed inhibition of recurrent excitatory synaptic transmission in the tectum column. The delay of the muscarinic inhibition was estimated at ∼80 ms, with recurrent excitation suppressed roughly twofold. The inhibition of the recurrent excitation drives the transition of the tectum column back to its resting state, giving the inhibition a clear functional role. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Ultrasonographic Diagnosis of Cirrhosis Based on Preprocessing Using Pyramid Recurrent Neural Network

    NASA Astrophysics Data System (ADS)

    Lu, Jianming; Liu, Jiang; Zhao, Xueqin; Yahagi, Takashi

    In this paper, a pyramid recurrent neural network is applied to characterize hepatic parenchymal diseases in ultrasonic B-scan texture. The cirrhotic parenchymal diseases are classified into 4 types according to the size of hypoechoic nodular lesions. The B-mode patterns are wavelet transformed, and the compressed data are then fed into a pyramid neural network to diagnose the type of cirrhotic disease. Compared with 3-layer neural networks, the performance of the proposed pyramid recurrent neural network is improved by utilizing the lower layers effectively. The simulation results show that the proposed system is suitable for the diagnosis of cirrhotic diseases.

  12. Synchronized and mixed outbreaks of coupled recurrent epidemics.

    PubMed

    Zheng, Muhua; Zhao, Ming; Min, Byungjoon; Liu, Zonghua

    2017-05-25

    Epidemic spreading has been studied for a long time, but most work focuses on the growth of a single epidemic outbreak. Recently, we extended the study to recurrent epidemics (Sci. Rep. 5, 16010 (2015)), but only on a single network. Here we report, from real data of coupled regions or cities, that recurrent epidemics in two coupled networks are closely related to each other and can show either a synchronized outbreak pattern, where outbreaks occur simultaneously in both networks, or a mixed outbreak pattern, where an outbreak occurs in one network but not the other. To reveal the underlying mechanism, we present a two-layered network model of coupled recurrent epidemics that reproduces both the synchronized and the mixed outbreak patterns. We show that the synchronized pattern is preferentially triggered in two coupled networks with the same average degree, while the mixed pattern is likely to appear when the average degrees differ. Further, we show that coupling between the two layers tends to suppress the mixed outbreak pattern and enhance the synchronized outbreak pattern. A theoretical analysis based on a microscopic Markov-chain approach explains the numerical results. This finding opens a new window for studying recurrent epidemics in multi-layered networks.
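    The paper's model is network-based and analyzed with microscopic Markov chains, but its qualitative ingredient, two recurrent (SIRS-type) epidemics whose infection pressures couple across layers, can be caricatured with a deterministic two-patch mean-field sketch (all rates and the coupling strength below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def coupled_sirs(beta=0.6, gamma=0.2, delta=0.02, c=0.05, T=2000.0, dt=0.5):
    """Two coupled SIRS populations; c lets infection in one layer exert
    pressure on the other. Returns the infected fractions over time."""
    S = np.array([0.99, 0.999])
    I = np.array([0.01, 0.001])           # the two layers are seeded differently
    R = np.zeros(2)
    traj = []
    for _ in range(int(T / dt)):
        force = beta * (I + c * I[::-1])  # infection pressure incl. other layer
        dS = (-force * S + delta * R) * dt   # waning immunity makes outbreaks recur
        dI = (force * S - gamma * I) * dt
        dR = (gamma * I - delta * R) * dt
        S, I, R = S + dS, I + dI, R + dR
        traj.append(I.copy())
    return np.array(traj)
```

    In this caricature the outbreaks recur as damped waves, and the interlayer coupling pulls the two layers' prevalence curves toward a common endemic level.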

  13. Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks.

    PubMed

    Babaei, Sepideh; Geranmayeh, Amir; Seyyedsalehi, Seyyed Ali

    2010-12-01

    The supervised learning of recurrent neural networks well-suited for prediction of protein secondary structures from the underlying amino acid sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. In addition, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to capture the neighboring effects of the secondary structure types while memorizing the sequential dependencies of amino acids along the protein chain. The combined network raises the percentage accuracy (Q₃) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural networks with recurrent feedback are a powerful technique that has been used successfully for time series forecasting. They maintain fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback in recurrent neural network models, but little attention has been paid to using network error feedback instead. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF), that incorporates higher-order terms, recurrence, and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, the daily Euro/Dollar exchange rate, and the Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. This indicates that using network error feedback during training enhances the network's overall forecasting performance. PMID:27959927

  15. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural networks with recurrent feedback are a powerful technique that has been used successfully for time series forecasting. They maintain fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback in recurrent neural network models, but little attention has been paid to using network error feedback instead. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF), that incorporates higher-order terms, recurrence, and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, the daily Euro/Dollar exchange rate, and the Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. This indicates that using network error feedback during training enhances the network's overall forecasting performance.

  16. Gradient calculations for dynamic recurrent neural networks: a survey.

    PubMed

    Pearlmutter, B A

    1995-01-01

    Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The author discusses fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and non-fixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses the advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, continues with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks, presents some simulations, and finally addresses issues of computational complexity and learning speed.
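    Of the surveyed algorithms, backpropagation through time admits the most compact sketch (sizes, the tanh units, and the squared-error readout below are illustrative assumptions, not from the survey); the returned gradient can be checked against finite differences:

```python
import numpy as np

def rnn_loss_and_grad(W, U, v, xs, ys):
    """Tiny vanilla RNN, h_t = tanh(W h_{t-1} + U x_t), with readout v.h_t.
    Returns the squared-error loss and dLoss/dW via backprop through time."""
    H = W.shape[0]
    hs = [np.zeros(H)]
    for x in xs:                               # forward pass, unrolled in time
        hs.append(np.tanh(W @ hs[-1] + U @ x))
    loss = 0.5 * sum((v @ hs[t + 1] - ys[t]) ** 2 for t in range(len(xs)))
    dW = np.zeros_like(W)
    dh = np.zeros(H)                           # gradient flowing back through h_t
    for t in reversed(range(len(xs))):         # backward pass through time
        err = v @ hs[t + 1] - ys[t]
        dh = dh + err * v                      # local loss term at step t
        da = dh * (1 - hs[t + 1] ** 2)         # through the tanh nonlinearity
        dW += np.outer(da, hs[t])
        dh = W.T @ da                          # pass to the previous time step
    return loss, dW
```

    The same unrolling gives the gradient with respect to U by accumulating np.outer(da, xs[t]) instead.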

  17. Dynamic effective connectivity in cortically embedded systems of recurrently coupled synfire chains.

    PubMed

    Trengove, Chris; Diesmann, Markus; van Leeuwen, Cees

    2016-02-01

    As a candidate mechanism of neural representation, large numbers of synfire chains can efficiently be embedded in a balanced recurrent cortical network model. Here we study a model in which multiple synfire chains of variable strength are randomly coupled together to form a recurrent system. The system can be implemented both as a large-scale network of integrate-and-fire neurons and as a reduced model. The latter has binary-state pools as basic units but is otherwise isomorphic to the large-scale model, and provides an efficient tool for studying its behavior. Both the large-scale system and its reduced counterpart are able to sustain ongoing endogenous activity in the form of synfire waves, the proliferation of which is regulated by negative feedback caused by collateral noise. Within this equilibrium, diverse repertoires of ongoing activity are observed, including meta-stability and multiple steady states. These states arise in concert with an effective connectivity structure (ECS). The ECS admits a family of effective connectivity graphs (ECGs), parametrized by the mean global activity level. Of these graphs, the strongly connected components and their associated out-components account to a large extent for the observed steady states of the system. These results imply a notion of dynamic effective connectivity as governing neural computation with synfire chains, and related forms of cortical circuitry with complex topologies.

  18. Assessing the Role of Inhibition in Stabilizing Neocortical Networks Requires Large-Scale Perturbation of the Inhibitory Population

    PubMed Central

    Mrsic-Flogel, Thomas D.

    2017-01-01

    Neurons within cortical microcircuits are interconnected with recurrent excitatory synaptic connections that are thought to amplify signals (Douglas and Martin, 2007), form selective subnetworks (Ko et al., 2011), and aid feature discrimination. Strong inhibition (Haider et al., 2013) counterbalances excitation, enabling sensory features to be sharpened and represented by sparse codes (Willmore et al., 2011). This balance between excitation and inhibition makes it difficult to assess the strength, or gain, of recurrent excitatory connections within cortical networks, which is key to understanding their operational regime and the computations that they perform. Networks that combine an unstable high-gain excitatory population with stabilizing inhibitory feedback are known as inhibition-stabilized networks (ISNs) (Tsodyks et al., 1997). Theoretical studies using reduced network models predict that ISNs produce paradoxical responses to perturbation, but experimental perturbations failed to find evidence for ISNs in cortex (Atallah et al., 2012). Here, we reexamined this question by investigating how cortical network models consisting of many neurons behave after perturbations and found that results obtained from reduced network models fail to predict responses to perturbations in more realistic networks. Our models predict that a large proportion of the inhibitory network must be perturbed to reliably detect an ISN regime in cortex. We propose that wide-field optogenetic suppression of inhibition under promoters targeting a large fraction of inhibitory neurons may provide a perturbation of sufficient strength to reveal the operating regime of cortex. Our results suggest that detailed computational models of optogenetic perturbations are necessary to interpret the results of experimental paradigms.
SIGNIFICANCE STATEMENT Many useful computational mechanisms proposed for cortex require local excitatory recurrence to be very strong, such that local inhibitory feedback is necessary to avoid epileptiform runaway activity (an “inhibition-stabilized network” or “ISN” regime). However, recent experimental results suggest that this regime may not exist in cortex. We simulated activity perturbations in cortical networks of increasing realism and found that, to detect ISN-like properties in cortex, large proportions of the inhibitory population must be perturbed. Current experimental methods for inhibitory perturbation are unlikely to satisfy this requirement, implying that existing experimental observations are inconclusive about the computational regime of cortex. Our results suggest that new experimental designs targeting a majority of inhibitory neurons may be able to resolve this question. PMID:29074575
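    The paradoxical response that defines an ISN can be reproduced in a minimal two-population rate model in the spirit of Tsodyks et al. (1997) (a reduced sketch, not the paper's large-scale model; the weights and inputs below are illustrative assumptions, chosen so that the excitatory subnetwork alone is unstable, w_EE > 1):

```python
import numpy as np

def steady_rates(g_E, g_I, w_EE=2.0, w_EI=2.5, w_IE=2.0, w_II=0.5,
                 tau=10.0, dt=0.1, steps=20000):
    """Integrate a rectified-linear E-I rate model to its steady state.
    w_EE > 1 makes the excitatory subnetwork unstable on its own, so the
    fixed point is stabilized only by inhibitory feedback (ISN regime)."""
    E = I = 0.0
    for _ in range(steps):
        E += dt / tau * (-E + max(0.0, w_EE * E - w_EI * I + g_E))
        I += dt / tau * (-I + max(0.0, w_IE * E - w_II * I + g_I))
    return E, I

E1, I1 = steady_rates(g_E=4.0, g_I=1.0)   # baseline
E2, I2 = steady_rates(g_E=4.0, g_I=2.0)   # extra drive to the inhibitory population
# paradoxical response: adding input to I lowers the steady-state I rate itself
```

    Solving the fixed-point equations by hand gives I* = (2 g_E - g_I)/3.5 for these weights, so increasing g_I indeed decreases I*, the hallmark of the ISN regime.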

  19. Recurrent Coupling Improves Discrimination of Temporal Spike Patterns

    PubMed Central

    Yuan, Chun-Wei; Leibold, Christian

    2012-01-01

    Despite the ubiquitous presence of recurrent synaptic connections in sensory neuronal systems, their general functional purpose is not well understood. A recent conceptual advance has been achieved by theories of reservoir computing in which recurrent networks have been proposed to generate short-term memory as well as to improve neuronal representation of the sensory input for subsequent computations. Here, we present a numerical study on the distinct effects of inhibitory and excitatory recurrence in a canonical linear classification task. It is found that both types of coupling improve the ability to discriminate temporal spike patterns as compared to a purely feed-forward system, although in different ways. For a large class of inhibitory networks, the network’s performance is optimal as long as a fraction of roughly 50% of neurons per stimulus is active in the resulting population code. Thereby the contribution of inactive neurons to the neural code is found to be even more informative than that of the active neurons, generating an inherent robustness of classification performance against temporal jitter of the input spikes. Excitatory couplings are found to not only produce a short-term memory buffer but also to improve linear separability of the population patterns by evoking more irregular firing as compared to the purely inhibitory case. As the excitatory connectivity becomes more sparse, firing becomes more variable, and pattern separability improves. We argue that the proposed paradigm is particularly well-suited as a conceptual framework for processing of sensory information in the auditory pathway. PMID:22586392

  20. Linking structure and activity in nonlinear spiking networks

    PubMed Central

    Josić, Krešimir; Shea-Brown, Eric

    2017-01-01

    Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks’ spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities—including those of different cell types—combine with connectivity to shape population activity and function. PMID:28644840

  1. Ads' click-through rates predicting based on gated recurrent unit neural networks

    NASA Astrophysics Data System (ADS)

    Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi

    2018-05-01

    To improve the effectiveness of online advertising and increase advertising revenue, a gated recurrent unit (GRU) neural network model is used to predict ads' click-through rates (CTR). Exploiting the gated unit structure and the sequential nature of the data, the model is trained with the BPTT algorithm. Furthermore, by optimizing the step-length algorithm of the gated recurrent network, the model reaches a good optimum faster and in fewer iterations. The experimental results show that the GRU model with the optimized step-length algorithm predicts ads' CTR more accurately, helping advertisers, media, and audiences achieve a win-win, mutually beneficial situation in the three-side game.
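    The gate equations behind a GRU are compact enough to sketch (a minimal NumPy cell, not the paper's trained CTR model; shapes, random weights, and the omission of bias terms are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, params):
    """One step of a gated recurrent unit: z is the update gate, r the reset
    gate, and c the candidate state; z interpolates between old and new state."""
    Wz, Uz, Wr, Ur, Wc, Uc = params
    z = sigmoid(Wz @ x + Uz @ h)          # how much of the state to rewrite
    r = sigmoid(Wr @ x + Ur @ h)          # how much history the candidate sees
    c = np.tanh(Wc @ x + Uc @ (r * h))    # candidate state
    return (1.0 - z) * h + z * c
```

    Because the new state is a convex combination of the old state and a tanh candidate, gradients through long sequences decay more gently than in a vanilla RNN, which is why gated units suit click-sequence data.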

  2. Information processing in echo state networks at the edge of chaos.

    PubMed

    Boedecker, Joschka; Obst, Oliver; Lizier, Joseph T; Mayer, N Michael; Asada, Minoru

    2012-09-01

    We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamical regime, the so-called edge of chaos. The reasons for this maximized performance, however, are not completely understood. We adopt an information-theoretic framework and, for the first time, directly quantify information processing between elements of these networks as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized ways of improving performance in recurrent neural networks, driven by input data. Moreover, the networks we study share important features with biological systems, such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which has also been shown to operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on why biological systems are tuned into this specific regime.
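    In practice an echo state network is steered toward the edge of chaos simply by rescaling the reservoir's spectral radius to just below 1; the sketch below trains only the linear readout (the sine next-step task, sizes, and the ridge regularizer are illustrative assumptions, not the paper's benchmarks):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius just below 1
w_in = rng.normal(size=N)

def reservoir_states(u):
    h, states = np.zeros(N), []
    for x in u:
        h = np.tanh(W @ h + w_in * x)   # input drives the fixed random reservoir
        states.append(h.copy())
    return np.array(states)

u = np.sin(0.2 * np.arange(600))
X = reservoir_states(u)
washout = 100                           # discard the initial transient
A, y = X[washout:-1], u[washout + 1:]
# only the linear readout is trained (ridge regression on reservoir states)
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)
rmse = np.sqrt(np.mean((A @ w_out - y) ** 2))
```

    Pushing the rescaling factor past 1 drives the same reservoir into the chaotic regime, which is how studies like this one sweep across the transition.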

  3. A recurrence network approach for the analysis of skin blood flow dynamics in response to loading pressure.

    PubMed

    Liao, Fuyuan; Jan, Yih-Kuen

    2012-06-01

    This paper presents a recurrence network approach for the analysis of skin blood flow dynamics in response to loading pressure. Recurrence is a fundamental property of many dynamical systems and can be explored in phase spaces constructed from observational time series. Recurrence plots (RPs), a visualization tool of recurrence analysis, have proved highly effective at detecting transitions in the dynamics of a system. However, delay embedding can produce spurious structures in RPs. Network-based concepts have recently been applied to the analysis of nonlinear time series. We demonstrate that time series with different types of dynamics exhibit distinct global clustering coefficients and distributions of local clustering coefficients, and that the global clustering coefficient is robust to the embedding parameters. We applied the approach to study the response of skin blood flow oscillations (BFO) to loading pressure. The results showed that the global clustering coefficients of BFO significantly decreased in response to loading pressure (p<0.01). Moreover, surrogate tests indicated that this decrease was associated with a loss of nonlinearity of BFO. Our results suggest that the recurrence network approach can practically quantify the nonlinear dynamics of BFO.
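    The statistic at the heart of the method, the global clustering coefficient of a recurrence network, can be computed directly from an embedded time series (a minimal sketch; the embedding parameters and the fixed 10% recurrence rate are illustrative assumptions):

```python
import numpy as np

def rn_global_clustering(x, dim=3, tau=2, rec_rate=0.1):
    """Global clustering coefficient of the eps-recurrence network of x."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])  # delay embedding
    D = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    eps = np.quantile(D[np.triu_indices(n, 1)], rec_rate)   # fixes the recurrence rate
    A = (D < eps).astype(float)
    np.fill_diagonal(A, 0.0)              # recurrence network has no self-loops
    k = A.sum(axis=1)
    closed = np.diag(A @ A @ A)           # closed length-3 walks through each node
    denom = k * (k - 1)                   # number of possible connected triples
    local = np.divide(closed, denom, out=np.zeros_like(closed), where=denom > 0)
    return local.mean()
```

    Fixing the recurrence rate rather than eps itself is what makes the coefficient comparable across signals, the robustness property the abstract emphasizes.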

  4. The Brain as an Efficient and Robust Adaptive Learner.

    PubMed

    Denève, Sophie; Alemi, Alireza; Bourdoukan, Ralph

    2017-06-07

    Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined based only on quantities locally accessible at the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, provided they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet this variability in single neurons may hide an extremely efficient and robust computation at the population level. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Confused or not Confused?: Disentangling Brain Activity from EEG Data Using Bidirectional LSTM Recurrent Neural Networks.

    PubMed

    Ni, Zhaoheng; Yuksel, Ahmet Cem; Ni, Xiuyan; Mandel, Michael I; Xie, Lei

    2017-08-01

    Brain fog, also known as confusion, is one of the main causes of low performance in learning or in any daily task that requires thinking. Detecting confusion in a human's mind in real time is a challenging and important task that can be applied to online education, driver fatigue detection, and so on. In this paper, we apply Bidirectional LSTM Recurrent Neural Networks to classify students' confusion while watching online course videos from EEG data. The results show that the Bidirectional LSTM model achieves state-of-the-art performance compared with other machine learning approaches and shows strong robustness as evaluated by cross-validation. We can predict whether or not a student is confused with an accuracy of 73.3%. Furthermore, we find that the most important feature for detecting confusion is the gamma-1 band of the EEG signal. Our results suggest that machine learning is a potentially powerful tool to model and understand brain activity.

  6. Generating Focused Molecule Libraries for Drug Discovery with Recurrent Neural Networks

    PubMed Central

    2017-01-01

    In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active toward a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria), it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery. PMID:29392184

  7. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik V.; Marwan, Norbert; Dijkstra, Henk A.; Kurths, Jürgen

    2015-11-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks, such as climate networks in climatology or functional brain networks in neuroscience, representing the structure of statistical interrelationships in large data sets of time series, and, subsequently, for the investigation of this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.
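    The recurrence-network idea that pyunicorn implements can be illustrated in plain numpy (a generic sketch of the concept, not pyunicorn's own API): time points become nodes, and two points are linked when their states recur within a threshold eps.

```python
import numpy as np

def recurrence_network(x, eps):
    """Adjacency matrix of a recurrence network: nodes are time points, and
    two points are linked when their states lie closer than eps
    (self-links on the diagonal are removed)."""
    x = np.asarray(x, dtype=float)
    D = np.abs(x[:, None] - x[None, :])   # pairwise distances (scalar series)
    A = (D < eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

# Noisy-free periodic series: recurrences concentrate at multiples of the period.
t = np.arange(200)
x = np.sin(2 * np.pi * t / 25)
A = recurrence_network(x, eps=0.1)
```

    pyunicorn builds on such an adjacency matrix with the full toolbox of complex network measures; the sketch uses a scalar series, whereas embedded state vectors are typical in practice.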

  8. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced-order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.

  9. A Markov model for the temporal dynamics of balanced random networks of finite size

    PubMed Central

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, the strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a previously unreported property of balanced random networks with fixed in-degree, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type.
We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644
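    A minimal version of the two-state population picture can be simulated directly. The transition rates below are invented for illustration; in the paper they are identified from simulations of leaky integrate-and-fire networks and depend on excitatory and inhibitory input.

```python
import numpy as np

def simulate_two_state(n=1000, steps=500, dt=0.1, seed=0):
    """Population of two-state Markov neurons (active <-> refractory).
    Spikes correspond to active->refractory transitions. Here the
    activation rate grows with the active fraction (a crude stand-in for
    recurrent, input-dependent rates fit from LIF simulations)."""
    rng = np.random.default_rng(seed)
    active = rng.random(n) < 0.1          # initial active fraction ~10%
    trace = np.empty(steps)
    for t in range(steps):
        a = active.mean()
        rate_on = 0.5 + 2.0 * a           # refractory -> active
        rate_off = 1.0                    # active -> refractory (spike)
        p_on = 1 - np.exp(-rate_on * dt)
        p_off = 1 - np.exp(-rate_off * dt)
        u = rng.random(n)
        turn_on = (~active) & (u < p_on)
        turn_off = active & (u < p_off)
        active = active ^ turn_on ^ turn_off   # disjoint toggles
        trace[t] = active.mean()
    return trace

trace = simulate_two_state()
```

    With a rate that grows with the active fraction, the population fluctuates around a fixed point with state-dependent noise, which is the qualitative behavior the Markov model is built to capture.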

  10. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks

    PubMed Central

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-01-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs.
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608
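    The three-threshold rule itself is compact enough to state in code. This sketch follows the verbal description above; the threshold values, learning rate, and binary input convention are illustrative choices rather than the paper's parameters.

```python
import numpy as np

def three_threshold_update(w, x, h, th_low, th_mid, th_high, lr=0.1):
    """One plasticity step of the three-threshold rule (illustrative
    parameters). Only synapses with active inputs (x == 1) are modified,
    and only when the postsynaptic local field h lies strictly between the
    outer thresholds: potentiation if h > th_mid, depression if h < th_mid.
    Outside [th_low, th_high] no plasticity occurs."""
    w = w.copy()
    if th_low < h < th_high:
        sign = 1.0 if h > th_mid else -1.0
        w += lr * sign * x
    return w

w0 = np.zeros(4)
x = np.array([1.0, 0.0, 1.0, 0.0])   # two active inputs
# Field between the outer thresholds and above the intermediate one:
# active synapses are potentiated.
w_pot = three_threshold_update(w0, x, h=0.8, th_low=0.0, th_mid=0.5, th_high=1.0)
```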

  11. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    PubMed

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-08-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs.
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.

  12. Earthquake correlations and networks: A comparative study

    NASA Astrophysics Data System (ADS)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-04-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained for the recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.

  13. Predicting local field potentials with recurrent neural networks.

    PubMed

    Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter

    2016-08-01

    We present a Recurrent Neural Network using LSTM (Long Short-Term Memory) that is capable of modeling and predicting Local Field Potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.
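    A single step of the standard LSTM cell underlying such a predictor can be written out in numpy. This shows the textbook gate equations only, with random weights standing in for a trained model; a linear readout (not shown) would map the hidden state to the predicted LFP channels.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias, stacked in gate order
    [input, forget, output, candidate]."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
D, H = 8, 16                                  # input channels, hidden units
W = rng.normal(scale=0.1, size=(4 * H, D))    # stand-ins for trained weights
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

    Iterating this step over a window of past LFP samples yields the hidden state from which the next 1, 10, or 100 ms of signal would be predicted.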

  14. Spatially dynamic recurrent information flow across long-range dorsal motor network encodes selective motor goals.

    PubMed

    Yoo, Peter E; Hagan, Maureen A; John, Sam E; Opie, Nicholas L; Ordidge, Roger J; O'Brien, Terence J; Oxley, Thomas J; Moffat, Bradford A; Wong, Yan T

    2018-06-01

    Performing voluntary movements involves many regions of the brain, but it is unknown how they work together to plan and execute specific movements. We recorded high-resolution ultra-high-field blood-oxygen-level-dependent signal during a cued ankle-dorsiflexion task. The spatiotemporal dynamics and the patterns of task-relevant information flow across the dorsal motor network were investigated. We show that task-relevant information appears and decays earlier in the higher order areas of the dorsal motor network than in the primary motor cortex. Furthermore, the results show that task-relevant information is encoded in general terms initially, and that selective goals are subsequently encoded in specific subregions across the network. Importantly, the patterns of recurrent information flow across the network vary across subregions depending on the goal. Recurrent information flow was observed across all higher order areas of the dorsal motor network in the subregions encoding the current goal. In contrast, in the subregions encoding the opposing goal, only top-down information flow from the supplementary motor cortex to the frontoparietal regions was observed, with weakened recurrent information flow between the frontoparietal regions and bottom-up information flow from the frontoparietal regions to the supplementary motor cortex. We conclude that selective motor goal encoding and execution rely on goal-dependent differences in subregional recurrent information flow patterns across the long-range dorsal motor network areas that exhibit graded functional specialization. © 2018 Wiley Periodicals, Inc.

  15. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network

    PubMed Central

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission of neural networks. To examine this issue, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost, and the total amount of information transmission. We observed that there exists an optimal E/I synaptic current ratio in such a network at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain amount of energy cost associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost.
    The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding. PMID:29773979

  16. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network.

    PubMed

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission of neural networks. To examine this issue, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost, and the total amount of information transmission. We observed that there exists an optimal E/I synaptic current ratio in such a network at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain amount of energy cost associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost.
    The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding.

  17. Structural network heterogeneities and network dynamics: a possible dynamical mechanism for hippocampal memory reactivation.

    NASA Astrophysics Data System (ADS)

    Jablonski, Piotr; Poe, Gina; Zochowski, Michal

    2007-03-01

    The hippocampus has the capacity for reactivating recently acquired memories and it is hypothesized that one of the functions of sleep reactivation is the facilitation of consolidation of novel memory traces. The dynamic and network processes underlying such a reactivation remain, however, unknown. We show that such a reactivation, characterized by local, self-sustained activity of a network region, may be an inherent property of a recurrent excitatory-inhibitory network with a heterogeneous structure. The entry into the reactivation phase is mediated through a physiologically feasible regulation of global excitability and external input sources, while the reactivated component of the network is formed through induced network heterogeneities during learning. We show that structural changes needed for robust reactivation of a given network region are well within known physiological parameters.

  18. Structural network heterogeneities and network dynamics: A possible dynamical mechanism for hippocampal memory reactivation

    NASA Astrophysics Data System (ADS)

    Jablonski, Piotr; Poe, Gina R.; Zochowski, Michal

    2007-01-01

    The hippocampus has the capacity for reactivating recently acquired memories and it is hypothesized that one of the functions of sleep reactivation is the facilitation of consolidation of novel memory traces. The dynamic and network processes underlying such a reactivation remain, however, unknown. We show that such a reactivation, characterized by local, self-sustained activity of a network region, may be an inherent property of a recurrent excitatory-inhibitory network with a heterogeneous structure. The entry into the reactivation phase is mediated through a physiologically feasible regulation of global excitability and external input sources, while the reactivated component of the network is formed through induced network heterogeneities during learning. We show that structural changes needed for robust reactivation of a given network region are well within known physiological parameters.

  19. Identification of potential drug targets based on a computational biology algorithm for venous thromboembolism.

    PubMed

    Xie, Ruiqiang; Li, Lei; Chen, Lina; Li, Wan; Chen, Binbin; Jiang, Jing; Huang, Hao; Li, Yiran; He, Yuehan; Lv, Junjie; He, Weiming

    2017-02-01

    Venous thromboembolism (VTE) is a common, fatal and frequently recurrent disease. Changes in the activity of different coagulation factors serve as a pathophysiological basis for the recurrent risk of VTE. Systems biology approaches provide a better understanding of the pathological mechanisms responsible for recurrent VTE. In this study, a novel computational method was presented to identify the recurrent risk modules (RRMs) based on the integration of expression profiles and the human signaling network, which holds promise for achieving new and deeper insights into the mechanisms responsible for VTE. The results revealed that the RRMs had good classification performance in discriminating patients with recurrent VTE. The functional annotation analysis demonstrated that the RRMs played a crucial role in the pathogenesis of VTE. Furthermore, a variety of approved drug targets in the RRM M5 were related to VTE. Thus, M5 may be applied to select potential drug targets for combination therapy and the extended treatment of VTE.

  20. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    PubMed Central

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. 
PMID:26496502

  1. Dopamine-Modulated Recurrent Corticoefferent Feedback in Primary Sensory Cortex Promotes Detection of Behaviorally Relevant Stimuli

    PubMed Central

    Handschuh, Juliane

    2014-01-01

    Dopaminergic neurotransmission in primary auditory cortex (AI) has been shown to be involved in learning and memory functions. Moreover, dopaminergic projections and D1/D5 receptor distributions display a layer-dependent organization, suggesting specific functions in the cortical circuitry. However, the circuit effects of dopaminergic neurotransmission in sensory cortex and their possible roles in perception, learning, and memory are largely unknown. Here, we investigated layer-specific circuit effects of dopaminergic neuromodulation using current source density (CSD) analysis in AI of Mongolian gerbils. Pharmacological stimulation of D1/D5 receptors increased auditory-evoked synaptic currents in infragranular layers, prolonging local thalamocortical input via positive feedback between infragranular output and granular input. Subsequently, dopamine promoted sustained cortical activation by prolonged recruitment of long-range corticocortical networks. A detailed circuit analysis combining layer-specific intracortical microstimulation (ICMS), CSD analysis, and pharmacological cortical silencing revealed that cross-laminar feedback enhanced by dopamine relied on a positive, fast-acting recurrent corticoefferent loop, most likely relayed via local thalamic circuits. Behavioral signal detection analysis further showed that activation of corticoefferent output by infragranular ICMS, which mimicked auditory activation under dopaminergic influence, was most effective in eliciting a behaviorally detectable signal. Our results show that D1/D5-mediated dopaminergic modulation in sensory cortex regulates positive recurrent corticoefferent feedback, which enhances states of high, persistent activity in sensory cortex evoked by behaviorally relevant stimuli. In boosting horizontal network interactions, this potentially promotes the readout of task-related information from cortical synapses and improves behavioral stimulus detection. PMID:24453315

  2. Robust passivity analysis for discrete-time recurrent neural networks with mixed delays

    NASA Astrophysics Data System (ADS)

    Huang, Chuan-Kuei; Shu, Yu-Jeng; Chang, Koan-Yuh; Shou, Ho-Nien; Lu, Chien-Yu

    2015-02-01

    This article considers the robust passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with mixed time-delays and uncertain parameters. The mixed time-delays consist of both discrete time-varying and distributed time-delays in a given range, and the uncertain parameters are norm-bounded. The activation functions are assumed to be globally Lipschitz continuous. Based on a new bounding technique and an appropriate type of Lyapunov functional, a sufficient condition is investigated to guarantee the existence of the desired robust passivity condition for the DRNNs, which can be derived in terms of a family of linear matrix inequalities (LMIs). Some free-weighting matrices are introduced to reduce the conservatism of the criterion by using the bounding technique. A numerical example is given to illustrate the effectiveness and applicability of the proposed method.

  3. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming problem is pseudoconvex on the feasible region. This leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  4. Mechanisms of Seizure Propagation in 2-Dimensional Centre-Surround Recurrent Networks

    PubMed Central

    Hall, David; Kuhlmann, Levin

    2013-01-01

    Understanding how seizures spread throughout the brain is an important problem in the treatment of epilepsy, especially for implantable devices that aim to avert focal seizures before they spread to, and overwhelm, the rest of the brain. This paper presents an analysis of the speed of propagation in a computational model of seizure-like activity in a 2-dimensional recurrent network of integrate-and-fire neurons containing both excitatory and inhibitory populations and having a difference of Gaussians connectivity structure, an approximation to that observed in cerebral cortex. In the same computational model network, alternative mechanisms are explored in order to simulate the range of seizure-like activity propagation speeds (0.1–100 mm/s) observed in two animal-slice-based models of epilepsy: (1) low extracellular Mg2+, which creates excess excitation, and (2) introduction of gamma-aminobutyric acid (GABA) antagonists, which reduce inhibition. Moreover, two alternative connection topologies are considered: excitation broader than inhibition, and inhibition broader than excitation. It was found that the empirically observed range of propagation velocities can be obtained for both connection topologies. For the case of the GABA antagonist model simulation, consistent with other studies, it was found that there is an effective threshold in the degree of inhibition below which waves begin to propagate. For the case of the low extracellular Mg2+ model simulation, it was found that activity-dependent reductions in inhibition provide a potential explanation for the emergence of slowly propagating waves. This was simulated as a depression of inhibitory synapses, but it may also be achieved by other mechanisms. This work provides a localised network understanding of the propagation of seizures in 2-dimensional centre-surround networks that can be tested empirically. PMID:23967201
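    The difference-of-Gaussians ("centre-surround") connectivity structure used in the model can be sketched as a radial weight profile; the amplitudes and widths below are arbitrary illustrative values for the topology in which excitation is narrower than inhibition.

```python
import numpy as np

def dog_weight(r, a_e=1.0, s_e=1.0, a_i=0.5, s_i=3.0):
    """Difference-of-Gaussians connection weight at distance r:
    narrow excitation minus broader inhibition (illustrative parameters)."""
    return (a_e * np.exp(-r**2 / (2 * s_e**2))
            - a_i * np.exp(-r**2 / (2 * s_i**2)))

r = np.linspace(0.0, 10.0, 101)
w_profile = dog_weight(r)   # net excitation at the centre, inhibitory surround
```

    Scaling a_i down (mimicking GABA antagonists) or swapping the widths (inhibition narrower than excitation) changes where the profile crosses zero and hence the net drive a propagating front receives.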

  5. Feature to prototype transition in neural networks

    NASA Astrophysics Data System (ADS)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology.
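The feature-to-prototype shift can be seen directly by scoring a query against stored patterns with rectified polynomials of increasing degree: as the exponent grows, a single memory increasingly dominates the response. The patterns and dimensions below are arbitrary, a toy illustration rather than the paper's model.

```python
import numpy as np

def rect_poly(x, n):
    """Rectified polynomial activation F(x) = max(x, 0)**n; n = 1
    recovers the ReLU, larger n sharpens toward prototype behaviour."""
    return np.maximum(x, 0.0) ** n

def memory_scores(memories, query, n):
    """Per-memory contribution F(<xi, query>) for stored patterns."""
    return rect_poly(memories @ query, n)

rng = np.random.default_rng(0)
memories = rng.choice([-1.0, 1.0], size=(5, 16))   # stored +/-1 patterns
query = memories[2] + 0.1 * rng.standard_normal(16)  # noisy cue

for n in (1, 3, 10):
    s = memory_scores(memories, query, n)
    # share of the winning memory in the total response grows with n
    print(n, s.argmax(), s.max() / s.sum())
```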

  6. Statistical downscaling of precipitation using long short-term memory recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra

    2017-11-01

    Hydrological impacts of global climate change on regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation, temperature, etc. In this study, we propose a new statistical downscaling model based on Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. The previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets—one on precipitation in Mahanadi basin in India and the second on precipitation in Campbell River basin in Canada. Our autoencoder coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both the datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.
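The building block of such a downscaling model is the LSTM cell itself. The sketch below is one standard gate layout run on random weights, purely to show the recurrence; it is not the authors' trained, autoencoder-coupled model, and the dimensions are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step: x input, (h, c) hidden/cell state, W stacked gate
    weights of shape (4*H, D+H), b stacked biases of shape (4*H,)."""
    H = h.shape[0]
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = (sigmoid(z[:H]), sigmoid(z[H:2*H]),
                  sigmoid(z[2*H:3*H]), np.tanh(z[3*H:]))
    c_new = f * c + i * g          # forget old memory, write new
    h_new = o * np.tanh(c_new)     # expose gated memory
    return h_new, c_new

# Run a toy sequence of large-scale predictor vectors through the cell;
# a linear readout of the final hidden state would give the downscaled
# rainfall estimate (weights here are random, purely illustrative).
rng = np.random.default_rng(1)
D, H, T = 6, 8, 12
W = 0.1 * rng.standard_normal((4 * H, D + H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.standard_normal(D), h, c, W, b)
print(h.shape)
```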

  7. Recursive Bayesian recurrent neural networks for time-series modeling.

    PubMed

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
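The flavour of a recursive second-order update can be sketched for a scalar-output model: each new sample updates both the weights and their covariance through a damped gain. This is a generic recursive Levenberg-Marquardt step with hypothetical noise and damping settings, not the paper's exact Bayesian derivation with hyperparameter adaptation.

```python
import numpy as np

def rlm_step(theta, P, x, y, predict, jac, lam=1.0, mu=1e-3):
    """One recursive Levenberg-Marquardt update for a scalar-output
    model: theta parameters, P parameter covariance, (x, y) new sample;
    lam plays the role of observation-noise variance, mu is damping."""
    J = jac(theta, x)                      # gradient of the prediction
    e = y - predict(theta, x)              # innovation
    S = lam + mu + J @ P @ J               # damped innovation variance
    K = (P @ J) / S                        # gain
    theta = theta + K * e
    P = P - np.outer(K, J @ P)             # covariance (Hessian inverse) update
    return theta, P

# Fit a linear model y = w.x sequentially (the Jacobian is just x).
rng = np.random.default_rng(2)
w_true = np.array([1.5, -0.7])
theta, P = np.zeros(2), 10.0 * np.eye(2)
for _ in range(200):
    x = rng.standard_normal(2)
    y = w_true @ x + 0.01 * rng.standard_normal()
    theta, P = rlm_step(theta, P, x, y,
                        predict=lambda th, x: th @ x,
                        jac=lambda th, x: x)
print(np.round(theta, 2))
```

For an RNN, `predict` and `jac` would be the network output and its gradient with respect to the weights, obtained e.g. by real-time recurrent learning.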

  8. Modeling Belt-Servomechanism by Chebyshev Functional Recurrent Neuro-Fuzzy Network

    NASA Astrophysics Data System (ADS)

    Huang, Yuan-Ruey; Kang, Yuan; Chu, Ming-Hui; Chang, Yeon-Pun

    A novel Chebyshev functional recurrent neuro-fuzzy (CFRNF) network is developed from a combination of the Takagi-Sugeno-Kang (TSK) fuzzy model and the Chebyshev recurrent neural network (CRNN). The CFRNF network can emulate the nonlinear dynamics of a servomechanism system. The system nonlinearity is addressed by enhancing the input dimensions of the consequent parts of the fuzzy rules through functional expansion with Chebyshev polynomials. The backpropagation algorithm is used to adjust the parameters of the antecedent membership functions as well as those of the consequent functions. To verify the performance of the proposed CFRNF, an experiment on a belt servomechanism is presented in this paper. Identification methods based on the adaptive neural fuzzy inference system (ANFIS) and the recurrent neural network (RNN) are also studied for modeling the belt servomechanism. The analysis and comparison results indicate that the CFRNF makes identification of complex nonlinear dynamic systems easier. The identification results for the belt servomechanism verify that the accuracy and convergence of the CFRNF are superior to those of ANFIS and RNN.
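The functional-expansion step is just the Chebyshev three-term recurrence applied to each (rescaled) input component, enlarging the input vector fed to the consequent parts. A minimal sketch, with the expansion order as an illustrative choice:

```python
import numpy as np

def chebyshev_expand(x, order):
    """Expand each input component with Chebyshev polynomials
    T_0..T_order via the recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x).
    Inputs are assumed rescaled to [-1, 1]; order >= 1."""
    terms = [np.ones_like(x), x]
    for _ in range(order - 1):
        terms.append(2 * x * terms[-1] - terms[-2])
    return np.concatenate(terms[: order + 1])

x = np.array([0.5])
print(chebyshev_expand(x, 3))  # T0..T3 at 0.5 -> [1, 0.5, -0.5, -1]
```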

  9. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.

    PubMed

    Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.

  10. Changes in matrix metalloproteinase network in a spontaneous autoimmune uveitis model.

    PubMed

    Hofmaier, Florian; Hauck, Stefanie M; Amann, Barbara; Degroote, Roxane L; Deeg, Cornelia A

    2011-04-08

    Autoimmune uveitis is a sight-threatening disease in which autoreactive T cells cross the blood-retinal barrier. Molecular mechanisms contributing to the loss of eye immune privilege in this autoimmune disease are not well understood. In this study, the authors investigated the changes in the matrix metalloproteinase network in spontaneous uveitis. Matrix metalloproteinase (MMP) MMP2, MMP9, and MMP14 expression and tissue inhibitor of metalloproteinase (TIMP)-2 and lipocalin 2 (LCN2) expression were analyzed using Western blot quantification. Enzyme activities were examined with zymography. Expression patterns of network candidates were revealed with immunohistochemistry, comparing physiological appearance and changes in a spontaneous recurrent uveitis model. TIMP2 protein expression was found to be decreased in both the vitreous and the retina of a spontaneous model for autoimmune uveitis (equine recurrent uveitis [ERU]), and TIMP2 activity was significantly reduced in ERU vitreous. Functionally associated MMPs such as MMP2, MMP14, and MMP9 were found to show altered or shifted expression and activity. Although MMP2 decreased in ERU vitreous, MMP9 expression and activity were found to be increased. These changes were reflected by profound changes within uveitic target tissue, where TIMP2, MMP9, and MMP14 decreased in expression, whereas MMP2 displayed a shifted expression pattern. LCN2, a potential stabilizer of MMP9, was found prominently expressed in equine healthy retina and displayed notable changes in expression patterns accompanied by significant upregulation in autoimmune conditions. Invading cells expressed MMP9 and LCN2. This study implicates a dysregulation or a change in functional protein-protein interactions in this TIMP2-associated protein network, together with altered expression of functionally related MMPs.

  11. Application of dynamic recurrent neural networks in nonlinear system identification

    NASA Astrophysics Data System (ADS)

    Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang

    2006-11-01

    An adaptive identification method based on a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. Building on the idea that internal-state feedback in a dynamic network describes the nonlinear kinetic characteristics of a system more directly, the method derives the recursive prediction error (RPE) learning algorithm for the SRNN and improves it by simplifying the topological structure of the recursion layer, which carries no weight values. The simulation results indicate that this kind of neural network can be used in real-time control, owing to its fewer weights, simpler learning algorithm, faster identification, and higher model precision. It avoids the intricate training and slow convergence caused by the complicated topological structure of the usual dynamic recurrent neural network.

  12. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    PubMed Central

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, forecasting financial market dynamics has been a focus of economic research. To predict stock market price indices, we developed an architecture that combines Elman recurrent neural networks with a stochastic time effective function. Analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) methods, and comparing it with models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed network performs best among these networks in financial time series forecasting. Further, the predictive performance of the established model is tested on the SSE, TWSE, KOSPI, and Nikkei225 indices, and the corresponding statistical comparisons of these market indices are exhibited. The experimental results show that this approach predicts the values of the stock market indices well. PMID:27293423

  13. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, forecasting financial market dynamics has been a focus of economic research. To predict stock market price indices, we developed an architecture that combines Elman recurrent neural networks with a stochastic time effective function. Analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) methods, and comparing it with models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed network performs best among these networks in financial time series forecasting. Further, the predictive performance of the established model is tested on the SSE, TWSE, KOSPI, and Nikkei225 indices, and the corresponding statistical comparisons of these market indices are exhibited. The experimental results show that this approach predicts the values of the stock market indices well.
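The defining feature of the Elman network in the two records above is the context layer: the previous hidden state feeds back into the next step. A minimal forward pass in NumPy; the stochastic time effective function is omitted, and all weights here are random, purely illustrative.

```python
import numpy as np

def elman_forward(xs, Wxh, Whh, Why, bh, by):
    """Forward pass of an Elman network: the hidden state feeds back
    through a context layer at the next step. Returns one prediction
    per input step."""
    h = np.zeros(Whh.shape[0])
    out = []
    for x in xs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # context = previous h
        out.append(Why @ h + by)
    return np.array(out)

rng = np.random.default_rng(3)
D, H = 1, 4
Wxh = 0.5 * rng.standard_normal((H, D))
Whh = 0.5 * rng.standard_normal((H, H))
Why = 0.5 * rng.standard_normal((1, H))
xs = rng.standard_normal((10, D))   # stand-in for a price-index series
preds = elman_forward(xs, Wxh, Whh, Why, np.zeros(H), np.zeros(1))
print(preds.shape)
```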

  14. A recurrent self-organizing neural fuzzy inference network.

    PubMed

    Juang, C F; Lin, C T

    1999-01-01

    A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed in this paper. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding some feedback connections representing the memory elements to a feedforward neural fuzzy network. Each weight as well as node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes (i.e., no membership functions and fuzzy rules) initially in the RSONFIN. They are created on-line via concurrent structure identification (the construction of dynamic fuzzy if-then rules) and parameter identification (the tuning of the free parameters of membership functions). The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems and 2) no predetermination, like the number of hidden nodes, must be given, since the RSONFIN can find its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are done and performance comparisons with some existing recurrent networks are also made. Efficiency of the RSONFIN is verified from these results.

  15. Strong Recurrent Networks Compute the Orientation-Tuning of Surround Modulation in Primate V1

    PubMed Central

    Shushruth, S.; Mangapathy, Pradeep; Ichida, Jennifer M.; Bressloff, Paul C.; Schwabe, Lars; Angelucci, Alessandra

    2012-01-01

    In macaque primary visual cortex (V1) neuronal responses to stimuli inside the receptive field (RF) are modulated by stimuli in the RF surround. This modulation is orientation-specific. Previous studies suggested that for some cells this specificity may not be fixed, but changes with the stimulus orientation presented to the RF. We demonstrate, in recording studies, that this tuning behavior is instead highly prevalent in V1 and, in theoretical work, that it arises only if V1 operates in a regime of strong local recurrence. Strongest surround suppression occurs when the stimuli in the RF and the surround are iso-oriented, and strongest facilitation when the stimuli are cross-oriented. This is the case even when the RF is sub-optimally activated by a stimulus of non-preferred orientation, but only if this stimulus can activate the cell when presented alone. This tuning behavior emerges from the interaction of lateral inhibition (via the surround pathways), which is tuned to the RF’s preferred orientation, with weakly-tuned, but strong, local recurrent connections, causing maximal withdrawal of recurrent excitation at the feedforward input orientation. Thus, horizontal and feedback modulation of strong recurrent circuits allows the tuning of contextual effects to change with changing feedforward inputs. PMID:22219292

  16. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks

    PubMed Central

    2018-01-01

    Much of the information the brain processes and stores is temporal in nature—a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds—we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. PMID:29537963

  17. Core reactivity estimation in space reactors using recurrent dynamic networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Tsai, Wei K.

    1991-01-01

    A recurrent multilayer perceptron network topology is used in the identification of nonlinear dynamic systems from only the input/output measurements. The identification is performed in the discrete time domain, with the learning algorithm being a modified form of the backpropagation (BP) rule. The recurrent dynamic network (RDN) developed is applied to the total core reactivity prediction of a spacecraft reactor from only neutronic power level measurements. Results indicate that the RDN can reproduce the nonlinear response of the reactor while keeping the number of nodes roughly equal to the relative order of the system. As accuracy requirements increase, the number of required nodes also increases; however, the order of the RDN necessary to obtain such results remains of the same order of magnitude as that of the mathematical model of the system. It is believed that use of the recurrent MLP structure with a variety of different learning algorithms may prove useful in applying artificial neural networks to recognition, classification, and prediction of dynamic systems.

  18. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    PubMed

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature: a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds. We show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  19. Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.

    PubMed

    Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung

    2007-05-01

    This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), where one is used for noise filtering and the other for recognition. The SRNFN is constructed by recurrent fuzzy if-then rules with fuzzy singletons in the consequences, and their recurrent properties make them suitable for processing speech patterns with temporal characteristics. In n words recognition, n SRNFNs are created for modeling n words, where each SRNFN receives the current frame feature and predicts the next one of its modeling word. The prediction error of each SRNFN is used as recognition criterion. In filtering, one SRNFN is created, and each SRNFN recognizer is connected to the same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the SRNFN recognizer. Experiments with Mandarin word recognition under different types of noise are performed. Other recognizers, including multilayer perceptron (MLP), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with HSRNFN for noisy speech recognition tasks.

  20. Deep Gate Recurrent Neural Network

    DTIC Science & Technology

    2016-11-22

    Abstract not indexed; the record excerpt consists of fragmentary reference strings: Schmidhuber, "A system for robotic heart surgery that learns to tie knots using recurrent neural networks," in IEEE International Conference on ...; tasks such as Machine Translation (Bahdanau et al., 2015) and Robot Reinforcement Learning (Bakker, 2001); "The main idea behind these networks is to ..."; and J. Peters, "Reinforcement learning in robotics: A survey," The International Journal of Robotics Research, 32:1238–1274, 2013, ISSN 0278-3649.

  1. A new neural observer for an anaerobic bioreactor.

    PubMed

    Belmonte-Izquierdo, R; Carlos-Hernandez, S; Sanchez, E N

    2010-02-01

    In this paper, a recurrent high order neural observer (RHONO) for anaerobic processes is proposed. The main objective is to estimate variables of methanogenesis: biomass, substrate and inorganic carbon in a completely stirred tank reactor (CSTR). The recurrent high order neural network (RHONN) structure is based on the hyperbolic tangent as activation function. The learning algorithm is based on an extended Kalman filter (EKF). The applicability of the proposed scheme is illustrated via simulation. A validation using real data from a lab scale process is included. Thus, this observer can be successfully implemented for control purposes.

  2. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network

    PubMed Central

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

    By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP and the corresponding discrete-time recurrent neural network. PMID:28955217
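The core recurrence of such a QP-solving network is a state update that descends the quadratic cost and projects the state onto the constraint set. The sketch below handles only bound constraints; the equality constraints and manipulator-specific terms of the paper's full scheme are omitted, and the step size is an illustrative choice.

```python
import numpy as np

def solve_bound_qp(Q, c, lo, hi, alpha=0.1, iters=2000):
    """Discrete-time projection network for
         min 1/2 x^T Q x + c^T x   s.t.  lo <= x <= hi.
    Each iteration is one recurrence of the network state: a gradient
    step followed by projection (clipping) onto the box."""
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(iters):
        x = np.clip(x - alpha * (Q @ x + c), lo, hi)
    return x

# Toy separable QP: unconstrained minimum (1, 3) gets clipped to (1, 2).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -6.0])
lo, hi = np.zeros(2), np.array([1.0, 2.0])
print(solve_bound_qp(Q, c, lo, hi))
```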

  3. Global stabilization analysis of inertial memristive recurrent neural networks with discrete and distributed delays.

    PubMed

    Wang, Leimin; Zeng, Zhigang; Ge, Ming-Feng; Hu, Junhao

    2018-05-02

    This paper deals with the stabilization problem of memristive recurrent neural networks with inertial terms, discrete delays, and bounded and unbounded distributed delays. First, for inertial memristive recurrent neural networks (IMRNNs) with second-order derivatives of states, an appropriate variable substitution method is invoked to transform IMRNNs into a first-order differential form. Then, based on nonsmooth analysis theory, several algebraic criteria are established for the global stabilizability of IMRNNs under the proposed feedback control, where the cases with both bounded and unbounded distributed delays are successfully addressed. Finally, the theoretical results are illustrated via numerical simulations. Copyright © 2018 Elsevier Ltd. All rights reserved.
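The variable-substitution idea can be illustrated on a scalar second-order ("inertial") equation: introducing y = x' + ξx turns it into a first-order pair that standard analysis and integration apply to. The dynamics, parameters, and substitution constant below are illustrative, without the memristive terms or delays of the paper.

```python
import numpy as np

def simulate_inertial(a, b, x0, v0, dt=0.001, steps=5000, xi=1.0):
    """Second-order dynamics x'' = -a x' - b x rewritten, via the
    substitution y = x' + xi * x, as the first-order pair
        x' = y - xi * x
        y' = x'' + xi * x' = (xi - a) x' - b x
    and integrated with forward Euler (xi is the substitution constant)."""
    x, y = x0, v0 + xi * x0
    for _ in range(steps):
        xdot = y - xi * x
        ydot = (xi - a) * xdot - b * x
        x, y = x + dt * xdot, y + dt * ydot
    return x

# Damped, stable parameters: the state decays toward zero.
print(abs(simulate_inertial(a=3.0, b=2.0, x0=1.0, v0=0.0)) < 0.1)
```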

  4. Training trajectories by continuous recurrent multilayer networks.

    PubMed

    Leistritz, L; Galicki, M; Witte, H; Kochs, E

    2002-01-01

    This paper addresses the problem of training trajectories by means of continuous recurrent neural networks whose feedforward parts are multilayer perceptrons. Such networks can approximate a general nonlinear dynamic system with arbitrary accuracy. The learning process is transformed into an optimal control framework where the weights are the controls to be determined. A training algorithm based upon a variational formulation of Pontryagin's maximum principle is proposed for such networks. Computer examples demonstrating the efficiency of the given approach are also presented.

  5. Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex

    PubMed Central

    Procyk, Emmanuel; Dominey, Peter Ford

    2016-01-01

    Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a pertinent framework to model local cortical dynamics and their contribution to higher cognitive function. PMID:27286251
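A fixed, randomly connected reservoir driven by input is a few lines of NumPy: the recurrent weights stay untrained, and only a linear readout of the states would be learned. Sizes, sparsity, and spectral radius below are illustrative choices.

```python
import numpy as np

def make_reservoir(n, density=0.1, spectral_radius=0.9, seed=0):
    """Random sparse recurrent weight matrix rescaled so its spectral
    radius is below 1, the usual echo-state stability condition."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()
    return W

def run_reservoir(W, Win, inputs):
    """Drive the fixed random reservoir with an input sequence and
    collect the state trajectory (mixed selectivity lives here)."""
    h = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        h = np.tanh(W @ h + Win @ u)   # recombines present and past inputs
        states.append(h.copy())
    return np.array(states)

rng = np.random.default_rng(1)
n = 50
W = make_reservoir(n)
Win = 0.5 * rng.standard_normal((n, 1))
states = run_reservoir(W, Win, rng.standard_normal((20, 1)))
print(states.shape)
```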

  6. Emergent latent symbol systems in recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Monner, Derek; Reggia, James A.

    2012-12-01

    Fodor and Pylyshyn [(1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3-71] famously argued that neural networks cannot behave systematically short of implementing a combinatorial symbol system. A recent response from Frank et al. [(2009). Connectionist semantic systematicity. Cognition, 110(3), 358-379] claimed to have trained a neural network to behave systematically without implementing a symbol system and without any in-built predisposition towards combinatorial representations. We believe systems like theirs may in fact implement a symbol system on a deeper and more interesting level: one where the symbols are latent - not visible at the level of network structure. In order to illustrate this possibility, we demonstrate our own recurrent neural network that learns to understand sentence-level language in terms of a scene. We demonstrate our model's learned understanding by testing it on novel sentences and scenes. By paring down our model into an architecturally minimal version, we demonstrate how it supports combinatorial computation over distributed representations by using the associative memory operations of Vector Symbolic Architectures. Knowledge of the model's memory scheme gives us tools to explain its errors and construct superior future models. We show how the model designs and manipulates a latent symbol system in which the combinatorial symbols are patterns of activation distributed across the layers of a neural network, instantiating a hybrid of classical symbolic and connectionist representations that combines advantages of both.
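The associative memory operations of Vector Symbolic Architectures mentioned above bind and unbind distributed patterns by circular convolution and correlation, which a few FFT calls implement. The dimensionality and vectors below are arbitrary; recovery is approximate and noisy, as in the model.

```python
import numpy as np

def bind(a, b):
    """Bind two vectors by circular convolution (computed via FFT)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Approximately recover b from c = bind(a, b) by circular
    correlation with a (the FFT conjugate trick)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

rng = np.random.default_rng(0)
d = 512
role, filler, other = (rng.standard_normal(d) / np.sqrt(d) for _ in range(3))
trace = bind(role, filler) + bind(other, rng.standard_normal(d) / np.sqrt(d))

# Unbinding the role from the superposed trace yields a noisy copy of
# its filler: more similar to the bound filler than to the other role.
est = unbind(trace, role)
print(est @ filler > est @ other)
```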

  7. A recurrence network approach to analyzing forced synchronization in hydrodynamic systems

    NASA Astrophysics Data System (ADS)

    Murugesan, Meenatchidevi; Zhu, Yuanhang; Li, Larry K. B.

    2016-11-01

    Hydrodynamically self-excited systems can lock into external forcing, but their lock-in boundaries and the specific bifurcations through which they lock in can be difficult to detect. We propose using recurrence networks to analyze forced synchronization in a hydrodynamic system: a low-density jet. We find that as the jet bifurcates from periodicity (unforced) to quasiperiodicity (weak forcing) and then to lock-in (strong forcing), its recurrence network changes from a regular distribution of links between nodes (unforced) to a disordered topology (weak forcing) and then to a regular distribution again at lock-in (strong forcing). The emergence of order at lock-in can be either smooth or abrupt depending on the specific lock-in route taken. Furthermore, we find that before lock-in, the probability distribution of links in the network is a function of the characteristic scales of the system, which can be quantified with network measures and used to estimate the proximity to the lock-in boundaries. This study shows that recurrence networks can be used (i) to detect lock-in, (ii) to distinguish between different routes to lock-in, and (iii) as an early warning indicator of the proximity of a system to its lock-in boundaries. This work was supported by the Research Grants Council of Hong Kong (Project No. 16235716 and 26202815).
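A recurrence network is built by delay-embedding the time series, linking state pairs closer than a threshold, and reading network measures off the adjacency matrix. The embedding dimension and threshold below are illustrative choices, not the study's calibrated values.

```python
import numpy as np

def recurrence_network(series, dim=3, eps=None):
    """Build a recurrence network: delay-embed the series (unit delay),
    connect state pairs closer than eps, and zero the self-loops.
    Returns the symmetric adjacency matrix."""
    emb = np.column_stack([series[i:len(series) - dim + 1 + i]
                           for i in range(dim)])
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    if eps is None:
        eps = 0.2 * d.max()          # illustrative threshold choice
    A = (d < eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

# A periodic signal (like the unforced jet) gives a regular link pattern.
t = np.linspace(0, 20 * np.pi, 400)
A = recurrence_network(np.sin(t))
degrees = A.sum(axis=1)
print(A.shape, degrees.mean() > 0)
```

Comparing the degree (link) distributions of such networks across forcing amplitudes is the kind of measure the study uses to locate the lock-in boundary.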

  8. Modeling long-term human activeness using recurrent neural networks for biometric data.

    PubMed

    Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin

    2017-05-18

With the invention of fitness trackers, it has become possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered independently by seven users. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the performance on predicting the activeness of the user under various length-related hyper-parameter settings. In addition, the learned model was tested to predict the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation; among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that, although the RNN models produced the best results on modeling the user's activeness, the difference was marginal, and other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with precision: for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour given the latest 15 min of his activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible. 
Such information can be utilized by a health-related application to proactively recommend suitable events or services to the user.
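As a hedged sketch of the kind of baseline the paper compares against, the following fits a sliding-window linear regression to a synthetic activeness series: each prediction uses the latest 15 samples, echoing the 15-minute window above. The data and all names are illustrative, not the study's dataset:

```python
import numpy as np

# Hypothetical activeness series (e.g. steps per minute); real data would
# come from a fitness tracker.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500)) + 50

def windows(x, w):
    """Stack sliding windows of length w as rows; target is the next value."""
    X = np.array([x[i:i + w] for i in range(len(x) - w)])
    return X, x[w:]

w = 15                                      # e.g. the latest 15 minutes
X, y = windows(series, w)
X1 = np.hstack([X, np.ones((len(X), 1))])   # bias column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
pred = X1 @ coef
```

An RNN would replace the linear map with a learned recurrent state; the abstract's point is that on this kind of short-autocorrelation data the gap between the two can be marginal.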

  9. Signal processing in local neuronal circuits based on activity-dependent noise and competition

    NASA Astrophysics Data System (ADS)

    Volman, Vladislav; Levine, Herbert

    2009-09-01

    We study the characteristics of weak signal detection by a recurrent neuronal network with plastic synaptic coupling. It is shown that in the presence of an asynchronous component in synaptic transmission, the network acquires selectivity with respect to the frequency of weak periodic stimuli. For nonperiodic frequency-modulated stimuli, the response is quantified by the mutual information between input (signal) and output (network's activity) and is optimized by synaptic depression. Introducing correlations in signal structure resulted in the decrease in input-output mutual information. Our results suggest that in neural systems with plastic connectivity, information is not merely carried passively by the signal; rather, the information content of the signal itself might determine the mode of its processing by a local neuronal circuit.

  10. Spontaneously emerging direction selectivity maps in visual cortex through STDP.

    PubMed

    Wenisch, Oliver G; Noll, Joachim; Hemmen, J Leo van

    2005-10-01

    It is still an open question as to whether, and how, direction-selective neuronal responses in primary visual cortex are generated by feedforward thalamocortical or recurrent intracortical connections, or a combination of both. Here we present an investigation that concentrates on and, only for the sake of simplicity, restricts itself to intracortical circuits, in particular, with respect to the developmental aspects of direction selectivity through spike-timing-dependent synaptic plasticity. We show that directional responses can emerge in a recurrent network model of visual cortex with spiking neurons that integrate inputs mainly from a particular direction, thus giving rise to an asymmetrically shaped receptive field. A moving stimulus that enters the receptive field from this (preferred) direction will activate a neuron most strongly because of the increased number and/or strength of inputs from this direction and since delayed isotropic inhibition will neither overlap with, nor cancel excitation, as would be the case for other stimulus directions. It is demonstrated how direction-selective responses result from spatial asymmetries in the distribution of synaptic contacts or weights of inputs delivered to a neuron by slowly conducting intracortical axonal delay lines. By means of spike-timing-dependent synaptic plasticity with an asymmetric learning window this kind of coupling asymmetry develops naturally in a recurrent network of stochastically spiking neurons in a scenario where the neurons are activated by unidirectionally moving bar stimuli and even when only intrinsic spontaneous activity drives the learning process. We also present simulation results to show the ability of this model to produce direction preference maps similar to experimental findings.
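The asymmetric spike-timing-dependent plasticity window that drives this development can be sketched as follows; the amplitudes and time constants are illustrative, not fitted to the model in the paper:

```python
import numpy as np

def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric STDP learning window (dt in ms): potentiation when the
    presynaptic spike precedes the postsynaptic one (dt > 0), depression
    otherwise."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.linspace(-100, 100, 201)   # spike-time differences in ms
dw = stdp_window(dts)
```

With a window like this, inputs arriving via slow delay lines from one side of the receptive field are systematically potentiated, which is how the coupling asymmetry described above can emerge.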

  11. Cell cycle-coupled expansion of AR activity promotes cancer progression.

    PubMed

    McNair, C; Urbanucci, A; Comstock, C E S; Augello, M A; Goodwin, J F; Launchbury, R; Zhao, S G; Schiewer, M J; Ertel, A; Karnes, J; Davicioni, E; Wang, L; Wang, Q; Mills, I G; Feng, F Y; Li, W; Carroll, J S; Knudsen, K E

    2017-03-23

    The androgen receptor (AR) is required for prostate cancer (PCa) survival and progression, and ablation of AR activity is the first line of therapeutic intervention for disseminated disease. While initially effective, recurrent tumors ultimately arise for which there is no durable cure. Despite the dependence of PCa on AR activity throughout the course of disease, delineation of the AR-dependent transcriptional network that governs disease progression remains elusive, and the function of AR in mitotically active cells is not well understood. Analyzing AR activity as a function of cell cycle revealed an unexpected and highly expanded repertoire of AR-regulated gene networks in actively cycling cells. New AR functions segregated into two major clusters: those that are specific to cycling cells and retained throughout the mitotic cell cycle ('Cell Cycle Common'), versus those that were specifically enriched in a subset of cell cycle phases ('Phase Restricted'). Further analyses identified previously unrecognized AR functions in major pathways associated with clinical PCa progression. Illustrating the impact of these unmasked AR-driven pathways, dihydroceramide desaturase 1 was identified as an AR-regulated gene in mitotically active cells that promoted pro-metastatic phenotypes, and in advanced PCa proved to be highly associated with development of metastases, recurrence after therapeutic intervention and reduced overall survival. Taken together, these findings delineate AR function in mitotically active tumor cells, thus providing critical insight into the molecular basis by which AR promotes development of lethal PCa and nominate new avenues for therapeutic intervention.

  12. Multiplex visibility graphs to investigate recurrent neural network dynamics

    NASA Astrophysics Data System (ADS)

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-03-01

A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
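The horizontal visibility criterion used to turn each neuron's activation series into a graph can be implemented directly. This is a brute-force O(n²) sketch, and the sample series is illustrative:

```python
def horizontal_visibility_graph(x):
    """Edge (i, j) exists iff every sample strictly between i and j is lower
    than both endpoints (the horizontal visibility criterion)."""
    n = len(x)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))       # consecutive samples always see each other
        for j in range(i + 2, n):
            if max(x[i + 1:j]) < min(x[i], x[j]):
                edges.add((i, j))
    return edges

edges = horizontal_visibility_graph([3, 1, 2, 5, 4])
```

In the multiplex construction above, one such graph is built per neuron and the graphs are stacked as layers, so that inter-layer statistics summarize the reservoir dynamics.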

  13. Multiplex visibility graphs to investigate recurrent neural network dynamics

    PubMed Central

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-01-01

A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods. PMID:28281563

  14. Segmented-memory recurrent neural networks.

    PubMed

    Chen, Jinmiao; Chaudhari, Narendra S

    2009-08-01

Conventional recurrent neural networks (RNNs) have difficulties in learning long-term dependencies. To tackle this problem, we propose an architecture called the segmented-memory recurrent neural network (SMRNN). A symbolic sequence is broken into segments and then presented as inputs to the SMRNN one symbol per cycle. The SMRNN uses separate internal states to store symbol-level context as well as segment-level context. The symbol-level context is updated for each symbol presented for input. The segment-level context is updated after each segment. The SMRNN is trained using an extended real-time recurrent learning algorithm. We test the performance of the SMRNN on the information latching problem, the "two-sequence problem" and the problem of protein secondary structure (PSS) prediction. Our implementation results indicate that the SMRNN performs better on long-term dependency problems than conventional RNNs. In addition, we theoretically analyze how the segmented memory of the SMRNN helps in learning long-term temporal dependencies and study the impact of the segment length.
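One plausible reading of the two-level memory scheme is sketched below: a symbol-level state updated at every input and a segment-level state updated only at segment boundaries. The weights, sizes, and the reset-at-boundary choice are illustrative assumptions, not the trained SMRNN:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_sym, d_seg = 4, 8, 8
Wx  = rng.normal(scale=0.3, size=(d_sym, d_in))    # input -> symbol state
Ws  = rng.normal(scale=0.3, size=(d_sym, d_sym))   # symbol-level recurrence
Wu  = rng.normal(scale=0.3, size=(d_seg, d_sym))   # symbol -> segment state
Wss = rng.normal(scale=0.3, size=(d_seg, d_seg))   # segment-level recurrence

def smrnn_states(symbols, seg_len):
    """Two-level memory: h tracks within-segment context at every symbol,
    u is updated only at segment boundaries (every seg_len symbols)."""
    h = np.zeros(d_sym)
    u = np.zeros(d_seg)
    for t, x in enumerate(symbols, start=1):
        h = np.tanh(Wx @ x + Ws @ h)               # symbol-level update
        if t % seg_len == 0:                       # segment boundary
            u = np.tanh(Wu @ h + Wss @ u)
            h = np.zeros(d_sym)                    # start a fresh segment
    return u

u = smrnn_states([rng.normal(size=d_in) for _ in range(12)], seg_len=4)
```

Because the segment-level state changes only once per segment, gradients need to traverse far fewer recurrent steps, which is the intuition behind the improved long-term dependency learning reported above.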

  15. A Two-Factor Model of Relapse/Recurrence Vulnerability in Unipolar Depression

    PubMed Central

    Farb, Norman A. S.; Irving, Julie A.; Anderson, Adam K.; Segal, Zindel V.

    2015-01-01

    The substantial health burden associated with Major Depressive Disorder is a product of both its high prevalence and the significant risk of relapse, recurrence and chronicity. Establishing recurrence vulnerability factors (VFs) could improve the long-term management of MDD by identifying the need for further intervention in seemingly recovered patients. We present a model of sensitization in depression vulnerability, with an emphasis on the integration of behavioral and neural systems accounts. Evidence suggests that VFs fall into two categories: dysphoric attention and dysphoric elaboration. Dysphoric attention is driven by fixation on negative life events, and is characterized behaviorally by reduced executive control, and neurally by elevated activity in the brain’s salience network. Dysphoric elaboration is driven by rumination that promotes over-general self and contextual appraisals, and is characterized behaviorally by dysfunctional attitudes, and neurally by elevated connectivity within normally-distinct prefrontal brain networks. While, at present, few prospective VF studies exist from which to catalogue a definitive neurobehavioral account, extant data support the value of the proposed two-factor model. Measuring the continued presence of these two VFs during recovery may more accurately identify remitted patients who would benefit from targeted prophylactic intervention. PMID:25688431

  16. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks

    PubMed Central

    Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks. PMID:29551968

  17. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    NASA Astrophysics Data System (ADS)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information provides fast and robust training. 
By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
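The NARX regressor construction that underlies this kind of indirect sensing can be sketched as follows, with a linear least-squares fit standing in for the recurrently trained network. The signals, lag orders, and names are illustrative, not engine data:

```python
import numpy as np

def narx_dataset(u, y, nu=3, ny=3):
    """Build NARX regressors: predict y[t] from past outputs y[t-ny..t-1]
    and past exogenous inputs u[t-nu..t-1]."""
    lag = max(nu, ny)
    X = np.array([np.concatenate([y[t - ny:t], u[t - nu:t]])
                  for t in range(lag, len(y))])
    return X, y[lag:]

# Illustrative signals standing in for crank kinematics (u) and cylinder
# pressure (y); real data would come from synchronised engine measurements.
t = np.linspace(0, 20, 500)
u = np.sin(t)
y = 0.5 * np.sin(t - 0.3) + 0.1 * np.sin(3 * t)

X, target = narx_dataset(u, y)
X1 = np.hstack([X, np.ones((len(X), 1))])        # bias column
coef, *_ = np.linalg.lstsq(X1, target, rcond=None)
mse = np.mean((X1 @ coef - target) ** 2)
```

In the paper's setting, the linear map is replaced by a recurrent network trained with RAGD, and the exogenous inputs are crank velocity and acceleration.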

  18. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using Lyapunov stability criterion.

    PubMed

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

In this paper, adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as a controller. In example 4, the FCRNN is also investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered on which the proposed controller is applied. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine

    2009-03-05

In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is thus boosted by the Fuzzy Min-Max technique.
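The second stage, refining given initial centers with k-means, can be sketched as below; the initial centers here simply stand in for the output of the Fuzzy Min-Max step, and the data are illustrative:

```python
import numpy as np

def kmeans_refine(points, centers, iters=20):
    """Lloyd's algorithm: reassign points to the nearest center, then move
    each center to the mean of its assigned points."""
    centers = centers.copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            members = points[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers, labels

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(3, 0.2, (50, 2))])
# Stand-in for the Fuzzy Min-Max initial centers:
init = np.array([[0.5, 0.5], [2.5, 2.5]])
centers, labels = kmeans_refine(pts, init)
```

The quality of the initial centers matters because Lloyd's algorithm only converges to a local minimum of the quantization error, which is exactly why a good initialization scheme boosts it.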

  20. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
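The gradient-based dynamics in question minimize ||A X - I||^2 via dX/dt = -gamma * A^T (A X - I); the Kronecker-product step turns this matrix ODE into the vector ODE d vec(X)/dt = -gamma (I kron A^T A) vec(X) + gamma vec(A^T), which "ode45" can integrate. A forward-Euler sketch in place of ode45 (step size, gain, and matrix are illustrative):

```python
import numpy as np

def gnn_inverse(A, gamma=1.0, dt=0.01, steps=2000):
    """Integrate dX/dt = -gamma * A.T @ (A @ X - I) by forward Euler.
    The equilibrium of this gradient flow is X = inv(A)."""
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros((n, n))
    for _ in range(steps):
        X -= dt * gamma * A.T @ (A @ X - I)
    return X

A = np.array([[2.0, 1.0], [1.0, 3.0]])
X = gnn_inverse(A)
```

The forward-Euler step is stable here because dt * gamma is small relative to the largest eigenvalue of A^T A; an adaptive solver such as ode45 handles this automatically.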

  1. Advanced functional network analysis in the geosciences: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Runge, Jakob; Schultz, Hanna C. H.; Wiedermann, Marc; Zech, Alraune; Feldhoff, Jan; Rheinwalt, Aljoscha; Kutza, Hannes; Radebach, Alexander; Marwan, Norbert; Kurths, Jürgen

    2013-04-01

Functional networks are a powerful tool for analyzing large geoscientific datasets such as global fields of climate time series originating from observations or model simulations. pyunicorn (pythonic unified complex network and recurrence analysis toolbox) is an open-source, fully object-oriented and easily parallelizable package written in the language Python. It allows for constructing functional networks (aka climate networks) representing the structure of statistical interrelationships in large datasets and, subsequently, investigating this structure using advanced methods of complex network theory such as measures for networks of interacting networks, node-weighted statistics or network surrogates. Additionally, pyunicorn makes it possible to study the complex dynamics of geoscientific systems, as recorded by time series, by means of recurrence networks and visibility graphs. The range of possible applications of the package is outlined, drawing on several examples from climatology.

  2. Network perturbation by recurrent regulatory variants in cancer

    PubMed Central

    Cho, Ara; Lee, Insuk; Choi, Jung Kyoon

    2017-01-01

    Cancer driving genes have been identified as recurrently affected by variants that alter protein-coding sequences. However, a majority of cancer variants arise in noncoding regions, and some of them are thought to play a critical role through transcriptional perturbation. Here we identified putative transcriptional driver genes based on combinatorial variant recurrence in cis-regulatory regions. The identified genes showed high connectivity in the cancer type-specific transcription regulatory network, with high outdegree and many downstream genes, highlighting their causative role during tumorigenesis. In the protein interactome, the identified transcriptional drivers were not as highly connected as coding driver genes but appeared to form a network module centered on the coding drivers. The coding and regulatory variants associated via these interactions between the coding and transcriptional drivers showed exclusive and complementary occurrence patterns across tumor samples. Transcriptional cancer drivers may act through an extensive perturbation of the regulatory network and by altering protein network modules through interactions with coding driver genes. PMID:28333928

  3. Correlations Decrease with Propagation of Spiking Activity in the Mouse Barrel Cortex

    PubMed Central

    Ranganathan, Gayathri Nattar; Koester, Helmut Joachim

    2011-01-01

    Propagation of suprathreshold spiking activity through neuronal populations is important for the function of the central nervous system. Neural correlations have an impact on cortical function particularly on the signaling of information and propagation of spiking activity. Therefore we measured the change in correlations as suprathreshold spiking activity propagated between recurrent neuronal networks of the mammalian cerebral cortex. Using optical methods we recorded spiking activity from large samples of neurons from two neural populations simultaneously. The results indicate that correlations decreased as spiking activity propagated from layer 4 to layer 2/3 in the rodent barrel cortex. PMID:21629764

  4. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

    ERIC Educational Resources Information Center

    Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

    2010-01-01

    Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…

  5. The Influence of Mexican Hat Recurrent Connectivity on Noise Correlations and Stimulus Encoding

    PubMed Central

    Meyer, Robert; Ladenbauer, Josef; Obermayer, Klaus

    2017-01-01

    Noise correlations are a common feature of neural responses and have been observed in many cortical areas across different species. These correlations can influence information processing by enhancing or diminishing the quality of the neural code, but the origin of these correlations is still a matter of controversy. In this computational study we explore the hypothesis that noise correlations are the result of local recurrent excitatory and inhibitory connections. We simulated two-dimensional networks of adaptive spiking neurons with local connection patterns following Gaussian kernels. Noise correlations decay with distance between neurons but are only observed if the range of excitatory connections is smaller than the range of inhibitory connections (“Mexican hat” connectivity) and if the connection strengths are sufficiently strong. These correlations arise from a moving blob-like structure of evoked activity, which is absent if inhibitory interactions have a smaller range (“inverse Mexican hat” connectivity). Spatially structured external inputs fixate these blobs to certain locations and thus effectively reduce noise correlations. We further investigated the influence of these network configurations on stimulus encoding. On the one hand, the observed correlations diminish information about a stimulus encoded by a network. On the other hand, correlated activity allows for more precise encoding of stimulus information if the decoder has only access to a limited amount of neurons. PMID:28539881
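The "Mexican hat" profile described above is a difference of Gaussians with narrow excitation and broader inhibition; a minimal sketch, with illustrative amplitudes and widths:

```python
import numpy as np

def mexican_hat(dist, a_e=1.0, s_e=1.0, a_i=0.6, s_i=2.0):
    """Difference of Gaussians: short-range excitation (width s_e) minus
    longer-range inhibition (width s_i > s_e) gives the 'Mexican hat'
    connectivity profile as a function of distance."""
    exc = a_e * np.exp(-dist ** 2 / (2 * s_e ** 2))
    inh = a_i * np.exp(-dist ** 2 / (2 * s_i ** 2))
    return exc - inh

d = np.linspace(0, 6, 200)
w = mexican_hat(d)          # positive near 0, negative at larger distances
```

Swapping the widths (s_i < s_e) yields the "inverse Mexican hat" configuration, in which, per the study above, the moving activity blob and the associated noise correlations are absent.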

  6. Network recruitment to coherent oscillations in a hippocampal computer model

    PubMed Central

    Krieger, Abba; Litt, Brian

    2011-01-01

    Coherent neural oscillations represent transient synchronization of local neuronal populations in both normal and pathological brain activity. These oscillations occur at or above gamma frequencies (>30 Hz) and often are propagated to neighboring tissue under circumstances that are both normal and abnormal, such as gamma binding or seizures. The mechanisms that generate and propagate these oscillations are poorly understood. In the present study we demonstrate, via a detailed computational model, a mechanism whereby physiological noise and coupling initiate oscillations and then recruit neighboring tissue, in a manner well described by a combination of stochastic resonance and coherence resonance. We develop a novel statistical method to quantify recruitment using several measures of network synchrony. This measurement demonstrates that oscillations spread via preexisting network connections such as interneuronal connections, recurrent synapses, and gap junctions, provided that neighboring cells also receive sufficient inputs in the form of random synaptic noise. “Epileptic” high-frequency oscillations (HFOs), produced by pathologies such as increased synaptic activity and recurrent connections, were superior at recruiting neighboring tissue. “Normal” HFOs, associated with fast firing of inhibitory cells and sparse pyramidal cell firing, tended to suppress surrounding cells and showed very limited ability to recruit. These findings point to synaptic noise and physiological coupling as important targets for understanding the generation and propagation of both normal and pathological HFOs, suggesting potential new diagnostic and therapeutic approaches to human disorders such as epilepsy. PMID:21273309

  7. The experimental identification of magnetorheological dampers and evaluation of their controllers

    NASA Astrophysics Data System (ADS)

    Metered, H.; Bonello, P.; Oyadiji, S. O.

    2010-05-01

    Magnetorheological (MR) fluid dampers are semi-active control devices that have been applied over a wide range of practical vibration control applications. This paper concerns the experimental identification of the dynamic behaviour of an MR damper and the use of the identified parameters in the control of such a damper. Feed-forward and recurrent neural networks are used to model both the direct and inverse dynamics of the damper. Training and validation of the proposed neural networks are achieved by using the data generated through dynamic tests with the damper mounted on a tensile testing machine. The validation test results clearly show that the proposed neural networks can reliably represent both the direct and inverse dynamic behaviours of an MR damper. The effect of the cylinder's surface temperature on both the direct and inverse dynamics of the damper is studied, and the neural network model is shown to be reasonably robust against significant temperature variation. The inverse recurrent neural network model is introduced as a damper controller and experimentally evaluated against alternative controllers proposed in the literature. The results reveal that the neural-based damper controller offers superior damper control. This observation and the added advantages of low-power requirement, extended service life of the damper and the minimal use of sensors, indicate that a neural-based damper controller potentially offers the most cost-effective vibration control solution among the controllers investigated.

  8. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

In recent years, complex-valued recurrent neural networks have been developed and analysed in depth in view of their good modelling performance for some applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is necessary to utilize a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results of several numerical examples are delineated to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
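A minimal discrete-time complex-valued recurrent update can be sketched as follows. The modulus-phase activation, weights, and sizes are illustrative assumptions, and the time-delays analysed in the paper are omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
W = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * 0.1
b = rng.normal(size=n) + 1j * rng.normal(size=n)

def act(z):
    """A common complex activation: tanh applied to the modulus, phase kept."""
    return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

def iterate(z0, steps=200):
    """Discrete-time state update z[k+1] = act(W z[k] + b)."""
    z = z0
    for _ in range(steps):
        z = act(W @ z + b)
    return z

z = iterate(np.zeros(n, dtype=complex))
```

Stability results of the kind proved in the paper give conditions on W (and the delays) under which such iterates converge to a unique equilibrium or periodic orbit.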

  9. Recurrent Neural Network for Computing the Drazin Inverse.

    PubMed

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and examples of application to practical engineering problems are discussed to show the efficacy of the proposed neural network.
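For reference alongside the paper's real-time RNN, the Drazin inverse can also be computed offline from the standard closed-form identity A^D = A^k pinv(A^(2k+1)) A^k, valid for any k at least the index of A. The example matrices are illustrative:

```python
import numpy as np

def drazin(A, k=None):
    """Drazin inverse via A^D = A^k @ pinv(A^(2k+1)) @ A^k, valid for any
    k >= index(A); k = n always suffices for an n x n matrix."""
    n = A.shape[0]
    if k is None:
        k = n
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# For an invertible matrix the Drazin inverse is the ordinary inverse
A = np.array([[2.0, 1.0], [0.0, 1.0]])
AD = drazin(A)
```

For a nilpotent matrix the same formula returns the zero matrix, which is the correct Drazin inverse in that case.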

  10. Anti-correlated cortical networks arise from spontaneous neuronal dynamics at slow timescales.

    PubMed

    Kodama, Nathan X; Feng, Tianyi; Ullett, James J; Chiel, Hillel J; Sivakumar, Siddharth S; Galán, Roberto F

    2018-01-12

    In the highly interconnected architectures of the cerebral cortex, recurrent intracortical loops disproportionately outnumber thalamo-cortical inputs. These networks are also capable of generating neuronal activity without feedforward sensory drive. It is unknown, however, what spatiotemporal patterns may be solely attributed to intrinsic connections of the local cortical network. Using high-density microelectrode arrays, here we show that in the isolated, primary somatosensory cortex of mice, neuronal firing fluctuates on timescales from milliseconds to tens of seconds. Slower firing fluctuations reveal two spatially distinct neuronal ensembles, which correspond to superficial and deeper layers. These ensembles are anti-correlated: when one fires more, the other fires less and vice versa. This interplay is clearest at timescales of several seconds and is therefore consistent with shifts between active sensing and anticipatory behavioral states in mice.

  11. Population equations for degree-heterogeneous neural networks

    NASA Astrophysics Data System (ADS)

    Kähne, M.; Sokolov, I. M.; Rüdiger, S.

    2017-11-01

    We develop a statistical framework for studying recurrent networks with broad distributions of the number of synaptic links per neuron. We treat each group of neurons with equal input degree as one population and derive a system of equations determining the population-averaged firing rates. The derivation rests on an assumption of a large number of neurons and, additionally, an assumption of a large number of synapses per neuron. For the case of binary neurons, analytical solutions can be constructed, which correspond to steps in the activity versus degree space. We apply this theory to networks with degree-correlated topology and show that complex, multi-stable regimes can result for increasing correlations. Our work is motivated by the recent finding of subnetworks of highly active neurons and the fact that these neurons tend to be connected to each other with higher probability.
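    A minimal sketch of the population idea (illustrative assumptions only: uncorrelated connectivity, a smooth sigmoid standing in for binary units, and made-up degrees and parameters) reproduces the step-like activity-versus-degree profile the abstract mentions. Assuming each synapse samples the network-average rate m, a degree-k population receives input J·k·m, and the network reduces to a few scalar fixed-point equations:

```python
import numpy as np

# Hypothetical degree-population mean field: three populations grouped by
# in-degree, each seeing input J*k*m where m is the network-average rate.
degrees = np.array([5, 20, 80])        # in-degree of each population
p = np.array([0.5, 0.3, 0.2])          # fraction of neurons per population
J, theta, beta = 0.1, 1.0, 8.0         # coupling, threshold, gain

def f(x):                              # smooth stand-in for a binary unit
    return 1.0 / (1.0 + np.exp(-beta * x))

r = np.full(3, 0.5)
for _ in range(500):                   # iterate the population equations
    m = p @ r                          # population-averaged network activity
    r = f(J * degrees * m - theta)

print(np.round(r, 3))                  # step-like profile: low, low, high
```

The high-degree population saturates while low-degree populations stay near zero, i.e., activity forms steps in degree space.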

  12. Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.

    PubMed

    Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng

    2017-03-01

    Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both networks are demonstrated on the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.

  13. Back-propagation learning of infinite-dimensional dynamical systems.

    PubMed

    Tokuda, Isao; Tokunaga, Ryuji; Aihara, Kazuyuki

    2003-10-01

    This paper presents numerical studies of applying back-propagation learning to a delayed recurrent neural network (DRNN). The DRNN is a continuous-time recurrent neural network having time delayed feedbacks and the back-propagation learning is to teach spatio-temporal dynamics to the DRNN. Since the time-delays make the dynamics of the DRNN infinite-dimensional, the learning algorithm and the learning capability of the DRNN are different from those of the ordinary recurrent neural network (ORNN) having no time-delays. First, two types of learning algorithms are developed for a class of DRNNs. Then, using chaotic signals generated from the Mackey-Glass equation and the Rössler equations, learning capability of the DRNN is examined. Comparing the learning algorithms, learning capability, and robustness against noise of the DRNN with those of the ORNN and time delay neural network, advantages as well as disadvantages of the DRNN are investigated.
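    The Mackey-Glass equation used as a benchmark above can be integrated with a simple Euler scheme and a delay buffer (standard parameters β = 0.2, γ = 0.1, n = 10; the delay τ = 17 gives chaotic dynamics; step size is an illustrative choice):

```python
import numpy as np

# Euler integration of the Mackey-Glass delay differential equation
# dx/dt = beta * x(t - tau) / (1 + x(t - tau)^n) - gamma * x(t).
beta, gamma, n_exp, tau = 0.2, 0.1, 10, 17.0
dt = 0.1
delay = int(tau / dt)                  # delay expressed in time steps

x = np.empty(30000)
x[:delay + 1] = 1.2                    # constant initial history
for t in range(delay, len(x) - 1):
    xd = x[t - delay]                  # the delayed state x(t - tau)
    x[t + 1] = x[t] + dt * (beta * xd / (1.0 + xd ** n_exp) - gamma * x[t])

print(x[-5:])                          # tail of the chaotic trajectory
```

The time delay makes the state of the system an entire history segment, which is exactly what makes the DRNN's dynamics infinite-dimensional.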

  14. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    PubMed Central

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-01-01

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction. PMID:28672867

  15. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks.

    PubMed

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-06-26

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  16. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    PubMed

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm are used to determine the depth of anesthesia during the maintenance stage of anesthesia and to estimate the amount of medicine to be applied at that moment. Feedforward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data have been recorded with a Nihon Kohden 9200 brand 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in assembling the recording electrodes. EEG data have been sampled once every 2 milliseconds. The artificial neural network has been designed so as to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The network inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the current segment in that range to the total PSD power of a segment recorded prior to anesthesia.
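    The PSD features described above can be sketched as follows (synthetic sinusoid-plus-noise signals stand in for real EEG; the 2 ms sampling, 10 s segments, and 1-50 Hz band follow the abstract, everything else is illustrative):

```python
import numpy as np

# Band-power features from 10 s segments sampled at 500 Hz (2 ms period).
fs, seg_len = 500, 10 * 500
rng = np.random.default_rng(1)
t = np.arange(seg_len) / fs

# Synthetic stand-ins for EEG: a pre-anesthesia baseline and a "current"
# segment with an attenuated 10 Hz rhythm.
baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(seg_len)
current = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(seg_len)

def band_power(x, lo=1.0, hi=50.0):
    """Total periodogram power of x between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum()

ratio = band_power(current) / band_power(baseline)
print(round(ratio, 3))                 # < 1: the rhythm has weakened
```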

  17. A loop-based neural architecture for structured behavior encoding and decoding.

    PubMed

    Gisiger, Thomas; Boukadoum, Mounir

    2018-02-01

    We present a new type of artificial neural network that generalizes on anatomical and dynamical aspects of the mammal brain. Its main novelty lies in its topological structure which is built as an array of interacting elementary motifs shaped like loops. These loops come in various types and can implement functions such as gating, inhibitory or executive control, or encoding of task elements to name a few. Each loop features two sets of neurons and a control region, linked together by non-recurrent projections. The two neural sets do the bulk of the loop's computations while the control unit specifies the timing and the conditions under which the computations implemented by the loop are to be performed. By functionally linking many such loops together, a neural network is obtained that may perform complex cognitive computations. To demonstrate the potential offered by such a system, we present two neural network simulations. The first illustrates the structure and dynamics of a single loop implementing a simple gating mechanism. The second simulation shows how connecting four loops in series can produce neural activity patterns that are sufficient to pass a simplified delayed-response task. We also show that this network reproduces electrophysiological measurements gathered in various regions of the brain of monkeys performing similar tasks. We also demonstrate connections between this type of neural network and recurrent or long short-term memory network models, and suggest ways to generalize them for future artificial intelligence research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A modular architecture for transparent computation in recurrent neural networks.

    PubMed

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Short-term memory capacity in networks via the restricted isometry property.

    PubMed

    Charles, Adam S; Yap, Han Lun; Rozell, Christopher J

    2014-06-01

    Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.

  20. Dynamic neural network models of the premotoneuronal circuitry controlling wrist movements in primates.

    PubMed

    Maier, M A; Shupe, L E; Fetz, E E

    2005-10-01

    Dynamic recurrent neural networks were derived to simulate neuronal populations generating bidirectional wrist movements in the monkey. The models incorporate anatomical connections of cortical and rubral neurons, muscle afferents, segmental interneurons and motoneurons; they also incorporate the response profiles of four populations of neurons observed in behaving monkeys. The networks were derived by gradient descent algorithms to generate the eight characteristic patterns of motor unit activations observed during alternating flexion-extension wrist movements. The resulting model generated the appropriate input-output transforms and developed connection strengths resembling those in physiological pathways. We found that this network could be further trained to simulate additional tasks, such as experimentally observed reflex responses to limb perturbations that stretched or shortened the active muscles, and scaling of response amplitudes in proportion to inputs. In the final comprehensive network, motor units are driven by the combined activity of cortical, rubral, spinal and afferent units during step tracking and perturbations. The model displayed many emergent properties corresponding to physiological characteristics. The resulting neural network provides a working model of premotoneuronal circuitry and elucidates the neural mechanisms controlling motoneuron activity. It also predicts several features to be experimentally tested, for example the consequences of eliminating inhibitory connections in cortex and red nucleus. It also reveals that co-contraction can be achieved by simultaneous activation of the flexor and extensor circuits without invoking features specific to co-contraction.

  1. Computations in the deep vs superficial layers of the cerebral cortex.

    PubMed

    Rolls, Edmund T; Mills, W Patrick C

    2017-11-01

    A fundamental question is how the cerebral neocortex operates functionally and computationally. The cerebral neocortex, with its superficial and deep layers and highly developed recurrent collateral systems that provide a basis for memory-related processing, might perform somewhat different computations in the superficial and deep layers. Here we take into account the quantitative connectivity within and between laminae. Using integrate-and-fire neuronal network simulations that incorporate this connectivity, we first show that attractor networks implemented in the deep layers and activated by the superficial layers could be partly independent, in that the deep layers might have a different time course, which, because of adaptation, might be more transient and thus useful for outputs from the neocortex. In contrast, the superficial layers could implement more prolonged firing, useful for slow learning and for short-term memory. Second, we show that a different type of computation could in principle be performed in the superficial and deep layers: the superficial layers could operate as a discrete attractor network useful for categorisation and for feeding information forward up a cortical hierarchy, whereas the deep layers could operate as a continuous attractor network useful for providing a spatially and temporally smooth output to output systems in the brain. A key advance is that we draw attention to the functions of the recurrent collateral connections between cortical pyramidal cells, often omitted in canonical models of the neocortex, and address principles of operation of the neocortex by which the superficial and deep layers might be specialized for different types of attractor-related memory functions implemented by the recurrent collaterals. Copyright © 2017 Elsevier Inc. All rights reserved.
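    The discrete-attractor computation attributed to the superficial layers can be illustrated with a minimal Hopfield-style network (a generic toy, not the authors' integrate-and-fire simulation): recurrent collaterals store binary patterns and complete a degraded cue to the nearest stored category:

```python
import numpy as np

# Minimal discrete attractor network: Hebbian outer-product weights,
# synchronous recurrent updates, pattern completion from a corrupted cue.
rng = np.random.default_rng(2)
n, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

W = (patterns.T @ patterns) / n        # Hebbian learning of the patterns
np.fill_diagonal(W, 0.0)               # no self-connections

cue = patterns[0].copy()
flip = rng.choice(n, size=15, replace=False)
cue[flip] *= -1                        # corrupt 15% of the cue

state = cue
for _ in range(10):                    # recurrent-collateral dynamics
    state = np.where(W @ state >= 0, 1, -1)

overlap = (state @ patterns[0]) / n
print("overlap with stored pattern:", overlap)
```

The network falls into the basin of the stored pattern, which is the categorisation operation attributed to discrete attractors above.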

  2. Regenerating time series from ordinal networks.

    PubMed

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

    Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discrete sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.

  3. Regenerating time series from ordinal networks

    NASA Astrophysics Data System (ADS)

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

    Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discrete sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
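    A bare-bones version of the ordinal-network construction and random-walk regeneration described above might look like this (the embedding dimension and the noisy test signal are arbitrary illustrative choices):

```python
import numpy as np

# Ordinal network sketch: each length-d window maps to its ordinal pattern
# (rank order); transitions between consecutive patterns define a small
# directed network, and a random walk on it regenerates a symbolic series.
rng = np.random.default_rng(3)
x = np.sin(0.3 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)

d = 3                                  # embedding dimension (illustrative)
def pattern(w):
    return tuple(np.argsort(w))        # ordinal pattern of one window

symbols = [pattern(x[i:i + d]) for i in range(len(x) - d + 1)]
nodes = sorted(set(symbols))
idx = {s: i for i, s in enumerate(nodes)}

T = np.zeros((len(nodes), len(nodes)))
for a, b in zip(symbols, symbols[1:]):
    T[idx[a], idx[b]] += 1             # count pattern-to-pattern transitions
T /= T.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

state = idx[symbols[0]]
walk = [state]
for _ in range(500):                   # random-walk surrogate sequence
    state = rng.choice(len(nodes), p=T[state])
    walk.append(state)

print(len(nodes), "ordinal patterns; surrogate length", len(walk))
```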

  4. Finite-time synchronization control of a class of memristor-based recurrent neural networks.

    PubMed

    Jiang, Minghui; Wang, Shuangtao; Mei, Jun; Shen, Yanjun

    2015-03-01

    This paper presents a global and local finite-time synchronization control law for memristor neural networks. By utilizing the drive-response concept, differential inclusions theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and the corresponding slave memristor-based neural network with the designed controller. In comparison with the existing results, the proposed stability conditions are new, and the obtained results extend some previous works on conventional recurrent neural networks. Two numerical examples are provided to illustrate the effectiveness of the design method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik; Marwan, Norbert; Dijkstra, Henk; Kurths, Jürgen

    2016-04-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology. pyunicorn is available online at https://github.com/pik-copan/pyunicorn. Reference: J.F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q.-Y. Feng, L. Tupikina, V. Stolbova, R.V. Donner, N. Marwan, H.A. Dijkstra, and J. Kurths, Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package, Chaos 25, 113101 (2015), DOI: 10.1063/1.4934554, Preprint: arxiv.org:1507.01571 [physics.data-an].
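    The ɛ-recurrence networks mentioned above can be built in a few lines (pyunicorn provides this functionality plus network measures and surrogates; the sketch below uses plain NumPy on a toy trajectory): nodes are sampled states, linked whenever they lie within ɛ of each other in phase space:

```python
import numpy as np

# eps-recurrence network from a toy circular trajectory: the thresholded
# pairwise-distance matrix is both a recurrence plot and an adjacency matrix.
t = np.linspace(0, 20 * np.pi, 400)
traj = np.column_stack([np.sin(t), np.cos(t)])   # toy attractor in 2-D

diff = traj[:, None, :] - traj[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))         # pairwise phase-space distances
eps = 0.2
A = (dist < eps).astype(int)
np.fill_diagonal(A, 0)                           # drop self-loops

density = A.sum() / (len(traj) * (len(traj) - 1))
print("mean degree:", A.sum(axis=1).mean(), "edge density:", round(density, 4))
```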

  6. Medical Concept Normalization in Social Media Posts with Recurrent Neural Networks.

    PubMed

    Tutubalina, Elena; Miftahutdinov, Zulfat; Nikolenko, Sergey; Malykh, Valentin

    2018-06-12

    Text mining of scientific libraries and social media has already proven itself as a reliable tool for drug repurposing and hypothesis generation. The task of mapping a disease mention to a concept in a controlled vocabulary, typically to the standard thesaurus in the Unified Medical Language System (UMLS), is known as medical concept normalization. This task is challenging due to the differences in the use of medical terminology between health care professionals and social media texts coming from the lay public. To bridge this gap, we use sequence learning with recurrent neural networks and semantic representation of one- or multi-word expressions: we develop end-to-end architectures directly tailored to the task, including bidirectional Long Short-Term Memory, Gated Recurrent Units with an attention mechanism, and additional semantic similarity features based on UMLS. Our evaluation against a standard benchmark shows that recurrent neural networks improve results over an effective baseline for classification based on convolutional neural networks. A qualitative examination of mentions discovered in a dataset of user reviews collected from popular online health information platforms as well as a quantitative evaluation both show improvements in the semantic representation of health-related expressions in social media. Copyright © 2018. Published by Elsevier Inc.

  7. Columnar interactions determine horizontal propagation of recurrent network activity in neocortex

    PubMed Central

    Wester, Jason C.; Contreras, Diego

    2012-01-01

    The cortex is organized in vertical and horizontal circuits that determine the spatiotemporal properties of distributed cortical activity. Despite detailed knowledge of synaptic interactions among individual cells in the neocortex, little is known about the rules governing interactions among local populations. Here we used self-sustained recurrent activity generated in cortex, also known as up-states, in rat thalamocortical slices in vitro to understand interactions among laminar and horizontal circuits. By means of intracellular recordings and fast optical imaging with voltage sensitive dyes, we show that single thalamic inputs activate the cortical column in a preferential L4→L2/3→L5 sequence, followed by horizontal propagation with a leading front in supra and infragranular layers. To understand the laminar and columnar interactions, we used focal injections of TTX to block activity in small local populations, while preserving functional connectivity in the rest of the network. We show that L2/3 alone, without underlying L5, does not generate self-sustained activity and is inefficient propagating activity horizontally. In contrast, L5 sustains activity in the absence of L2/3 and is necessary and sufficient to propagate activity horizontally. However, loss of L2/3 delays horizontal propagation via L5. Finally, L5 amplifies activity in L2/3. Our results show for the first time that columnar interactions between supra and infragranular layers are required for the normal propagation of activity in the neocortex. Our data suggest that supra and infragranular circuits with their specific and complex set of inputs and outputs, work in tandem to determine the patterns of cortical activation observed in vivo. PMID:22514308

  8. Integrative gene network construction to analyze cancer recurrence using semi-supervised learning.

    PubMed

    Park, Chihyun; Ahn, Jaegyoon; Kim, Hyunjin; Park, Sanghyun

    2014-01-01

    The prognosis of cancer recurrence is an important research area in bioinformatics and is challenging due to the small sample sizes compared to the vast number of genes. There have been several attempts to predict cancer recurrence. Most studies employed a supervised approach, which uses only a few labeled samples. Semi-supervised learning can be a great alternative to solve this problem. There have been few attempts based on manifold assumptions to reveal the detailed roles of identified cancer genes in recurrence. In order to predict cancer recurrence, we proposed a novel semi-supervised learning algorithm based on a graph regularization approach. We transformed the gene expression data into a graph structure for semi-supervised learning and integrated protein interaction data with the gene expression data to select functionally-related gene pairs. Then, we predicted the recurrence of cancer by applying a regularization approach to the constructed graph containing both labeled and unlabeled nodes. The average improvement rate of accuracy for three different cancer datasets was 24.9% compared to existing supervised and semi-supervised methods. We performed functional enrichment on the gene networks used for learning. We identified that those gene networks are significantly associated with cancer-recurrence-related biological functions. Our algorithm was developed with standard C++ and is available in Linux and MS Windows formats in the STL library. The executable program is freely available at: http://embio.yonsei.ac.kr/~Park/ssl.php.
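    A hedged sketch of graph-based semi-supervised learning (generic label propagation with clamped labels, in the same family as, but not identical to, the authors' regularization approach) shows how a few labeled nodes can classify the rest of a graph:

```python
import numpy as np

# Label propagation on a tiny 6-node graph: nodes 0-2 form one cluster,
# nodes 3-5 the other, with a single bridge edge (2-3). Only nodes 0 and 5
# are labeled; their labels spread to neighbours and are re-clamped each step.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0]], float)

labels = {0: 0, 5: 1}                  # the two labeled samples
n, k = len(A), 2

P = np.diag(1.0 / A.sum(axis=1)) @ A   # row-normalized propagation matrix

F = np.zeros((n, k))
for node, lab in labels.items():
    F[node, lab] = 1.0

for _ in range(100):                   # propagate, then clamp known labels
    F = P @ F
    for node, lab in labels.items():
        F[node] = 0.0
        F[node, lab] = 1.0

pred = F.argmax(axis=1)
print(pred)                            # each cluster inherits its seed label
```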

  9. Control of magnetic bearing systems via the Chebyshev polynomial-based unified model (CPBUM) neural network.

    PubMed

    Jeng, J T; Lee, T T

    2000-01-01

    A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control magnetic bearing systems. First, we show that the CPBUM neural network not only has the same universal approximation capability as conventional feedforward/recurrent neural networks, but also learns faster. It turns out that the CPBUM neural network is more suitable for controller design than the conventional feedforward/recurrent neural network. Second, we propose the inverse system method, based on CPBUM neural networks, to control a magnetic bearing system. The proposed controller has two structures, namely, off-line and on-line learning structures. We derive a new learning algorithm for each proposed structure. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
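    The Chebyshev expansion underlying such a model can be sketched as a functional-link network: expand the input with the basis T_0..T_m via the recurrence T_k = 2x·T_(k-1) - T_(k-2), then fit only a linear readout (plain least squares here, rather than the paper's learning algorithms; target function and degree are illustrative):

```python
import numpy as np

# Chebyshev functional-link sketch: polynomial features plus linear readout.
def chebyshev_features(x, m):
    """Columns T_0(x)..T_m(x) built with the Chebyshev recurrence."""
    T = [np.ones_like(x), x]
    for _ in range(2, m + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.column_stack(T[:m + 1])

x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) * np.exp(-x ** 2)    # smooth target on [-1, 1]

Phi = chebyshev_features(x, m=10)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
err = np.max(np.abs(Phi @ w - y))
print("max abs error:", err)
```

Because only the readout weights are trained, fitting reduces to a linear problem, which is one intuition for the faster learning claimed above.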

  10. An artificial network model for estimating the network structure underlying partially observed neuronal signals.

    PubMed

    Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun

    2014-01-01

    Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  11. Dynamic Balance of Excitation and Inhibition in Human and Monkey Neocortex

    NASA Astrophysics Data System (ADS)

    Dehghani, Nima; Peyrache, Adrien; Telenczuk, Bartosz; Le van Quyen, Michel; Halgren, Eric; Cash, Sydney S.; Hatsopoulos, Nicholas G.; Destexhe, Alain

    2016-03-01

    Balance of excitation and inhibition is a fundamental feature of in vivo network activity and is important for its computations. However, its presence in the neocortex of higher mammals is not well established. We investigated the dynamics of excitation and inhibition using dense multielectrode recordings in humans and monkeys. We found that in all states of the wake-sleep cycle, excitatory and inhibitory ensembles are well balanced and co-fluctuate with slight instantaneous deviations from perfect balance, mostly in slow-wave sleep. Remarkably, these correlated fluctuations are seen across many different temporal scales. The similarity of these computational features with a network model of self-generated balanced states suggests that such balanced activity is essentially generated by recurrent activity in the local network and is not due to external inputs. Finally, we find that this balance breaks down during seizures, in which the temporal correlation of excitatory and inhibitory populations is disrupted. These results show that balanced activity is a feature of normal brain activity, and breakdown of the balance could be an important factor in defining pathological states.

  12. The geometry of chaotic dynamics — a complex network perspective

    NASA Astrophysics Data System (ADS)

    Donner, R. V.; Heitzig, J.; Donges, J. F.; Zou, Y.; Marwan, N.; Kurths, J.

    2011-12-01

    Recently, several complex network approaches to time series analysis have been developed and applied to study a wide range of model systems as well as real-world data, e.g., geophysical or financial time series. Among these techniques, recurrence-based concepts, most prominently ɛ-recurrence networks, most faithfully represent the geometrical fine structure of the attractors underlying chaotic (and, less interestingly, non-chaotic) time series. In this paper we demonstrate that the well-known graph-theoretical properties of local clustering coefficient and global (network) transitivity can meaningfully be exploited to define two new local and two new global measures of dimension in phase space: local upper and lower clustering dimension, as well as global upper and lower transitivity dimension. Rigorous analytical as well as numerical results for self-similar sets and simple chaotic model systems suggest that these measures are well-behaved in most non-pathological situations and that they can be estimated reasonably well using ɛ-recurrence networks constructed from relatively short time series. Moreover, we study the relationship between clustering and transitivity dimensions on the one hand, and traditional measures like pointwise dimension or local Lyapunov dimension on the other. We also provide further evidence that the local clustering coefficients, or equivalently the local clustering dimensions, are useful for identifying unstable periodic orbits and other dynamically invariant objects from time series. Our results demonstrate that ɛ-recurrence networks exhibit an important link between dynamical systems and graph theory.
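
    For concreteness, a small numpy sketch of an ɛ-recurrence network and its global transitivity (illustrative embedding parameters and threshold; the dimension estimators defined in the paper are not reproduced here):

```python
import numpy as np

def recurrence_network(x, eps, dim=3, tau=1):
    """Adjacency matrix of an eps-recurrence network: time-delay embed the
    series, then link states closer than eps in phase space (no self-loops)."""
    n = len(x) - (dim - 1) * tau
    states = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    A = (d < eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

def transitivity(A):
    """Global transitivity = 3 * triangles / connected triples."""
    six_tri = np.trace(A @ A @ A)       # counts each triangle 6 times
    k = A.sum(axis=1)
    two_triples = (k * (k - 1)).sum()   # counts each connected triple twice
    return six_tri / two_triples if two_triples else 0.0

# Example: the logistic map in its chaotic regime.
x = np.empty(500); x[0] = 0.4
for t in range(499):
    x[t + 1] = 4.0 * x[t] * (1 - x[t])
A = recurrence_network(x, eps=0.2)
print(0.0 < transitivity(A) <= 1.0)
```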

  13. Inference in the brain: Statistics flowing in redundant population codes

    PubMed Central

    Pitkow, Xaq; Angelaki, Dora E

    2017-01-01

    It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors. PMID:28595050

  14. Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.

    PubMed

    Bush, Daniel; Philippides, Andrew; Husbands, Phil; O'Shea, Michael

    2010-07-01

    The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode a range of spatial and non-spatial stimuli. It has also been demonstrated that phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotyped learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta-coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phase, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
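
    An additive pairwise STDP window of the kind such models typically build on can be sketched in a few lines (the parameter values are hypothetical, not taken from the paper): causal pre-before-post pairings strengthen a synapse, anti-causal pairings weaken it, which is how asymmetric, sequence-predicting connections can arise.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair, dt = t_post - t_pre (ms).
    Potentiation for pre-before-post (dt >= 0), depression otherwise, with
    exponentially decaying windows."""
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)

# Pre leading post by 5 ms strengthens the synapse; the reverse weakens it.
print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)
```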

  15. Phase-locking and bistability in neuronal networks with synaptic depression

    NASA Astrophysics Data System (ADS)

    Akcay, Zeynep; Huang, Xinxian; Nadim, Farzan; Bose, Amitabha

    2018-02-01

    We consider a recurrent network of two oscillatory neurons that are coupled with inhibitory synapses. We use the phase response curves of the neurons and the properties of short-term synaptic depression to define Poincaré maps for the activity of the network. The fixed points of these maps correspond to phase-locked modes of the network. Using these maps, we analyze the conditions that allow short-term synaptic depression to lead to the existence of bistable phase-locked, periodic solutions. We show that bistability arises when either the phase response curve of the neuron or the short-term depression profile changes steeply enough. The results apply to any Type I oscillator and we illustrate our findings using the Quadratic Integrate-and-Fire and Morris-Lecar neuron models.
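
    The phase-locked modes described above correspond to fixed points of a one-dimensional phase map. A minimal numerical sketch, using a hypothetical sinusoidal coupling function rather than the paper's depression-dependent Poincaré map:

```python
import numpy as np

# Phase locking as a fixed point of a 1D phase map phi_{n+1} = G(phi_n):
# a stable phase-locked mode is a fixed point G(phi*) = phi* with
# |G'(phi*)| < 1. The map below is an illustrative stand-in, NOT the
# depression-dependent map derived in the paper.
def G(phi, eps=0.3):
    return (phi + eps * np.sin(2 * np.pi * phi)) % 1.0

phi = 0.2
for _ in range(200):        # iterate the map from an arbitrary start phase
    phi = G(phi)

# G has fixed points at phi = 0 (unstable) and phi = 0.5 (stable, since
# |G'(0.5)| = |1 - 2*pi*eps| < 1 for this eps), so iteration settles at 0.5.
print(abs(phi - 0.5) < 1e-6)
```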

  16. Noise promotes independent control of gamma oscillations and grid firing within recurrent attractor networks

    PubMed Central

    Solanka, Lukas; van Rossum, Mark CW; Nolan, Matthew F

    2015-01-01

    Neural computations underlying cognitive functions require calibration of the strength of excitatory and inhibitory synaptic connections and are associated with modulation of gamma frequency oscillations in network activity. However, principles relating gamma oscillations, synaptic strength and circuit computations are unclear. We address this in attractor network models that account for grid firing and theta-nested gamma oscillations in the medial entorhinal cortex. We show that moderate intrinsic noise massively increases the range of synaptic strengths supporting gamma oscillations and grid computation. With moderate noise, variation in excitatory or inhibitory synaptic strength tunes the amplitude and frequency of gamma activity without disrupting grid firing. This beneficial role for noise results from disruption of epileptic-like network states. Thus, moderate noise promotes independent control of multiplexed firing rate- and gamma-based computational mechanisms. Our results have implications for tuning of normal circuit function and for disorders associated with changes in gamma oscillations and synaptic strength. DOI: http://dx.doi.org/10.7554/eLife.06444.001 PMID:26146940

  17. Dynamics of intracranial electroencephalographic recordings from epilepsy patients using univariate and bivariate recurrence networks.

    PubMed

    Subramaniyam, Narayan Puthanmadam; Hyttinen, Jari

    2015-02-01

    Recently Andrzejak et al. combined the randomness and nonlinear independence test with iterative amplitude adjusted Fourier transform (iAAFT) surrogates to distinguish between the dynamics of seizure-free intracranial electroencephalographic (EEG) signals recorded from epileptogenic (focal) and nonepileptogenic (nonfocal) brain areas of epileptic patients. However, stationarity is part of the null hypothesis for iAAFT surrogates and thus nonstationarity can violate the null hypothesis. In this work we first propose the application of randomness and nonlinear independence tests based on recurrence network measures to distinguish between the dynamics of focal and nonfocal EEG signals. Furthermore, we combine these tests with both iAAFT and truncated Fourier transform (TFT) surrogate methods; the latter preserves the nonstationarity of the original data in the surrogates along with its linear structure. Our results indicate that focal EEG signals exhibit an increased degree of structural complexity and interdependency compared to nonfocal EEG signals. In general, we find higher rejections for randomness and nonlinear independence tests for focal EEG signals compared to nonfocal EEG signals. In particular, the univariate recurrence network measures, the average clustering coefficient C and assortativity R, and the bivariate recurrence network measure, the average cross-clustering coefficient C(cross), can successfully distinguish between the focal and nonfocal EEG signals, even when the analysis is restricted to nonstationary signals, irrespective of the type of surrogates used. On the other hand, we find that the univariate recurrence network measures, the average path length L and the average betweenness centrality BC, fail to distinguish between the focal and nonfocal EEG signals when iAAFT surrogates are used. However, these two measures can distinguish between focal and nonfocal EEG signals when TFT surrogates are used for nonstationary signals. We also report an improvement in the performance of the nonlinear prediction error N and nonlinear interdependence measure L used by Andrzejak et al. when TFT surrogates are used for nonstationary EEG signals. We also find that the outcome of the nonlinear independence test based on the average cross-clustering coefficient C(cross) is independent of the outcome of the randomness test based on the average clustering coefficient C. Thus, the univariate and bivariate recurrence network measures provide independent information regarding the dynamics of the focal and nonfocal EEG signals. In conclusion, recurrence network analysis combined with nonstationary surrogates can be applied to derive reliable biomarkers to distinguish between epileptogenic and nonepileptogenic brain areas using EEG signals.

  19. Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.

    PubMed

    Gilson, Matthieu

    2018-04-01

    Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates of the local variability (input variances) to explain how the observed FC is generated. More generally, the network dynamics can be studied in the context of an input-output mapping, determined by EC, for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data (movie viewing versus resting state) illustrates that changes in local variability and changes in brain coordination go hand in hand.
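
    The core input-output mapping of such noise-diffusion models can be sketched with a linear network whose stationary covariance solves a Lyapunov equation (a toy 3-node example with made-up connectivity and input variances, not the paper's fitted model):

```python
import numpy as np

# Stationary covariance of a linear noise-diffusion network
#   dx = A x dt + dW,  Cov(dW) = Sigma dt,
# solves the Lyapunov equation A P + P A^T + Sigma = 0. Here P plays the
# role of the (model) FC, the effective connectivity enters A, and the
# input variances sit on the diagonal of Sigma.
def stationary_cov(A, Sigma):
    n = A.shape[0]
    I = np.eye(n)
    # Row-major vectorization: vec(B P C^T) = (B kron C) vec(P).
    M = np.kron(A, I) + np.kron(I, A)
    return np.linalg.solve(M, -Sigma.reshape(-1)).reshape(n, n)

A = np.array([[-1.0, 0.3, 0.0],
              [0.0, -1.0, 0.4],
              [0.2, 0.0, -1.0]])     # stable: eigenvalues in left half-plane
Sigma = np.diag([1.0, 0.5, 0.8])     # local input variances
P = stationary_cov(A, Sigma)
print(np.allclose(A @ P + P @ A.T + Sigma, 0))
```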

  20. MiRNA and TF co-regulatory network analysis for the pathology and recurrence of myocardial infarction.

    PubMed

    Lin, Ying; Sibanda, Vusumuzi Leroy; Zhang, Hong-Mei; Hu, Hui; Liu, Hui; Guo, An-Yuan

    2015-04-13

    Myocardial infarction (MI) is a leading cause of death in the world and many genes are involved in it. Transcription factors (TFs) and microRNAs (miRNAs) are key regulators of gene expression. We hypothesized that miRNAs and TFs might play combinatory regulatory roles in MI. After collecting MI candidate genes and miRNAs from various resources, we constructed a comprehensive MI-specific miRNA-TF co-regulatory network by integrating predicted and experimentally validated TF and miRNA targets. We found that some hub nodes (e.g. miR-16 and miR-26) in this network are important regulators, and that the network can serve as a bridge to interpret the associations among previous results, as shown by the case of miR-29 in this study. We also constructed a regulatory network for MI recurrence and found several important genes (e.g. DAB2, BMP6, miR-320 and miR-103), whose abnormal expression may reflect regulatory mechanisms of, and serve as markers for, MI recurrence. Finally, we propose a cellular model to discuss major TF and miRNA regulators with signaling pathways in MI. This study provides more detail on gene expression regulation and the regulators involved in MI progression and recurrence, and it links and interprets many previous results.

  1. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    PubMed

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks, and much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, from which simpler analytic descriptions are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines firing rate heterogeneity in various settings.

  2. Gap junction plasticity as a mechanism to regulate network-wide oscillations

    PubMed Central

    Nicola, Wilten; Clopath, Claudia

    2018-01-01

    Cortical oscillations are thought to be involved in many cognitive functions and processes. Several mechanisms have been proposed to regulate oscillations. One prominent but understudied mechanism is gap junction coupling. Gap junctions are ubiquitous in cortex between GABAergic interneurons. Moreover, recent experiments indicate their strength can be modified in an activity-dependent manner, similar to chemical synapses. We hypothesized that activity-dependent gap junction plasticity acts as a mechanism to regulate oscillations in the cortex. We developed a computational model of gap junction plasticity in a recurrent cortical network based on recent experimental findings. We showed that gap junction plasticity can serve as a homeostatic mechanism for oscillations by maintaining a tight balance between two network states: asynchronous irregular activity and synchronized oscillations. This homeostatic mechanism allows for robust communication between neuronal assemblies through two different mechanisms: transient oscillations and frequency modulation. This implies a direct functional role for gap junction plasticity in information transmission in cortex. PMID:29529034

  3. Activity-dependent stochastic resonance in recurrent neuronal networks

    NASA Astrophysics Data System (ADS)

    Volman, Vladislav

    2009-03-01

    An important source of noise for neuronal networks is the stochastic nature of synaptic transmission. In particular, spontaneous asynchronous release of neurotransmitter can occur at a rate that is strongly dependent on the presynaptic Ca2+ concentration and hence on the rate of spike-induced Ca2+ entry. Here it is shown that this noise can lead to a new form of stochastic resonance for local circuits consisting of roughly 100 neurons (a "microcolumn") coupled via noisy plastic synapses. Furthermore, due to the plastic coupling and the activity-dependent noise component, the detection of weak stimuli also depends on the structure of the stimuli themselves. In addition, the circuit can exhibit short-term memory, by which we mean that spiking continues for a transient period following removal of the stimulus. These results can be directly tested in experiments on cultured networks.

  4. Different propagation speeds of recalled sequences in plastic spiking neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas and are crucial, e.g., for the coding of episodic memory in the hippocampus or for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation in coordinates of the retinotopically organized neural tissue was constant during retrieval, regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other, and how network and learning parameters influence retrieval speeds, however, is not well described. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1, since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in experiments.

  5. Frame prediction using recurrent convolutional encoder with residual learning

    NASA Astrophysics Data System (ADS)

    Yue, Boxuan; Liang, Jun

    2018-05-01

    Predicting the upcoming frames of a video is difficult but urgently needed in autonomous driving. Conventional methods can only predict abstract trends in the region of interest; the boom of deep learning makes frame prediction possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder structure to solve gradient issues. Residual learning transforms the gradient back-propagation into an identity mapping, which preserves the full gradient information and overcomes the gradient issues in Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Besides, compared with the branches in CNNs and the gated structures in RNNs, residual learning significantly reduces training time. In the experiments, we train our networks on the UCF101 dataset and compare the predictions with some state-of-the-art methods. The results show that our networks can predict frames quickly and efficiently. Furthermore, our networks are applied to driving video to verify their practicability.

  6. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    PubMed

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicit multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicit multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual perception. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.

  7. Recurrence network measures for hypothesis testing using surrogate data: Application to black hole light curves

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2018-01-01

    Recurrence networks and the associated statistical measures have become important tools in the analysis of time series data. In this work, we test how effective the recurrence network measures are in analyzing real-world data involving the two main types of noise, white noise and colored noise. We use two prominent network measures as discriminating statistics for hypothesis testing with surrogate data, for the specific null hypothesis that the data are derived from a linear stochastic process. We show that the characteristic path length is an especially efficient discriminating measure, with conclusions that remain reasonably accurate even with a limited number of data points in the time series. We also highlight an additional advantage of the network approach in identifying the dimensionality of the system underlying the time series, through a convergence measure derived from the probability distribution of the local clustering coefficients. As examples of real-world data, we use the light curves from a prominent black hole system and show that a combined analysis using three primary network measures can provide vital information regarding the nature of the temporal variability of light curves from different spectroscopic classes.
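
    The characteristic path length used as a discriminating statistic above is straightforward to compute from an adjacency matrix via breadth-first search (a generic sketch of the measure itself; the surrogate-testing pipeline is not reproduced):

```python
from collections import deque
import numpy as np

def char_path_length(A):
    """Characteristic path length: mean shortest-path distance over all
    connected node pairs of an undirected graph with adjacency matrix A."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:                                  # BFS from source s
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += len(dist) - 1
    return total / pairs if pairs else float("inf")

# A 5-node ring: every other node is 1 or 2 steps away, so L = 1.5.
ring = np.zeros((5, 5), dtype=int)
for i in range(5):
    ring[i, (i + 1) % 5] = ring[(i + 1) % 5, i] = 1
print(char_path_length(ring))  # 1.5
```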

  8. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework

    PubMed Central

    Wang, Xiao-Jing

    2016-01-01

    The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity patterns and behavior that can be modeled, and suggest a unified setting in which diverse cognitive computations and mechanisms can be studied. PMID:26928718
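
    The key constraint in such frameworks, Dale's principle, can be enforced with a simple sign-constrained parameterization (a common construction sketched here in numpy; the paper's Theano implementation is not reproduced): optimize an unconstrained matrix, but pass it through a rectification and a fixed sign matrix so every unit's outgoing weights share one sign.

```python
import numpy as np

# W_raw is the unconstrained matrix that gradient descent would update.
# The effective weights are W = relu(W_raw) @ D, where D's diagonal marks
# each unit as excitatory (+1) or inhibitory (-1). With columns as
# presynaptic units, every unit's outgoing weights then keep one sign,
# whatever the optimizer does to W_raw.
rng = np.random.default_rng(1)
n_exc, n_inh = 8, 2
D = np.diag([1.0] * n_exc + [-1.0] * n_inh)
W_raw = rng.standard_normal((n_exc + n_inh, n_exc + n_inh))

W = np.maximum(W_raw, 0.0) @ D   # nonnegative magnitudes times fixed signs
print((W[:, :n_exc] >= 0).all() and (W[:, n_exc:] <= 0).all())
```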

  9. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    neural network and the feedforward neural network studied is the single layer perceptron artificial neural network. The recurrent artificial neural network input...features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input

  10. Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks

    PubMed Central

    2018-01-01

    Abstract Brain computations depend on how neurons transform inputs to spike outputs. Here, to understand input-output transformations in cortical networks, we recorded spiking responses from visual cortex (V1) of awake mice of either sex while pairing sensory stimuli with optogenetic perturbation of excitatory and parvalbumin-positive inhibitory neurons. We found that V1 neurons’ average responses were primarily additive (linear). We used a recurrent cortical network model to determine whether these data, as well as past observations of nonlinearity, could be described by a common circuit architecture. Simulations showed that cortical input-output transformations can be changed from linear to sublinear with moderate (∼20%) strengthening of connections between inhibitory neurons, but this change away from linear scaling depends on the presence of feedforward inhibition. Simulating a variety of recurrent connection strengths showed that, compared with when input arrives only to excitatory neurons, networks produce a wider range of output spiking responses in the presence of feedforward inhibition. PMID:29682603

  11. Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks.

    PubMed

    Saad, E W; Prokhorov, D V; Wunsch, D C

    1998-01-01

    Three networks are compared for low false alarm stock trend predictions. Short-term trends, particularly attractive for neural network analysis, can be used profitably in scenarios such as option trading, but only with significant risk. Therefore, we focus on limiting false alarms, which improves the risk/reward ratio by preventing losses. To predict stock trends, we exploit time delay, recurrent, and probabilistic neural networks (TDNN, RNN, and PNN, respectively), utilizing conjugate gradient and multistream extended Kalman filter training for TDNN and RNN. We also discuss different predictability analysis techniques and perform an analysis of predictability based on a history of daily closing price. Our results indicate that all the networks are feasible, the primary preference being one of convenience.

  12. A recurrent network mechanism of time integration in perceptual decisions.

    PubMed

    Wong, Kong-Fatt; Wang, Xiao-Jing

    2006-01-25

    Recent physiological studies using behaving monkeys revealed that, in a two-alternative forced-choice visual motion discrimination task, reaction time was correlated with ramping of spike activity of lateral intraparietal cortical neurons. The ramping activity appears to reflect temporal accumulation, on a timescale of hundreds of milliseconds, of sensory evidence before a decision is reached. To elucidate the cellular and circuit basis of such integration times, we developed and investigated a simplified two-variable version of a biophysically realistic cortical network model of decision making. In this model, slow time integration can be achieved robustly if excitatory reverberation is primarily mediated by NMDA receptors; our model with only fast AMPA receptors at recurrent synapses produces decision times that are not comparable with experimental observations. Moreover, we found two distinct modes of network behavior, in which decision computation by winner-take-all competition is instantiated with or without attractor states for working memory. Decision process is closely linked to the local dynamics, in the "decision space" of the system, in the vicinity of an unstable saddle steady state that separates the basins of attraction for the two alternative choices. This picture provides a rigorous and quantitative explanation for the dependence of performance and response time on the degree of task difficulty, and the reason for which reaction times are longer in error trials than in correct trials as observed in the monkey experiment. Our reduced two-variable neural model offers a simple yet biophysically plausible framework for studying perceptual decision making in general.
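The flavor of such a reduced two-variable attractor model can be conveyed with a toy winner-take-all system; the parameters below are invented for illustration and are not the calibrated values of the paper's reduction:

```python
import numpy as np

# Toy two-variable winner-take-all dynamics (illustrative parameters,
# not the paper's calibrated reduced model). s[0], s[1] play the role of
# slow synaptic gating variables of two competing selective populations.
def simulate(coherence, tau=0.1, dt=1e-3, T=2.0, noise=0.02, seed=0):
    rng = np.random.default_rng(seed)
    s = np.array([0.1, 0.1])
    for _ in range(int(T / dt)):
        # Self-excitation, effective cross-inhibition, and stimulus input
        # biased by motion coherence toward population 1.
        inp = np.array([0.5 + coherence, 0.5 - coherence])
        rate = np.maximum(2.0 * s - 1.5 * s[::-1] + inp, 0.0)  # threshold-linear
        ds = (-s + (1.0 - s) * rate) / tau
        s = np.clip(s + dt * ds + noise * np.sqrt(dt) * rng.standard_normal(2),
                    0.0, 1.0)
    return s

s1, s2 = simulate(coherence=0.2)
```

With positive coherence the first population typically wins the competition; near the unstable saddle separating the two attractors, noise fluctuations shape choice and decision time, which is the regime the abstract describes.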

  13. Long-term Recurrent Convolutional Networks for Visual Recognition and Description

    DTIC Science & Technology

    2014-11-17

    ...models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large...limitation of simple RNN models which strictly integrate state information over time is known as the “vanishing gradient” effect: the ability to

  14. A novel prosodic-information synthesizer based on recurrent fuzzy neural network for the Chinese TTS system.

    PubMed

    Lin, Chin-Teng; Wu, Rui-Cheng; Chang, Jyh-Yeong; Liang, Sheng-Fu

    2004-02-01

    In this paper, a new technique for the Chinese text-to-speech (TTS) system is proposed. Our major effort focuses on prosodic information generation. New methodologies for constructing fuzzy rules in a prosodic model simulating humans' pronunciation rules are developed. The proposed recurrent fuzzy neural network (RFNN) is a multilayer recurrent neural network (RNN) which integrates a Self-Constructing Neural Fuzzy Inference Network (SONFIN) into a recurrent connectionist structure. The RFNN can be functionally divided into two parts. The first part adopts the SONFIN as a prosodic model to explore the relationship between high-level linguistic features and prosodic information based on fuzzy inference rules. Compared to conventional neural networks, the SONFIN can always construct itself with an economical network size and high learning speed. The second part employs a five-layer network to generate all prosodic parameters by directly using the prosodic fuzzy rules inferred by the first part as well as other important features of syllables. The TTS system combined with the proposed method can realize not only sandhi rules but also other prosodic phenomena found in traditional TTS systems. Moreover, the proposed scheme can even discover new rules about prosodic phrase structure. The performance of the proposed RFNN-based prosodic model is verified by embedding it into a Chinese TTS system with a Chinese monosyllable database based on the time-domain pitch synchronous overlap add (TD-PSOLA) method. Our experimental results show that the proposed RFNN can generate proper prosodic parameters including pitch means, pitch shapes, maximum energy levels, syllable durations, and pause durations. Some synthetic sounds are available online for demonstration.

  15. On-line training of recurrent neural networks with continuous topology adaptation.

    PubMed

    Obradovic, D

    1996-01-01

    This paper presents an online procedure for training dynamic neural networks with input-output recurrences whose topology is continuously adjusted to the complexity of the target system dynamics. This is accomplished by changing the number of elements in the network hidden layer whenever the existing topology cannot capture the dynamics presented by the new data. The training mechanism is based on a suitably altered extended Kalman filter (EKF) algorithm which is simultaneously used for network parameter adjustment and for state estimation. The network consists of a single hidden layer with Gaussian radial basis functions (GRBFs) and a linear output layer. The choice of GRBFs is induced by the requirements of online learning: it demands a network architecture in which a new data point has only local influence, so that the previously learned dynamics are not forgotten. The continuous topology adaptation is implemented in our algorithm to avoid the memory and computational problems of using a regular grid of GRBFs covering the network input space. Furthermore, we show that the resulting parameter increase can be handled "smoothly" without interfering with the already acquired information. If the target system dynamics change over time, we show that a suitable forgetting factor can be used to "unlearn" the no-longer-relevant dynamics. The quality of the recurrent network training algorithm is demonstrated on the identification of nonlinear dynamic systems.

  16. Long-Term Memory Stabilized by Noise-Induced Rehearsal

    PubMed Central

    Wei, Yi

    2014-01-01

    Cortical networks can maintain memories for decades despite the short lifetime of synaptic strengths. Can a neural network store long-lasting memories in unstable synapses? Here, we study the effects of ongoing spike-timing-dependent plasticity (STDP) on the stability of memory patterns stored in synapses of an attractor neural network. We show that certain classes of STDP rules can stabilize all stored memory patterns despite a short lifetime of synapses. In our model, unstructured neural noise, after passing through the recurrent network connections, carries the imprint of all memory patterns in temporal correlations. STDP, combined with these correlations, leads to reinforcement of all stored patterns, even those that are never explicitly visited. Our findings may provide the functional reason for irregular spiking displayed by cortical neurons and justify models of system memory consolidation. Therefore, we propose that irregular neural activity is the feature that helps cortical networks maintain stable connections. PMID:25411507
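A pair-based STDP window of the kind underlying such rehearsal effects can be sketched as follows (amplitudes and time constants are illustrative defaults, not the rule class analyzed in the paper):

```python
import math

# Pair-based STDP window (illustrative defaults): potentiation when the
# presynaptic spike precedes the postsynaptic one, depression otherwise,
# each decaying exponentially with the spike-time difference.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for dt_ms = t_post - t_pre (milliseconds)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)    # pre -> post: LTP
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)  # post -> pre: LTD
    return 0.0
```

Applied to correlated noise passing through recurrent connections, windows of this shape can reinforce the patterns imprinted in those correlations, which is the mechanism the abstract proposes.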

  17. Satisfiability of logic programming based on radial basis function neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged

    2014-07-10

    In this paper, we propose a new technique to test the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic with exactly three variables in each clause. We used the prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean sum-squared error function is used to measure the activity of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.
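A Gaussian radial basis function layer of the kind used here can be sketched as follows; the centers, widths, and output weights below are placeholders (in the paper they come from K-means clustering and the prey-predator algorithm, neither of which is reproduced):

```python
import numpy as np

# Gaussian RBF hidden layer with a linear readout. Centers, widths and
# output weights are hypothetical values for illustration only.
def rbf_forward(x, centers, widths, w_out):
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)  # squared distances
    h = np.exp(-d2 / (2.0 * widths ** 2))           # Gaussian activations
    return h @ w_out                                # linear output layer

# Two hidden units in a 2-D input space.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([1.0, 1.0])
w_out = np.array([1.0, 2.0])
y = rbf_forward(np.array([0.0, 0.0]), centers, widths, w_out)
```

Because the hidden activations depend only on distance to the centers, each data point mainly influences nearby units, the locality property that also motivates GRBFs in the online-training record above.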

  18. A recurrent neural network for solving bilevel linear programming problem.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.

  19. Optogenetic perturbations reveal the dynamics of an oculomotor integrator

    PubMed Central

    Gonçalves, Pedro J.; Arrenberg, Aristides B.; Hablitzel, Bastian; Baier, Herwig; Machens, Christian K.

    2014-01-01

    Many neural systems can store short-term information in persistently firing neurons. Such persistent activity is believed to be maintained by recurrent feedback among neurons. This hypothesis has been fleshed out in detail for the oculomotor integrator (OI) for which the so-called “line attractor” network model can explain a large set of observations. Here we show that there is a plethora of such models, distinguished by the relative strength of recurrent excitation and inhibition. In each model, the firing rates of the neurons relax toward the persistent activity states. The dynamics of relaxation can be quite different, however, and depend on the levels of recurrent excitation and inhibition. To identify the correct model, we directly measure these relaxation dynamics by performing optogenetic perturbations in the OI of zebrafish expressing halorhodopsin or channelrhodopsin. We show that instantaneous, inhibitory stimulations of the OI lead to persistent, centripetal eye position changes ipsilateral to the stimulation. Excitatory stimulations similarly cause centripetal eye position changes, yet only contralateral to the stimulation. These results show that the dynamics of the OI are organized around a central attractor state—the null position of the eyes—which stabilizes the system against random perturbations. Our results pose new constraints on the circuit connectivity of the system and provide new insights into the mechanisms underlying persistent activity. PMID:24616666

  20. Low-complexity nonlinear adaptive filter based on a pipelined bilinear recurrent neural network.

    PubMed

    Zhao, Haiquan; Zeng, Xiangping; He, Zhengyou

    2011-09-01

    To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since the modules of the PBLRNN can be executed simultaneously in a pipelined parallel fashion, a significant improvement in computational efficiency results. Moreover, owing to the nesting of modules, the performance of the PBLRNN can be further improved. To suit the modular architecture, a modified adaptive amplitude real-time recurrent learning algorithm is derived using the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance than the single BLRNN and RNN models.

  1. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Identification and Control of Aircrafts using Multiple Models and Adaptive Critics

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2007-01-01

    We compared two possible implementations of local linear models for control: one approach is based on a self-organizing map (SOM) to cluster the dynamics, followed by a set of linear models operating at each cluster. The gating function is therefore hard (a single local model represents the regional dynamics). This simplifies the controller design since there is a one-to-one mapping between controllers and local models. The second approach uses a soft gate within a probabilistic framework based on a Gaussian mixture model (GMM, also called a dynamic mixture of experts). In this approach several models may be active at a given time; we can expect a smaller number of models, but the controller design is more involved, with potentially better noise-rejection characteristics. Our experiments showed that the SOM provides the best overall performance at high SNRs, but its performance degrades faster than the GMM's under the same noise conditions. The SOM approach required about an order of magnitude more models than the GMM, so in terms of implementation cost the GMM is preferable. The design of the SOM is straightforward, while the design of the GMM controllers, although still reasonable, is more involved and needs more care in the selection of parameters. Either of these locally linear approaches outperforms global nonlinear controllers based on neural networks, such as the time delay neural network (TDNN). In essence, therefore, the local model approach warrants practical implementation. To call the attention of the control community to this design methodology, we successfully extended the multiple-model approach to PID controllers (still the most widely used control scheme in industry today) and wrote a paper on this subject. The echo state network (ESN) is a recurrent neural network with the special characteristic that only the output parameters are trained. The recurrent connections are preset according to the problem domain and are fixed. In a nutshell, the states of the reservoir of recurrent processing elements implement a projection space onto which the desired response is optimally projected. This architecture gains training efficiency at the cost of a large increase in the dimension of the recurrent layer; in exchange, the power of recurrent neural networks can be brought to bear on practically difficult problems. Our goal was to implement an adaptive critic architecture realizing Bellman's approach to optimal control. However, we could only characterize the ESN's performance as a critic in value-function evaluation, which is just one piece of the overall adaptive critic controller. The results were very convincing, and the simplicity of the implementation was unparalleled.

  3. Oscillatory activity in neocortical networks during tactile discrimination near the limit of spatial acuity.

    PubMed

    Adhikari, Bhim M; Sathian, K; Epstein, Charles M; Lamichhane, Bidhan; Dhamala, Mukesh

    2014-05-01

    Oscillatory interactions within functionally specialized but distributed brain regions are believed to be central to perceptual and cognitive functions. Here, using human scalp electroencephalography (EEG) recordings combined with source reconstruction techniques, we study how oscillatory activity functionally organizes different neocortical regions during a tactile discrimination task near the limit of spatial acuity. While undergoing EEG recordings, blindfolded participants felt a linear three-dot array presented electromechanically, under computer control, and reported whether the central dot was offset to the left or right. The average brain response differed significantly for trials with correct and incorrect perceptual responses in the timeframe approximately between 130 and 175 ms. During trials with correct responses, source-level peak activity appeared in the left primary somatosensory cortex (SI) at around 45 ms, in the right lateral occipital complex (LOC) at 130 ms, in the right posterior intraparietal sulcus (pIPS) at 160 ms, and finally in the left dorsolateral prefrontal cortex (dlPFC) at 175 ms. Spectral interdependency analysis of activity in these nodes showed two distinct distributed networks, a dominantly feedforward network in the beta band (12-30 Hz) that included all four nodes and a recurrent network in the gamma band (30-100 Hz) that linked SI, pIPS and dlPFC. Measures of network activity in both bands were correlated with the accuracy of task performance. These findings suggest that beta and gamma band oscillatory networks coordinate activity between neocortical regions mediating sensory and cognitive processing to arrive at tactile perceptual decisions. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Artificial intelligence for predicting recurrence-free probability of non-invasive high-grade urothelial bladder cell carcinoma.

    PubMed

    Cai, Tommaso; Conti, Gloria; Nesi, Gabriella; Lorenzini, Matteo; Mondaini, Nicola; Bartoletti, Riccardo

    2007-10-01

    The objective of our study was to define a neural network for predicting recurrence and progression-free probability in patients affected by recurrent pTaG3 urothelial bladder cancer for use in everyday clinical practice. Among all patients who had undergone transurethral resection for bladder tumors, 143 were finally selected and enrolled. Four follow-ups for recurrence, progression or survival were performed at 6, 9, 12 and 108 months. The data were analyzed using the commercially available software program NeuralWorks Predict. These data were compared with univariate and multivariate analysis results. The use of Artificial Neural Networks (ANN) in recurrent pTaG3 patients showed a sensitivity of 81.67% and specificity of 95.87% in predicting recurrence-free status after transurethral resection of bladder tumor at 12 months of follow-up. Statistical and ANN analyses allowed selection of the number of lesions (multiple, HR=3.31, p=0.008) and the previous recurrence rate (≥2/year, HR=3.14, p=0.003) as the most influential variables affecting the output decision in predicting the natural history of recurrent pTaG3 urothelial bladder cancer. ANN applications also included selection of the previous adjuvant therapy. We demonstrated the feasibility and reliability of ANN applications in everyday clinical practice, reporting good recurrence-prediction performance. The study identified a single subgroup of pTaG3 patients, with multiple lesions, a recurrence rate of ≥2/year and no response to previous Bacille Calmette-Guérin adjuvant therapy, who seem to be at high risk of recurrence.

  5. Dynamic stability analysis of fractional order leaky integrator echo state neural networks

    NASA Astrophysics Data System (ADS)

    Pahnehkolaei, Seyed Mehdi Abedi; Alfi, Alireza; Tenreiro Machado, J. A.

    2017-06-01

    The Leaky integrator echo state neural network (Leaky-ESN) is an improved model of the recurrent neural network (RNN) and adopts an interconnected recurrent grid of processing neurons. This paper presents a new proof for the convergence of a Lyapunov candidate function to zero when time tends to infinity by means of the Caputo fractional derivative with order lying in the range (0, 1). The stability of Fractional-Order Leaky-ESN (FO Leaky-ESN) is then analyzed, and the existence, uniqueness and stability of the equilibrium point are provided. A numerical example demonstrates the feasibility of the proposed method.

  6. Growth dynamics explain the development of spatiotemporal burst activity of young cultured neuronal networks in detail.

    PubMed

    Gritsun, Taras A; le Feber, Joost; Rutten, Wim L C

    2012-01-01

    A typical property of isolated cultured neuronal networks of dissociated rat cortical cells is synchronized spiking, called bursting, starting about one week after plating, when the dissociated cells have sufficiently extended their neurites and formed enough synaptic connections. This paper is the third in a series of three on simulation models of cultured networks. Our two previous studies [26], [27] have shown that random recurrent network activity models generate intra- and inter-bursting patterns similar to experimental data. The networks were noise- or pacemaker-driven and had Izhikevich neuronal elements with only short-term plastic (STP) synapses (so no long-term potentiation, LTP, or depression, LTD, was included). However, elevated pre-phases (burst leaders) and after-phases of burst main shapes, which usually arise during the development of the network, were not yet simulated in sufficient detail. This lack of detail may be due to the fact that the random models completely lacked a network topology and a growth model. Therefore, the present paper adds, for the first time, a growth model to the activity model, to give the network a time-dependent topology and to explain burst shapes in more detail, again without LTP or LTD mechanisms. The integrated growth-activity model yielded realistic bursting patterns. The automatic adjustment of various mutually interdependent network parameters is one of the major advantages of our current approach. Spatio-temporal bursting activity was validated against experiment. Depending on network size, wave reverberation mechanisms were seen along the network boundaries, which may explain the generation of phases of elevated firing before and after the main phase of the burst shape. In summary, the results show that adding topology and growth explains burst shapes in great detail and suggests that young networks still lack, or do not need, LTP or LTD mechanisms.
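The Izhikevich neuronal elements mentioned above follow a standard two-variable model; below is a minimal single-neuron Euler-integration sketch with the published regular-spiking parameters (the network couplings and STP synapses of the paper are omitted):

```python
# Izhikevich (2003) neuron, regular-spiking parameters, forward Euler.
# v: membrane potential (mV); u: recovery variable; I: input current.
def izhikevich(I, T=200.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike cut-off, then reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich(I=10.0)  # tonic firing with spike-frequency adaptation
```

With I=0 the neuron stays at rest; a sustained I=10 produces repetitive spiking whose rate adapts as u accumulates, one ingredient shaping intra-burst firing in such network models.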

  7. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

    NASA Astrophysics Data System (ADS)

    Rußwurm, Marc; Körner, Marco

    2018-03-01

    Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal, along with spectral and spatial, features. Domains such as speech recognition or neural machine translation work with inherently temporal data and today achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud filtering as a preprocessing step in many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved state-of-the-art classification accuracies on a large number of crop classes with minimal preprocessing compared to other classification approaches.

  8. Differential Modulation of Spontaneous and Evoked Thalamocortical Network Activity by Acetylcholine Level In Vitro

    PubMed Central

    Wester, Jason C.

    2013-01-01

    Different levels of cholinergic neuromodulatory tone have been hypothesized to set the state of cortical circuits either to one dominated by local cortical recurrent activity (low ACh) or to one dependent on thalamic input (high ACh). High ACh levels depress intracortical but facilitate thalamocortical synapses, whereas low levels potentiate intracortical synapses. Furthermore, recent work has implicated the thalamus in controlling cortical network state during waking and attention, when ACh levels are highest. To test this hypothesis, we used rat thalamocortical slices maintained in medium to generate spontaneous up- and down-states and applied different ACh concentrations to slices in which thalamocortical connections were either maintained or severed. The effects on spontaneous and evoked up-states were measured using voltage-sensitive dye imaging, intracellular recordings, local field potentials, and single/multiunit activity. We found that high ACh can increase the frequency of spontaneous up-states, but reduces their duration in slices with intact thalamocortical connections. Strikingly, when thalamic connections are severed, high ACh instead greatly reduces or abolishes spontaneous up-states. Furthermore, high ACh reduces the spatial propagation, velocity, and depolarization amplitude of evoked up-states. In contrast, low ACh dramatically increases up-state frequency regardless of the presence or absence of intact thalamocortical connections and does not reduce the duration, spatial propagation, or velocity of evoked up-states. Therefore, our data support the hypothesis that strong cholinergic modulation increases the influence, and thus the signal-to-noise ratio, of afferent input over local cortical activity and that lower cholinergic tone enhances recurrent cortical activity regardless of thalamic input. PMID:24198382

  9. Learning Orthographic Structure With Sequential Generative Neural Networks.

    PubMed

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.

  10. Generalised Transfer Functions of Neural Networks

    NASA Astrophysics Data System (ADS)

    Fung, C. F.; Billings, S. A.; Zhang, H.

    1997-11-01

    When artificial neural networks are used to model non-linear dynamical systems, the system structure, which can be extremely useful for analysis and design, is buried within the network architecture. In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights. The derivation of the algorithm is established on the basis of the Taylor series expansion of the activation functions used in a particular neural network. This leads to a representation which is equivalent to the non-linear recursive polynomial model and enables the derivation of the transfer functions to be based on the harmonic expansion method. By mapping the neural network into the frequency domain, information about the structure of the underlying non-linear system can be recovered. Numerical examples are included to demonstrate the application of the new algorithm. These examples show that the frequency response functions appear to be highly sensitive to the network topology and training, and that the time domain properties fail to reveal deficiencies in the trained network structure.

  11. Collective stochastic coherence in recurrent neuronal networks

    NASA Astrophysics Data System (ADS)

    Sancristóbal, Belén; Rebollo, Beatriz; Boada, Pol; Sanchez-Vives, Maria V.; Garcia-Ojalvo, Jordi

    2016-09-01

    Recurrent networks of dynamic elements frequently exhibit emergent collective oscillations, which can show substantial regularity even when the individual elements are considerably noisy. How noise-induced dynamics at the local level coexists with regular oscillations at the global level is still unclear. Here we show that a combination of stochastic recurrence-based initiation with deterministic refractoriness in an excitable network can reconcile these two features, leading to maximum collective coherence for an intermediate noise level. We report this behaviour in the slow oscillation regime exhibited by a cerebral cortex network under dynamical conditions resembling slow-wave sleep and anaesthesia. Computational analysis of a biologically realistic network model reveals that an intermediate level of background noise leads to quasi-regular dynamics. We verify this prediction experimentally in cortical slices subject to varying amounts of extracellular potassium, which modulates neuronal excitability and thus synaptic noise. The model also predicts that this effectively regular state should exhibit noise-induced memory of the spatial propagation profile of the collective oscillations, which is also verified experimentally. Taken together, these results allow us to construe the high regularity observed experimentally in the brain as an instance of collective stochastic coherence.

  12. Primacy Versus Recency in a Quantitative Model: Activity Is the Critical Distinction

    PubMed Central

    Greene, Anthony J.; Prepscius, Colin; Levy, William B.

    2000-01-01

    Behavioral and neurobiological evidence shows that primacy and recency are subserved by memory systems for intermediate- and short-term memory, respectively. A widely accepted explanation of recency is that in short-term memory, new learning overwrites old learning. Primacy is not as well understood, but many hypotheses contend that initial items are better encoded into long-term memory because they have had more opportunity to be rehearsed. A simple, biologically motivated neural network model supports an alternative hypothesis of the distinct processing requirements for primacy and recency given single-trial learning without rehearsal. Simulations of the model exhibit either primacy or recency, but not both simultaneously. The incompatibility of primacy and recency clarifies possible reasons for two neurologically distinct systems. Inhibition, and its control of activity, determines those list items that are acquired and retained. Activity levels that are too low do not provide sufficient connections for learning to occur, while higher activity diminishes capacity. High recurrent inhibition, and progressively diminishing activity, allows acquisition and retention of early items, while later items are never acquired. Conversely, low recurrent inhibition, and the resulting high activity, allows continuous acquisition such that acquisition of later items eventually interferes with the retention of early items. PMID:10706602

  13. Evaluation of selected recurrence measures in discriminating pre-ictal and inter-ictal periods from epileptic EEG data

    NASA Astrophysics Data System (ADS)

    Ngamga, Eulalie Joelle; Bialonski, Stephan; Marwan, Norbert; Kurths, Jürgen; Geier, Christian; Lehnertz, Klaus

    2016-04-01

    We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database.
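    Recurrence quantification analysis of the kind used above starts from a binary recurrence matrix of the signal. The sketch below is a minimal illustration for a scalar series, with no delay embedding and an arbitrarily chosen threshold `eps`; it is not the authors' pipeline.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when states i and j are
    closer than eps (scalar series, no embedding -- a simplification)."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (d < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent pairs, excluding the trivial main diagonal."""
    n = R.shape[0]
    return (R.sum() - n) / (n * (n - 1))

# A noisy-free periodic signal recurs often, so its recurrence rate is
# well above zero.
t = np.linspace(0, 8 * np.pi, 200)
x = np.sin(t)
R = recurrence_matrix(x, eps=0.1)
print(recurrence_rate(R))
```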

  14. Learning Universal Computations with Spikes

    PubMed Central

    Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin

    2016-01-01

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require the previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. First, we derive constraints under which classes of spiking neural networks lend themselves as substrates for powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows the networks to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381

  15. The mechanics of state dependent neural correlations

    PubMed Central

    Doiron, Brent; Litwin-Kumar, Ashok; Rosenbaum, Robert; Ocker, Gabriel K.; Josić, Krešimir

    2016-01-01

    Simultaneous recordings from large neural populations are becoming increasingly common. An important feature of the population activity is the trial-to-trial correlated fluctuations of the spike train outputs of recorded neuron pairs. Like the firing rate of single neurons, correlated activity can be modulated by a number of factors, from changes in arousal and attentional state to learning and task engagement. However, the network mechanisms that underlie these changes are not fully understood. We review recent theoretical results that identify three separate biophysical mechanisms that modulate spike train correlations: changes in input correlations, internal fluctuations, and the transfer function of single neurons. We first examine these mechanisms in feedforward pathways and then show how the same approach can explain the modulation of correlations in recurrent networks. Such mechanistic constraints on the modulation of population activity will be important in statistical analyses of high-dimensional neural data. PMID:26906505

  16. Estimating network parameters from combined dynamics of firing rate and irregularity of single neurons.

    PubMed

    Hamaguchi, Kosuke; Riehle, Alexa; Brunel, Nicolas

    2011-01-01

    High firing irregularity is a hallmark of cortical neurons in vivo, and modeling studies suggest a balance of excitation and inhibition is necessary to explain this high irregularity. Such a balance must be generated, at least partly, by local interconnected networks of excitatory and inhibitory neurons, but the details of the local network structure are largely unknown. The dynamics of the neural activity depends on the local network structure; this in turn suggests the possibility of estimating network structure from the dynamics of the firing statistics. Here we report a new method to estimate properties of the local cortical network from the instantaneous firing rate and irregularity (CV(2)) under the assumption that recorded neurons are part of a randomly connected sparse network. The firing irregularity, measured in monkey motor cortex, exhibits two features: many neurons show relatively stable firing irregularity in time and across different task conditions, and the time-averaged CV(2) is widely distributed from quasi-regular to irregular (CV(2) = 0.3-1.0). For each recorded neuron, we estimate the three parameters of a local network [balance of local excitation-inhibition, number of recurrent connections per neuron, and excitatory postsynaptic potential (EPSP) size] that best describe the dynamics of the measured firing rates and irregularities. Our analysis shows that the optimal parameter sets form a two-dimensional manifold in the three-dimensional parameter space that is confined, for most of the neurons, to the inhibition-dominated region. High-irregularity neurons tend to be more strongly connected to the local network than low-irregularity neurons, either in terms of larger EPSP and inhibitory PSP size or a larger number of recurrent connections, for a given excitatory/inhibitory balance. Incorporating either synaptic short-term depression or conductance-based synapses leads many low-CV(2) neurons to move to the excitation-dominated region, as well as to an increase in EPSP size.
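    The CV(2) irregularity measure used above has a standard definition (Holt et al., 1996): the average of 2|I(n+1) - I(n)| / (I(n+1) + I(n)) over successive interspike intervals I(n). A minimal sketch, using synthetic spike trains rather than the recorded data:

```python
import numpy as np

def cv2(isis):
    """Mean CV(2) of an interspike-interval sequence (Holt et al., 1996):
    2|I[n+1] - I[n]| / (I[n+1] + I[n]), averaged over successive pairs."""
    isis = np.asarray(isis, dtype=float)
    return np.mean(2.0 * np.abs(np.diff(isis)) / (isis[1:] + isis[:-1]))

regular = np.ones(100)                      # perfectly regular train
rng = np.random.default_rng(0)
poisson = rng.exponential(1.0, size=10000)  # Poisson-like intervals
print(cv2(regular), cv2(poisson))           # 0 for regular, near 1 for Poisson
```

    A regular train gives CV(2) = 0 and Poisson-like intervals give CV(2) near 1, bracketing the 0.3-1.0 range reported above.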

  17. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently, ignoring the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often incurs a high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based, and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.

  18. Multiplex Recurrence Networks

    NASA Astrophysics Data System (ADS)

    Eroglu, Deniz; Marwan, Norbert

    2017-04-01

    The complex nature of a variety of phenomena in physical, biological, or earth sciences is driven by a large number of degrees of freedom which are strongly interconnected. Although the evolution of such systems is described by multivariate time series (MTS), so far research has mostly focused on analyzing these components one by one. Recurrence-based analyses are powerful methods to understand the underlying dynamics of a dynamical system and have been used for many successful applications, including examples from earth science, economics, and chemical reactions. The backbone of these techniques is reconstructing the phase space of the system. However, increasing the dimension of a system requires increasing the length of the time series in order to get significant and reliable results. This requirement is a challenge in many disciplines, in particular in palaeoclimate, where it is not easy to create a phase space from measured MTS due to the limited number of available observations (samples). To overcome this problem, we suggest creating recurrence networks from each component of the system and combining them into a multiplex network structure, the multiplex recurrence network (MRN). We test the MRN by using prototypical mathematical models and demonstrate its use by studying high-dimensional palaeoclimate dynamics derived from pollen data from Bear Lake (Utah, US). By using the MRN, we can distinguish typical climate transition events, e.g., those between Marine Isotope Stages.
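    A minimal sketch of the layer-construction idea: one recurrence network per component, stacked into a multiplex structure. This assumes scalar components with no delay embedding and uses a simple edge-overlap statistic in place of the interlayer measures an actual MRN analysis would use.

```python
import numpy as np

def recurrence_network(x, eps):
    """Recurrence-network adjacency: recurrence matrix without self-loops."""
    A = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

def multiplex_layers(mts, eps):
    """One recurrence-network layer per component of a multivariate series."""
    return np.stack([recurrence_network(col, eps) for col in mts.T])

t = np.linspace(0, 4 * np.pi, 100)
mts = np.column_stack([np.sin(t), np.cos(t)])   # toy two-component MTS
layers = multiplex_layers(mts, eps=0.2)
# edge overlap between the two layers: a crude interlayer similarity
overlap = (layers[0] & layers[1]).sum() / max((layers[0] | layers[1]).sum(), 1)
print(layers.shape, overlap)
```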

  19. DEEP MOTIF DASHBOARD: VISUALIZING AND UNDERSTANDING GENOMIC SEQUENCES USING DEEP NEURAL NETWORKS.

    PubMed

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2017-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method finds a test sequence's saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering that recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that the CNN-RNN makes predictions by modeling both motifs and the dependencies among them.

  20. Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks

    PubMed Central

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2018-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method finds a test sequence’s saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering that recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that the CNN-RNN makes predictions by modeling both motifs and the dependencies among them. PMID:27896980
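    The first-order saliency idea, the gradient of the prediction score with respect to the input, can be illustrated on a tiny hand-differentiated model. The papers compute this through trained deep networks via automatic differentiation; here the weights are random and the gradient is written out by hand, with a finite-difference check confirming it.

```python
import numpy as np

# Tiny hand-differentiated model (hypothetical weights, not a trained TFBS
# network): saliency is the absolute gradient of the score w.r.t. the input.
rng = np.random.default_rng(1)
n_in, n_hid = 12, 8
W = rng.normal(size=(n_hid, n_in))
v = rng.normal(size=n_hid)

def score(x):
    return v @ np.tanh(W @ x)

def saliency(x):
    h = np.tanh(W @ x)
    return np.abs(W.T @ (v * (1.0 - h**2)))   # chain rule through tanh

x = rng.normal(size=n_in)
s = saliency(x)

# sanity check: central finite difference at the first coordinate
eps = 1e-6
e0 = np.zeros(n_in); e0[0] = eps
fd = (score(x + e0) - score(x - e0)) / (2 * eps)
print(s[0], abs(fd))
```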

  1. Generative Recurrent Networks for De Novo Drug Design.

    PubMed

    Gupta, Anvita; Müller, Alex T; Huisman, Berend J H; Fuchs, Jens A; Schneider, Petra; Schneider, Gisbert

    2018-01-01

    Generative artificial intelligence models present a fresh approach to chemogenomics and de novo drug design, as they provide researchers with the ability to narrow down their search of the chemical space and focus on regions of interest. We present a method for molecular de novo design that utilizes generative recurrent neural networks (RNN) containing long short-term memory (LSTM) cells. This computational model captured the syntax of molecular representation in terms of SMILES strings with close to perfect accuracy. The learned pattern probabilities can be used for de novo SMILES generation. This molecular design concept eliminates the need for virtual compound library enumeration. By employing transfer learning, we fine-tuned the RNN's predictions for specific molecular targets. This approach enables virtual compound design without requiring secondary or external activity prediction, which could introduce error or unwanted bias. The results obtained advocate this generative RNN-LSTM system for high-impact use cases, such as low-data drug discovery, fragment-based molecular design, and hit-to-lead optimization for diverse drug targets. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
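    As a much-simplified stand-in for the character-level LSTM, the sketch below learns first-order next-character probabilities from a toy corpus of SMILES-like strings (hypothetical examples, not the authors' data) and samples new strings from them. A real model conditions on the whole prefix rather than on a single previous character, which is what lets an LSTM capture valid SMILES syntax.

```python
import random
from collections import defaultdict, Counter

# First-order Markov stand-in for the character-level RNN: learn
# next-character counts ('^' marks the start, '\n' terminates a string).
corpus = ["CCO\n", "CCN\n", "CCCO\n", "CCCN\n", "CC(=O)O\n"]

counts = defaultdict(Counter)
for smi in corpus:
    for prev, nxt in zip("^" + smi, smi):
        counts[prev][nxt] += 1

def sample(rng, max_len=20):
    out, prev = [], "^"
    for _ in range(max_len):
        chars, freqs = zip(*counts[prev].items())
        prev = rng.choices(chars, weights=freqs)[0]
        if prev == "\n":
            break
        out.append(prev)
    return "".join(out)

rng = random.Random(0)
samples = [sample(rng) for _ in range(3)]
print(samples)
```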

  2. Behavior control in the sensorimotor loop with short-term synaptic dynamics induced by self-regulating neurons.

    PubMed

    Toutounji, Hazem; Pasemann, Frank

    2014-01-01

    The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems is mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One of the important standing issues is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent must also have the capacity to switch between these states on time scales comparable to those on which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allows them to surpass the level of activity imposed by their homeostatic operation conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior depends also on the underlying network structure, which is either engineered or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling the locomotion of a hexapod with 18 degrees of freedom, and obstacle avoidance of a wheel-driven robot.

  3. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    PubMed

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-05-04

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which this architecture and learning rule demonstrate the best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x to 42x is obtained versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.

  4. Behavior control in the sensorimotor loop with short-term synaptic dynamics induced by self-regulating neurons

    PubMed Central

    Toutounji, Hazem; Pasemann, Frank

    2014-01-01

    The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems is mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One of the important standing issues is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent must also have the capacity to switch between these states on time scales comparable to those on which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allows them to surpass the level of activity imposed by their homeostatic operation conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior depends also on the underlying network structure, which is either engineered or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling the locomotion of a hexapod with 18 degrees of freedom, and obstacle avoidance of a wheel-driven robot. PMID:24904403

  5. Breeding novel solutions in the brain: a model of Darwinian neurodynamics.

    PubMed

    Szilágyi, András; Zachar, István; Fedor, Anna; de Vladar, Harold P; Szathmáry, Eörs

    2016-01-01

    Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain - recurrent neural networks (acting as attractors), the action selection loop and implicit working memory - to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing the variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.

  6. Effects of anodal transcranial direct current stimulation over the leg motor area on lumbar spinal network excitability in healthy subjects

    PubMed Central

    Roche, N; Lackmy, A; Achache, V; Bussel, B; Katz, R

    2011-01-01

    Abstract In recent years, two techniques have become available for the non-invasive stimulation of human motor cortex: transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS). The effects of TMS and tDCS when applied over motor cortex should be considered with regard not only to cortical circuits but also to spinal motor circuits. The different modes of action and specificity of TMS and tDCS suggest that their effects on spinal network excitability may be different from that in the cortex. Until now, the effects of tDCS on lumbar spinal network excitability have never been studied. In this series of experiments, on healthy subjects, we studied the effects of anodal tDCS over the lower limb motor cortex on (i) reciprocal Ia inhibition projecting from the tibialis anterior muscle (TA) to the soleus (SOL), (ii) presynaptic inhibition of SOL Ia terminals, (iii) homonymous SOL recurrent inhibition, and (iv) SOL H-reflex recruitment curves. The results show that anodal tDCS decreases reciprocal Ia inhibition, increases recurrent inhibition and induces no modification of presynaptic inhibition of SOL Ia terminals and of SOL-H reflex recruitment curves. Our results indicate therefore that the effects of tDCS are the opposite of those previously described for TMS on spinal network excitability. They also indicate that anodal tDCS induces effects on spinal network excitability similar to those observed during co-contraction suggesting that anodal tDCS activates descending corticospinal projections mainly involved in co-contractions. PMID:21502292

  7. Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations

    PubMed Central

    van Albada, Sacha Jennifer; Helias, Moritz; Diesmann, Markus

    2015-01-01

    Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if second-order statistics are also to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases when this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited. PMID:26325661
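    One concrete instance of weight scaling preserving a statistic, assumed here purely for illustration and far simpler than the paper's conditions: scaling synaptic weights as 1/sqrt(K) keeps the variance of the summed synaptic input constant as the number of synapses K changes.

```python
import numpy as np

# Illustrative only: per-synapse weights scaled as 1/sqrt(K) keep the
# variance of the summed input from K random +1/-1 synapses near 1.
rng = np.random.default_rng(0)
variances = {}
for K in (100, 400, 1600):
    J = 1.0 / np.sqrt(K)                      # per-synapse weight
    total = J * rng.choice([-1.0, 1.0], size=(2000, K)).sum(axis=1)
    variances[K] = total.var()                # stays near 1 for every K
print(variances)
```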

  8. Selective functional interactions between excitatory and inhibitory cortical neurons and differential contribution to persistent activity of the slow oscillation.

    PubMed

    Tahvildari, Babak; Wölfel, Markus; Duque, Alvaro; McCormick, David A

    2012-08-29

    The neocortex depends upon a relative balance of recurrent excitation and inhibition for its operation. During spontaneous Up states, cortical pyramidal cells receive proportional barrages of excitatory and inhibitory synaptic potentials. Many of these synaptic potentials arise from the activity of nearby neurons, although the identity of these cells is relatively unknown, especially for those underlying the generation of inhibitory synaptic events. To address these fundamental questions, we developed an in vitro submerged slice preparation of the mouse entorhinal cortex that generates robust and regular spontaneous recurrent network activity in the form of the slow oscillation. By performing whole-cell recordings from multiple cell types identified with green fluorescent protein expression and electrophysiological and/or morphological properties, we show that distinct functional subpopulations of neurons exist in the entorhinal cortex, with large variations in contribution to the generation of balanced excitation and inhibition during the slow oscillation. The most active neurons during the slow oscillation are excitatory pyramidal and inhibitory fast spiking interneurons, receiving robust barrages of both excitatory and inhibitory synaptic potentials. Weak action potential activity was observed in stellate excitatory neurons and somatostatin-containing interneurons. In contrast, interneurons containing neuropeptide Y, vasoactive intestinal peptide, or the 5-hydroxytryptamine (serotonin) 3a receptor, were silent. Our data demonstrate remarkable functional specificity in the interactions between different excitatory and inhibitory cortical neuronal subtypes, and suggest that it is the large recurrent interaction between pyramidal neurons and fast spiking interneurons that is responsible for the generation of persistent activity that characterizes the depolarized states of the cortex.

  9. Bankruptcy prediction based on financial ratios using Jordan Recurrent Neural Networks: a case study in Polish companies

    NASA Astrophysics Data System (ADS)

    Hardinata, Lingga; Warsito, Budi; Suparti

    2018-05-01

    The complexity of bankruptcy makes accurate prediction models difficult to achieve. Various prediction models have been developed to improve the accuracy of bankruptcy prediction. Machine learning has been widely used for prediction because of its adaptive capabilities. Artificial Neural Networks (ANN) are a machine learning approach that has proved able to perform inference tasks such as prediction and classification, especially in data mining. In this paper, we propose the implementation of Jordan Recurrent Neural Networks (JRNN) to classify and predict corporate bankruptcy based on financial ratios. The feedback connections in a JRNN enable the network to retain important information, allowing it to work more effectively. The result analysis showed that the JRNN works very well in bankruptcy prediction, with an average success rate of 81.3785%.
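    A minimal forward pass of a Jordan network, with untrained random weights and hypothetical dimensions standing in for the paper's financial-ratio inputs. The defining feature, distinguishing it from an Elman RNN, is that the previous output, rather than the previous hidden state, is fed back into the hidden layer.

```python
import numpy as np

# Minimal Jordan network forward pass (sketch: untrained random weights,
# 3 inputs standing in for financial ratios).
def jordan_forward(X, W_in, W_fb, W_out, b_h, b_o):
    y = np.zeros(W_out.shape[0])          # previous output, fed back
    outputs = []
    for x in X:
        h = np.tanh(W_in @ x + W_fb @ y + b_h)        # hidden layer
        y = 1.0 / (1.0 + np.exp(-(W_out @ h + b_o)))  # sigmoid output
        outputs.append(y)
    return np.array(outputs)

rng = np.random.default_rng(0)
T, n_in, n_hid, n_out = 5, 3, 4, 1
X = rng.normal(size=(T, n_in))
out = jordan_forward(X,
                     rng.normal(size=(n_hid, n_in)),   # input weights
                     rng.normal(size=(n_hid, n_out)),  # output-to-hidden feedback
                     rng.normal(size=(n_out, n_hid)),  # hidden-to-output weights
                     np.zeros(n_hid), np.zeros(n_out))
print(out.shape)   # one score in (0, 1) per time step
```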

  10. Studying the Relationship between High-Latitude Geomagnetic Activity and Parameters of Interplanetary Magnetic Clouds with the Use of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Barkhatov, N. A.; Revunov, S. E.; Vorobjev, V. G.; Yagodkina, O. I.

    2018-03-01

    The cause-and-effect relations between the dynamics of high-latitude geomagnetic activity (in terms of the AL index) and the type of the magnetic cloud of the solar wind are studied with the use of artificial neural networks. A recurrent neural network model has been created based on the search for the optimal physically coupled input and output parameters characterizing the action of a plasma flux belonging to a certain magnetic cloud type on the magnetosphere. It has been shown that, with IMF components as input parameters of neural networks and with allowance for a 90-min prehistory, it is possible to retrieve the AL sequence with an accuracy of up to 80%. The successful retrieval of the AL dynamics from the data used indicates the presence of a close nonlinear connection between the AL index and cloud parameters. The created neural network models can be applied with high efficiency to retrieve the AL index, both in periods of isolated magnetospheric substorms and in periods of interaction between the Earth's magnetosphere and magnetic clouds of different types. The developed model of AL index retrieval can be used to detect magnetic clouds.

  11. Oscillation, Conduction Delays, and Learning Cooperate to Establish Neural Competition in Recurrent Networks

    PubMed Central

    Kato, Hideyuki; Ikeguchi, Tohru

    2016-01-01

    Specific memory might be stored in a subnetwork consisting of a small population of neurons. To select neurons involved in memory formation, neural competition might be essential. In this paper, we show that excitable neurons are competitive and organize into two assemblies in a recurrent network with spike timing-dependent synaptic plasticity (STDP) and axonal conduction delays. Neural competition is established by the cooperation of spontaneously induced neural oscillation, axonal conduction delays, and STDP. We also suggest that the competition mechanism in this paper is one of the basic functions required to organize memory-storing subnetworks into fine-scale cortical networks. PMID:26840529

  12. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    NASA Astrophysics Data System (ADS)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology involves an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed from past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters for use in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis function neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, with an accuracy of 75% and a positive predictive value of 78%, in the context of northern Pakistan.
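    Ranking features by information gain, as in the selection step above, can be illustrated generically. This is the standard entropy-based definition for a discretized feature, not necessarily the authors' exact implementation:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(Y) in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - sum_v P(X=v) * H(Y | X=v) for a discrete feature."""
    total = entropy(labels)
    cond = 0.0
    for v in np.unique(feature):
        mask = feature == v
        cond += mask.mean() * entropy(labels[mask])
    return total - cond

# toy example: a feature that perfectly separates the labels has IG = H(Y) = 1 bit
feature = np.array([0, 0, 1, 1])
labels = np.array([0, 0, 1, 1])
print(information_gain(feature, labels))  # 1.0
```

    Continuous seismic parameters would first be discretized (e.g. by binning) before this computation; features with near-zero gain are the ones dropped.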

  13. General visual robot controller networks via artificial evolution

    NASA Astrophysics Data System (ADS)

    Cliff, David; Harvey, Inman; Husbands, Philip

    1993-08-01

    We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.

  14. RM-SORN: a reward-modulated self-organizing recurrent neural network.

    PubMed

    Aswolinskiy, Witali; Pipa, Gordon

    2015-01-01

    Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of Intrinsic Plasticity (IP) and reward-modulated Spike-Timing-Dependent Plasticity (STDP). IP enables the network to explore possible output sequences and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules, and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning and whether reward-modulated self-organization can explain the amazing capabilities of the brain.
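    Reward-modulation of STDP, the core mechanism here, is often implemented by accumulating an STDP-shaped eligibility trace and letting a scalar reward gate the actual weight change. A generic single-synapse sketch with illustrative time constants (not the RM-SORN update itself):

```python
import numpy as np

def rm_stdp_update(w, pre_spikes, post_spikes, reward,
                   a_plus=0.01, a_minus=0.012, tau=20.0, lr=1.0):
    """Reward-modulated STDP on one synapse: pair every pre/post spike,
    accumulate an exponentially weighted eligibility trace, then let the
    reward signal gate the weight change."""
    elig = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:                   # pre before post -> potentiation
                elig += a_plus * np.exp(-dt / tau)
            elif dt < 0:                 # post before pre -> depression
                elig -= a_minus * np.exp(dt / tau)
    return w + lr * reward * elig        # reward > 0 reinforces the pattern

w0 = 0.5
w_rewarded = rm_stdp_update(w0, [10.0], [15.0], reward=+1.0)
w_punished = rm_stdp_update(w0, [10.0], [15.0], reward=-1.0)
print(w_rewarded > w0, w_punished < w0)  # True True
```

    The same causal pre-then-post pairing thus strengthens or weakens the synapse depending only on the sign of the reward, which is what lets reward steer which output sequences get reinforced.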

  15. The modeling and simulation of visuospatial working memory

    PubMed Central

    Liang, Lina; Zhang, Zhikang

    2010-01-01

    Camperi and Wang (Comput Neurosci 5:383–405, 1998) presented a network model for working memory that combines intrinsic cellular bistability with the recurrent network architecture of the neocortex, while Fall and Rinzel (Comput Neurosci 20:97–107, 2006) replaced this intrinsic bistability with a biological mechanism, the Ca2+ release subsystem. In this study, we aim to further extend this work. We integrate the traditional firing-rate network with Ca2+ subsystem-induced bistability, amend the synaptic weights, and posit that the Ca2+ concentration increases only the efficacy of synaptic input and does not affect the external input for the transient cue. We found that our network model maintained persistent activity in response to a brief transient stimulus, like the two previous models, and that working memory performance was resistant to noise and distraction stimuli when the Ca2+ subsystem was tuned to be bistable. PMID:22132045

  16. [Advances in Acupuncture Mechanism Research on the Changes of Synaptic Plasticity: "Pain Memory" for Chronic Pain].

    PubMed

    Yang, Yi-Ling; Huang, Jian-Peng; Jiang, Li; Liu, Jian-Hua

    2017-12-25

    Previous studies have shown that there are many common structures between the neural networks of pain and memory, and that the main structures in the pain network are also part of the memory network. Chronic pain is characterized by recurrent attacks and is associated with persistent ectopic impulses, which cause activity-dependent changes in synaptic structure and function. These changes may induce long-term potentiation of synaptic transmission, and ultimately lead to changes in the central nervous system that produce "pain memory". Acupuncture is an effective method for treating chronic pain. It has been shown that acupuncture can affect the spinal cord dorsal horn, hippocampus, cingulate gyrus and other related areas. The possible mechanisms of action include opioid-induced analgesia, activation of glial cells, and the expression of brain-derived neurotrophic factor (BDNF). In this study, we systematically review the brain structures involved, the stages of "pain memory", and the mechanisms by which acupuncture acts on synaptic plasticity in chronic pain.

  17. Figure-ground segregation in a recurrent network architecture.

    PubMed

    Roelfsema, Pieter R; Lamme, Victor A F; Spekreijse, Henk; Bosch, Holger

    2002-05-15

    Here we propose a model of how the visual brain segregates textured scenes into figures and background. During texture segregation, locations where the properties of texture elements change abruptly are assigned to boundaries, whereas image regions that are relatively homogeneous are grouped together. Boundary detection and grouping of image regions require different connection schemes, which are accommodated in a single network architecture by implementing them in different layers. As a result, all units carry signals related to boundary detection as well as grouping of image regions, in accordance with cortical physiology. Boundaries yield an early enhancement of network responses, but at a later point, an entire figural region is grouped together, because units that respond to it are labeled with enhanced activity. The model predicts which image regions are preferentially perceived as figure or as background and reproduces the spatio-temporal profile of neuronal activity in the visual cortex during texture segregation in intact animals, as well as in animals with cortical lesions.

  18. Synaptic and Network Mechanisms of Sparse and Reliable Visual Cortical Activity during Nonclassical Receptive Field Stimulation

    PubMed Central

    Haider, Bilal; Krause, Matthew R.; Duque, Alvaro; Yu, Yuguo; Touryan, Jonathan; Mazer, James A.; McCormick, David A.

    2011-01-01

    SUMMARY During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RSC) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RSC neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses. PMID:20152117

  19. Systematic review and network meta-analysis comparing clinical outcomes and effectiveness of surgical treatments for haemorrhoids.

    PubMed

    Simillis, C; Thoukididou, S N; Slesser, A A P; Rasheed, S; Tan, E; Tekkis, P P

    2015-12-01

    The aim was to compare the clinical outcomes and effectiveness of surgical treatments for haemorrhoids. Randomized clinical trials were identified by means of a systematic review. A Bayesian network meta-analysis was performed using the Markov chain Monte Carlo method in WinBUGS. Ninety-eight trials were included with 7827 participants and 11 surgical treatments for grade III and IV haemorrhoids. Open, closed and radiofrequency haemorrhoidectomies resulted in significantly more postoperative complications than transanal haemorrhoidal dearterialization (THD), LigaSure™ and Harmonic® haemorrhoidectomies. THD had significantly less postoperative bleeding than open and stapled procedures, and resulted in significantly fewer emergency reoperations than open, closed, stapled and LigaSure™ haemorrhoidectomies. Open and closed haemorrhoidectomies resulted in more pain on postoperative day 1 than stapled, THD, LigaSure™ and Harmonic® procedures. After stapled, LigaSure™ and Harmonic® haemorrhoidectomies patients resumed normal daily activities earlier than after open and closed procedures. THD provided the earliest time to first bowel movement. The stapled and THD groups had significantly higher haemorrhoid recurrence rates than the open, closed and LigaSure™ groups. Recurrence of haemorrhoidal symptoms was more common after stapled haemorrhoidectomy than after open and LigaSure™ operations. No significant difference was identified between treatments for anal stenosis, incontinence and perianal skin tags. Open and closed haemorrhoidectomies resulted in more postoperative complications and slower recovery, but fewer haemorrhoid recurrences. THD and stapled haemorrhoidectomies were associated with decreased postoperative pain and faster recovery, but higher recurrence rates. The advantages and disadvantages of each surgical treatment should be discussed with the patient before surgery to allow an informed decision to be made. 
© 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.

  20. Synconset Waves and Chains: Spiking Onsets in Synchronous Populations Predict and Are Predicted by Network Structure

    PubMed Central

    Raghavan, Mohan; Amrutur, Bharadwaj; Narayanan, Rishikesh; Sikdar, Sujit Kumar

    2013-01-01

    Synfire waves are propagating spike packets in synfire chains, which are feedforward chains embedded in random networks. Although synfire waves have proved to be an effective quantification of network activity with clear relations to network structure, their utility is largely limited to feedforward networks with low background activity. To overcome these shortcomings, we describe a novel generalisation of synfire waves, and define a 'synconset wave' as a cascade of first spikes within a synchronisation event. Synconset waves occur in 'synconset chains', which are feedforward chains embedded in possibly heavily recurrent networks with heavy background activity. We probed the utility of synconset waves using simulations of single-compartment neuron network models with biophysically realistic conductances, and demonstrated that the spread of synconset waves directly follows from the network connectivity matrix and is modulated by top-down inputs and the resultant oscillations. Such synconset profiles lend intuitive insights into network organisation in terms of connection probabilities between various network regions rather than an adjacency matrix. To test this intuition, we developed a Bayesian likelihood function that quantifies the probability that an observed synfire wave was caused by a given network. Further, we demonstrate its utility in the inverse problem of identifying the network that caused a given synfire wave. This method was effective even in highly subsampled networks, where only a small subset of neurons was accessible, thus showing its utility for the experimental estimation of connectomes in real neuronal networks. Together, we propose synconset chains/waves as an effective framework for understanding the impact of network structure on function, and as a step towards developing physiology-driven network identification methods. Finally, as synconset chains extend the utilities of synfire chains to arbitrary networks, we suggest applications of our framework to several aspects of network physiology, including cell assemblies, population codes, and oscillatory synchrony. PMID:24116018

  1. Predictors of incident and recurrent participation in the sale or delivery of drugs for profit amongst young methamphetamine users in Chiang Mai Province, Thailand, 2005-2006.

    PubMed

    Latimore, Amanda D; Rudolph, Abby; German, Danielle; Sherman, Susan G; Srirojn, Bangorn; Aramrattana, Apinun; Celentano, David D

    2011-07-01

    Despite Thailand's war on drugs, methamphetamine ("yaba" in Thai) use and the drug economy both thrive. This analysis identifies predictors of incident and recurrent involvement in the sale or delivery of drugs for profit amongst young Thai yaba users. Between April 2005 and June 2006, 983 yaba users, ages 18-25, were enrolled in a randomized behavioural intervention in Chiang Mai Province (415 index and 568 of their drug network members). Questionnaires administered at baseline, 3-, 6-, 9-, and 12-month follow-up visits assessed socio-demographic factors, current and prior drug use, social network characteristics, sexual risk behaviours and drug use norms. Exposures were lagged by three months (prior visit). Outcomes included incident and recurrent drug economy involvement. Generalized linear mixed models were fit using GLIMMIX (SASv9.1). Incident drug economy involvement was predicted by yaba use frequency (adjusted odds ratio [AOR]: 1.05; 95% confidence interval [CI]: 1.01, 1.10), recent incarceration (AOR: 2.37; 95% CI: 1.07, 5.25) and the proportion of yaba-using networks who quit recently (AOR: .34; 95% CI: .15, .78). Recurrent drug economy involvement was predicted by age (AOR: 0.81; 95% CI: 0.68, 0.96), frequency of yaba use (AOR: 1.06; 95% CI: 1.02, 1.09), drug economy involvement at the previous visit (AOR: 2.61; CI: 1.59, 4.28), incarceration in the prior three months (AOR: 2.29; 95% CI: 1.07, 4.86), and the proportion of yaba-users in his/her network who quit recently (AOR: .38; 95% CI: .20, .71). Individual drug use, drug use in social networks and recent incarceration were predictors of incident and recurrent involvement in the drug economy. These results suggest that interrupting drug use and/or minimizing the influence of drug-using networks may help prevent further involvement in the drug economy. 
The emergence of recent incarceration as a predictor for both models highlights the need for more appropriate drug rehabilitation programmes and demonstrates that continued criminalization of drug users may fuel Thailand's yaba epidemic. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    NASA Astrophysics Data System (ADS)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings depend heavily on manual feature extraction and feature selection. To overcome this limitation, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, the DRNN is constructed by stacking recurrent hidden layers to automatically extract features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified on experimental rolling bearing data, and the results confirm that it is more effective than traditional intelligent fault diagnosis methods.
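    The first step above, turning raw vibration segments into frequency spectrum sequences, can be sketched with a real FFT. The segment length and the synthetic signal are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def spectrum_sequence(signal, seg_len=1024):
    """Split a vibration signal into segments and map each to its
    magnitude spectrum, roughly halving the input size per segment
    (rfft of N real samples -> N/2 + 1 bins)."""
    n_seg = len(signal) // seg_len
    segs = signal[:n_seg * seg_len].reshape(n_seg, seg_len)
    return np.abs(np.fft.rfft(segs, axis=1))

# toy signal: a 37-cycles-per-segment tone buried in noise
rng = np.random.default_rng(2)
vib = np.sin(2 * np.pi * 37 * np.arange(8192) / 1024) + 0.1 * rng.normal(size=8192)
spec = spectrum_sequence(vib)
print(spec.shape)        # (8, 513): 8 segments, 513 frequency bins each
print(spec[0].argmax())  # strongest bin sits at the 37-cycle component
```

    The resulting sequence of spectra is what the stacked recurrent layers would consume; a bearing defect shows up as energy at characteristic fault frequencies in these bins.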

  3. Levelling Profiles and a GPS Network to Monitor the Active Folding and Faulting Deformation in the Campo de Dalias (Betic Cordillera, Southeastern Spain)

    PubMed Central

    Marín-Lechado, Carlos; Galindo-Zaldívar, Jesús; Gil, Antonio José; Borque, María Jesús; de Lacy, María Clara; Pedrera, Antonio; López-Garrido, Angel Carlos; Alfaro, Pedro; García-Tortosa, Francisco; Ramos, Maria Isabel; Rodríguez-Caderot, Gracia; Rodríguez-Fernández, José; Ruiz-Constán, Ana; de Galdeano-Equiza, Carlos Sanz

    2010-01-01

    The Campo de Dalias is an area with significant seismicity associated with the active tectonic deformation of the southern boundary of the Betic Cordillera. A non-permanent GPS network was installed to monitor, for the first time, the fault- and fold-related activity. In addition, two high-precision levelling profiles were measured twice over a one-year period across the Balanegra Fault, one of the most active faults recognized in the area. The absence of significant movement of the main fault surface suggests seismogenic behaviour. The possible recurrence interval may be between 100 and 300 y. Repeated GPS and high-precision levelling monitoring of the fault surface over a long period may help determine future fault behaviour with regard to the existence (or not) of a creep component, the accumulation of elastic deformation before faulting, and the implications of the fold-fault relationship. PMID:22319309

  4. Recurrent myocardial infarction: Mechanisms of free-floating adaptation and autonomic derangement in networked cardiac neural control.

    PubMed

    Kember, Guy; Ardell, Jeffrey L; Shivkumar, Kalyanam; Armour, J Andrew

    2017-01-01

    The cardiac nervous system continuously controls cardiac function whether or not pathology is present. While myocardial infarction typically has a major and catastrophic impact, population studies have shown that longer-term risk for recurrent myocardial infarction and the related potential for sudden cardiac death depends mainly upon standard atherosclerotic variables and autonomic nervous system maladaptations. Investigative neurocardiology has demonstrated that autonomic control of cardiac function includes local circuit neurons for networked control within the peripheral nervous system. The structural and adaptive characteristics of such networked interactions define the dynamics, and the new normal for cardiac control, that result in the aftermath of recurrent myocardial infarction and/or unstable angina, which may or may not precipitate autonomic derangement. These features are explored here via a mathematical model of cardiac regulation. A main observation is that the control environment during pathology is an extrapolation to a setting outside prior experience. Although global bounds guarantee stability, the resulting closed-loop dynamics exhibited while the network adapts during pathology are aptly described as 'free-floating', in order to emphasize their dependence upon details of the network structure. The totality of the results provides mechanistic reasoning that validates the clinical practice of reducing sympathetic efferent neuronal tone while aggressively targeting autonomic derangement in the treatment of ischemic heart disease.

  5. Nonlinear dynamic systems identification using recurrent interval type-2 TSK fuzzy neural network - A novel structure.

    PubMed

    El-Nagar, Ahmad M

    2018-01-01

    In this study, a novel structure of a recurrent interval type-2 Takagi-Sugeno-Kang (TSK) fuzzy neural network (FNN) is introduced for the identification of nonlinear dynamic and time-varying systems. It combines type-2 fuzzy sets (T2FSs) with a recurrent FNN to handle data uncertainties. The fuzzy firing strengths in the proposed structure are fed back to the network input as internal variables. Interval type-2 fuzzy sets (IT2FSs) are used to describe the antecedent part of each rule, while the consequent part is TSK-type: a linear function of the internal variables and the external inputs with interval weights. All the type-2 fuzzy rules for the proposed RIT2TSKFNN are learned online through structure and parameter learning, both performed using type-2 fuzzy clustering. The antecedent and consequent parameters of the proposed RIT2TSKFNN are updated based on a Lyapunov function to guarantee network stability. The obtained results indicate that our proposed network achieves a small root mean square error (RMSE) and a small integral of square error (ISE) with a small number of rules and a short computation time compared with other type-2 FNNs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Long-term memory stabilized by noise-induced rehearsal.

    PubMed

    Wei, Yi; Koulakov, Alexei A

    2014-11-19

    Cortical networks can maintain memories for decades despite the short lifetime of synaptic strengths. Can a neural network store long-lasting memories in unstable synapses? Here, we study the effects of ongoing spike-timing-dependent plasticity (STDP) on the stability of memory patterns stored in synapses of an attractor neural network. We show that certain classes of STDP rules can stabilize all stored memory patterns despite a short lifetime of synapses. In our model, unstructured neural noise, after passing through the recurrent network connections, carries the imprint of all memory patterns in temporal correlations. STDP, combined with these correlations, leads to reinforcement of all stored patterns, even those that are never explicitly visited. Our findings may provide the functional reason for irregular spiking displayed by cortical neurons and justify models of system memory consolidation. Therefore, we propose that irregular neural activity is the feature that helps cortical networks maintain stable connections. Copyright © 2014 the authors 0270-6474/14/3415804-12$15.00/0.

  7. Towards representation of a perceptual color manifold using associative memory for color constancy.

    PubMed

    Seow, Ming-Jung; Asari, Vijayan K

    2009-01-01

    In this paper, we propose the concept of a manifold of color perception based on the empirical observation that the center-surround properties of images in a perceptually similar environment define a manifold in a high-dimensional space. Such a manifold representation can be learned using a novel recurrent neural network based learning algorithm. Unlike the conventional recurrent neural network model, in which memory is stored at attractive fixed points at discrete locations in the state space, the dynamics of the proposed learning algorithm represent memory as a nonlinear line of attraction. The region of convergence around the nonlinear line is defined by the statistical characteristics of the training data. The learned manifold can then be used as a basis for color correction of images whose color perception differs from the learned one. Experimental results show that the proposed recurrent neural network learning algorithm successfully color-balances the lighting variations in images captured in different environments.

  8. Indirect adaptive fuzzy wavelet neural network with self- recurrent consequent part for AC servo system.

    PubMed

    Hou, Runmin; Wang, Li; Gao, Qiang; Hou, Yuanglong; Wang, Chao

    2017-09-01

    This paper proposes a novel indirect adaptive fuzzy wavelet neural network (IAFWNN) to handle the nonlinearity, wide load variations, time variation and uncertain disturbances of an AC servo system. In the proposed approach, the self-recurrent wavelet neural network (SRWNN) is employed to construct an adaptive self-recurrent consequent part for each fuzzy rule of the TSK fuzzy model. For the IAFWNN controller, the online learning algorithm is based on the back propagation (BP) algorithm, and an improved particle swarm optimization (IPSO) is used to adapt the learning rate. An adaptive SRWNN identifier supplies real-time gradient information to the adaptive fuzzy wavelet neural controller, effectively overcoming the impact of parameter variations, load disturbances and other uncertainties, and yielding good dynamic performance. The asymptotic stability of the system is guaranteed by the Lyapunov method. The results of the simulation and the prototype test prove that the proposed approach is effective and suitable. Copyright © 2017. Published by Elsevier Ltd.

  9. Deep RNNs for video denoising

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Song, Li; Yang, Xiaokang

    2016-09-01

    Video denoising can be described as the problem of mapping a specific number of noisy frames to a clean one. We propose a deep architecture based on Recurrent Neural Networks (RNNs) for video denoising. The model learns a patch-based end-to-end mapping between clean and noisy video sequences: it takes corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as deep Recurrent Neural Networks (deep RNNs or DRNNs), stacks RNN layers, where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information through the temporal domain and benefits video denoising; (ii) the deep architecture has enough capacity to express the mapping between corrupted input videos and clean output videos; and (iii) the model generalizes, learning different mappings from videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.
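    The stacking rule described here, where each RNN layer receives the hidden state of the layer below as its input, can be sketched for a single time step. Layer sizes, weights, and the patch input are illustrative assumptions, not the trained denoiser:

```python
import numpy as np

def deep_rnn_step(x, hs, layers):
    """One time step of a stacked (deep) RNN: layer l takes the hidden
    state emitted by layer l-1 at the same time step as its input."""
    inp = x
    new_hs = []
    for (W_in, W_rec, b), h in zip(layers, hs):
        h = np.tanh(W_in @ inp + W_rec @ h + b)
        new_hs.append(h)
        inp = h                          # feed this layer's state upward
    return new_hs

rng = np.random.default_rng(3)
sizes = [16, 32, 32]                     # noisy-patch input -> two recurrent layers
layers, hs, prev = [], [], sizes[0]
for n in sizes[1:]:
    layers.append((rng.normal(scale=0.1, size=(n, prev)),
                   rng.normal(scale=0.1, size=(n, n)),
                   np.zeros(n)))
    hs.append(np.zeros(n))
    prev = n
for x in rng.normal(size=(5, 16)):       # 5 noisy frames of a 16-pixel patch
    hs = deep_rnn_step(x, hs, layers)
print([h.shape for h in hs])             # [(32,), (32,)]
```

    A denoiser would add a linear read-out from the top layer back to patch space and train end-to-end on clean/noisy pairs; the sketch isolates just the layer-to-layer wiring.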

  10. YoTube: Searching Action Proposal Via Recurrent and Static Regression Networks

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyuan; Vial, Romain; Lu, Shijian; Peng, Xi; Fu, Huazhu; Tian, Yonghong; Cao, Xianbin

    2018-06-01

    In this paper, we present YoTube, a novel network fusion framework for searching action proposals in untrimmed videos, where each action proposal corresponds to a spatio-temporal video tube that potentially locates one human action. Our method consists of a recurrent YoTube detector and a static YoTube detector: the recurrent YoTube exploits the regression capability of RNNs for candidate bounding box prediction using learnt temporal dynamics, while the static YoTube produces the bounding boxes using rich appearance cues in a single frame. Both networks are trained using RGB and optical flow in order to fully exploit the rich appearance, motion and temporal context, and their outputs are fused to produce accurate and robust proposal boxes. Action proposals are finally constructed by linking these boxes using dynamic programming with a novel trimming method to handle untrimmed videos effectively and efficiently. Extensive experiments on the challenging UCF-101 and UCF-Sports datasets show that our proposed technique obtains superior performance compared with the state of the art.

  11. Deforestation and Industrial Forest Patterns in Colombia: a Case Study

    NASA Astrophysics Data System (ADS)

    Huo, L. Z.; Boschetti, L.; Sparks, A. M.; Clerici, N.

    2017-12-01

    The recent peace agreement between the government and the Revolutionary Armed Forces of Colombia (FARC) offers new opportunities for peaceful and sustainable development, but at the same time requires a timely effort to protect biological resources and ecosystem services (Clerici et al., 2016). In this context, we use the 2001-2017 Landsat data record to prototype a methodology to establish a baseline of deforestation, afforestation and industrial forest practices (i.e. establishment and harvest of forest plantations), and to monitor future changes. Two study areas, which have seen considerable deforestation in recent years, were selected: one in the south of the country, at the edge of the Amazon Forest (WRS path 008 row 059), and one in the center, in mixed forest (WRS path 008 row 055). The time series of all available cloud-free Landsat 5, Landsat 7 and Landsat 8 data was classified into a sequence of binary forest/non-forest maps using a deep learning model, successfully used in the natural language processing field, trained to detect forest transitions. Recurrent Neural Networks (RNNs) are a class of artificial neural networks that extend the conventional neural network with loops in the connections (Graves et al., 2013). Unlike a feed-forward neural network, an RNN is able to process sequential inputs by maintaining a recurrent hidden state whose activation at each step depends on that of the previous steps. In this manner, the RNN provides a good framework for dynamically modeling time series data, and has been successfully applied to natural language processing at Google (Sutskever et al., 2014). The sequence of forest cover state maps was subsequently post-processed to differentiate between deforestation (i.e. transition from forest to non-forest land use) and industrial forest harvest (i.e. timber harvest followed by regrowth) by integrating the detection of temporal and spatial patterns.
    References: Clerici, N., et al. (2016). Colombia: Dealing in conservation. Science, 354(6309), 190-190. Sutskever, I., et al. (2014). Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 3104-3112. Graves, A., et al. (2013). Speech recognition with deep recurrent neural networks. In Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 6645-6649.

  12. Potential microRNA-mediated oncogenic intercellular communication revealed by pan-cancer analysis

    NASA Astrophysics Data System (ADS)

    Li, Yue; Zhang, Zhaolei

    2014-11-01

    Carcinogenesis consists of oncogenesis and metastasis, and intriguingly microRNAs (miRNAs) are involved in both processes. Although aberrant miRNA activities are prevalent in diverse tumor types, the exact mechanisms by which they regulate cancerous processes are not always clear. To this end, we performed a large-scale pan-cancer analysis via a novel probabilistic approach to infer recurrent miRNA-target interactions implicated in 12 cancer types, using data from The Cancer Genome Atlas. We discovered ~20,000 recurrent miRNA regulations, which are enriched for cancer-related miRNAs/genes. Notably, the miRNA-200 family (miR-200/141/429), known to be involved in metastasis, is among the most prominent miRNA regulators. Importantly, the recurrent miRNA regulatory network is enriched not only for cancer pathways but also for extracellular matrix (ECM) organization and ECM-receptor interactions. The results suggest an intriguing cancer mechanism involving miRNA-mediated cell-to-cell communication, possibly through the delivery of tumorigenic miRNA messengers to adjacent cells via exosomes. Finally, survival analysis revealed 414 recurrent prognostic associations, in which both the gene and the miRNA involved in each interaction conferred significant prognostic power in one or more cancer types. Together, our comprehensive pan-cancer analysis provided biological insights into metastasis and highlighted the clinical relevance of the proposed recurrent miRNA-gene associations.

  13. Stimulus-specific adaptation in a recurrent network model of primary auditory cortex

    PubMed Central

    2017-01-01

    Stimulus-specific adaptation (SSA) occurs when neurons decrease their responses to frequently-presented (standard) stimuli but not, or not as much, to other, rare (deviant) stimuli. SSA is present in all mammalian species in which it has been tested as well as in birds. SSA confers short-term memory to neuronal responses, and may lie upstream of the generation of mismatch negativity (MMN), an important human event-related potential. Previously published models of SSA mostly rely on synaptic depression of the feedforward, thalamocortical input. Here we study SSA in a recurrent neural network model of primary auditory cortex. When the recurrent, intracortical synapses display synaptic depression, the network generates population spikes (PSs). SSA occurs in this network when deviants elicit a PS but standards do not, and we demarcate the regions in parameter space that allow SSA. While SSA based on PSs does not require feedforward depression, we identify feedforward depression as a mechanism for expanding the range of parameters that support SSA. We provide predictions for experiments that could help differentiate between SSA due to synaptic depression of feedforward connections and SSA due to synaptic depression of recurrent connections. Similar to experimental data, the magnitude of SSA in the model depends on the frequency difference between deviant and standard, probability of the deviant, inter-stimulus interval and input amplitude. In contrast to models based on feedforward depression, our model shows true deviance sensitivity as found in experiments. PMID:28288158

  14. Computational Account of Spontaneous Activity as a Signature of Predictive Coding

    PubMed Central

    Koren, Veronika

    2017-01-01

    Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggests that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent states. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, and therefore interferes minimally with network function. PMID:28114353
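
The framework the abstract builds on (a network derived from minimizing coding error plus a spike cost) can be sketched with a greedy scalar coder: a neuron fires only when its spike reduces the combined error-plus-cost objective. All parameters below (decoding weights, the cost term mu, the input) are hypothetical, and the simplification to a single readout with one spike per time step is mine.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, lam = 4000, 0.005, 1.0          # steps, time step, readout leak
t = np.arange(T) * dt
x = np.sin(2 * np.pi * 0.2 * t) + 1.5  # positive target signal

N = 20
D = rng.uniform(0.05, 0.15, N)         # decoding weight of each neuron
mu = 1e-4                              # spike cost: penalizes firing
xhat = 0.0
spikes = np.zeros(N)
err = np.zeros(T)

for k in range(T):
    xhat += dt * (-lam * xhat)         # leaky readout
    V = D * (x[k] - xhat)              # "membrane" = projected coding error
    thr = D ** 2 / 2 + mu              # threshold from the greedy error rule
    i = np.argmax(V - thr)
    if V[i] > thr[i]:                  # spike only if it reduces the objective
        xhat += D[i]
        spikes[i] += 1
    err[k] = x[k] - xhat

mean_abs_err = np.abs(err[T // 2:]).mean()
```

After an initial transient, the readout tracks the signal with an error on the order of the decoding weights; raising mu trades accuracy for fewer spikes, which is the cost/error balance the abstract describes.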

  15. Short-Term Memory in Orthogonal Neural Networks

    NASA Astrophysics Data System (ADS)

    White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim

    2004-04-01

    We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
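
The shift-register case of this result is easy to verify directly: a cyclic permutation matrix is orthogonal, and a linear network driven through one unit holds the last n inputs exactly in its instantaneous state. This is a minimal sketch of that construction, not the paper's capacity calculation.

```python
import numpy as np

n = 8
W = np.roll(np.eye(n), 1, axis=0)    # cyclic shift: an orthogonal matrix
v = np.zeros(n); v[0] = 1.0          # scalar input enters at unit 0

rng = np.random.default_rng(0)
s = rng.standard_normal(n)           # input sequence, one value per step
x = np.zeros(n)
for t in range(n):
    x = W @ x + v * s[t]             # linear discrete-time recurrent dynamics

recalled = x[::-1]                   # the state now holds the whole sequence
```

After n steps the full sequence is retrievable from the state, illustrating a temporal memory capacity that scales with system size for orthogonal connectivity.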

  16. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
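
For intuition about "neural activity as MCMC sampling", here is the standard reversible special case: stochastic binary units whose update rule equals the conditional of a Boltzmann distribution, so that long-run firing statistics match the target distribution. The paper's contribution is a non-reversible chain suited to spiking dynamics; this Gibbs-sampling sketch with a hypothetical 3-unit network only illustrates the sampling interpretation itself.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
n = 3
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.8],
              [-0.5, 0.8, 0.0]])      # symmetric coupling, zero diagonal
b = np.array([-0.2, 0.1, 0.3])        # biases

# Exact marginals by enumerating all 2^n states of p(z) ∝ exp(z·b + z·W·z/2)
states = np.array([[(s >> i) & 1 for i in range(n)]
                   for s in range(2 ** n)], float)
logp = states @ b + 0.5 * np.einsum('si,ij,sj->s', states, W, states)
p = np.exp(logp); p /= p.sum()
exact = p @ states                    # exact P(z_i = 1)

# Gibbs sampling: each unit's stochastic update matches its conditional
z = np.zeros(n)
counts = np.zeros(n)
n_sweeps, burn = 30000, 1000
for sweep in range(n_sweeps):
    for i in range(n):
        z[i] = rng.random() < sigmoid(b[i] + W[i] @ z)
    if sweep >= burn:
        counts += z
empirical = counts / (n_sweeps - burn)
```

The empirical firing probabilities converge to the exact marginals of the target distribution, which is the sense in which stochastic network activity "implements" probabilistic inference.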

  17. Promotion of Tumor-Initiating Cells in Primary and Recurrent Breast Tumors

    DTIC Science & Technology

    2013-07-01

    regulation of expression of genes which confer stemness. We hypothesize that inhibition of IKK/NF-κB will reduce or eliminate breast cancer TICs...Merkhofer et al., 2010). Baldwin, Albert S. W81XWH-12-1-0176 8 --Demonstrated that NF-κB is preferentially activated in breast cancer stem ...Breast cancer stem cells, cytokine networks, and the tumor microenvironment. J Clin Invest. 2011 Oct;121(10):3804-9. doi: 10.1172/JCI57099. Epub

  18. Recurrent network dynamics reconciles visual motion segmentation and integration.

    PubMed

    Medathati, N V Kartheek; Rankin, James; Meso, Andrew I; Kornprobst, Pierre; Masson, Guillaume S

    2017-09-12

    In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related either to motion integration, segmentation or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint based on a linear-nonlinear feed-forward cascade does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime the network can switch from motion integration to segmentation, thus being able to compute either a single pattern motion or to superpose multiple inputs as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation.
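
The inhibition-dependent switch between integration and segmentation can be caricatured with just two mutually inhibiting rate populations instead of the paper's full ring network. With weak inhibition both populations stay active (superposition/averaging); with strong inhibition the more strongly driven population suppresses the other (winner-take-all). The two-population reduction and all parameter values are hypothetical.

```python
import numpy as np

def steady_state(b, I=(1.2, 1.0), dt=0.05, steps=4000):
    """Two mutually inhibiting populations; b is the inhibition strength."""
    r = np.zeros(2)
    for _ in range(steps):
        drive = np.array([I[0] - b * r[1], I[1] - b * r[0]])
        r += dt * (-r + np.maximum(drive, 0.0))   # rectified rate dynamics
    return r

r_weak = steady_state(b=0.2)   # weak inhibition: both motions represented
r_strong = steady_state(b=2.0) # strong inhibition: winner-take-all
```

For b < 1 the interior fixed point is stable and both units remain active; for b > 1 it becomes a saddle and the unit with the larger input wins, mirroring the switch from transparency/superposition to selection described in the abstract.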

  19. Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling.

    PubMed

    Lu, Wenlian; Zheng, Ren; Chen, Tianping

    2016-03-01

    In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results.
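
The convergence property at the heart of this result (any two trajectories of the same network approach each other) can be sketched with a contractive time-varying RNN. The paper's sampled-data norm conditions are replaced here by a simple spectral-norm bound on the recurrent weights; the network and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W0 = rng.standard_normal((n, n))
W0 *= 0.4 / np.linalg.norm(W0, 2)     # keep the recurrent gain below the leak

def step(x, t, dt=0.01):
    W = W0 * (1.0 + 0.5 * np.sin(t))  # time-varying recurrent weights
    return x + dt * (-x + W @ np.tanh(x))

x1 = rng.standard_normal(n)           # two different initial conditions
x2 = rng.standard_normal(n)
d0 = float(np.linalg.norm(x1 - x2))
t = 0.0
for _ in range(5000):
    x1, x2 = step(x1, t), step(x2, t)
    t += 0.01
d_final = float(np.linalg.norm(x1 - x2))
```

Because ||W(t)|| stays below the leak rate and tanh is 1-Lipschitz, the difference of the two trajectories decays exponentially, which is the outer-synchronization property the sufficient conditions guarantee.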

  20. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity.

    PubMed

    Zhang, Xiaoyu; Ju, Han; Penney, Trevor B; VanDongen, Antonius M J

    2017-01-01

    Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher's discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.
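
The core effect (a network responding more strongly to previously seen inputs after unsupervised Hebbian-style modification) can be shown in a drastically simplified rate model: binary patterns stored by a one-shot Hebbian update, with the recurrent drive serving as a familiarity score. The spiking dynamics, NMDAR plasticity rule, and face inputs of the paper are all replaced by hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5
familiar = rng.choice([-1.0, 1.0], size=(P, N))   # "seen" binary patterns

# Hebbian storage: bidirectional weight change from each exposure
W = familiar.T @ familiar / N
np.fill_diagonal(W, 0.0)

def familiarity(x):
    """Recurrent-drive score: stored patterns excite themselves strongly."""
    return float(x @ W @ x) / N

novel = rng.choice([-1.0, 1.0], size=(P, N))
fam_scores = np.array([familiarity(x) for x in familiar])
nov_scores = np.array([familiarity(x) for x in novel])
```

Every stored pattern scores near 1 while novel random patterns score near 0, so a single threshold separates familiar from novel inputs without supervision.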

  1. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity

    PubMed Central

    2017-01-01

    Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits. PMID:28534043

  2. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.

    PubMed

    Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng

    2016-02-01

    This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.

  3. Microelectrode array recordings of cultured hippocampal networks reveal a simple model for transcription and protein synthesis-dependent plasticity

    PubMed Central

    Arnold, Fiona JL; Hofmann, Frank; Bengtson, C. Peter; Wittmann, Malte; Vanhoutte, Peter; Bading, Hilmar

    2005-01-01

    A simplified cell culture system was developed to study neuronal plasticity. As changes in synaptic strength may alter network activity patterns, we grew hippocampal neurones on a microelectrode array (MEA) and monitored their collective behaviour with 60 electrodes simultaneously. We found that exposure of the network for 15 min to the GABAA receptor antagonist bicuculline induced an increase in synaptic efficacy at excitatory synapses that was associated with an increase in the frequency of miniature AMPA receptor-mediated EPSCs and a change in network activity from uncoordinated firing of neurones (lacking any recognizable pattern) to a highly organized, periodic and synchronous burst pattern. Induction of recurrent synchronous bursting was dependent on NMDA receptor activation and required extracellular signal-regulated kinase (ERK)1/2 signalling and translation of pre-existing mRNAs. Once induced, the burst pattern persisted for several days; its maintenance phase (> 4 h) was dependent on gene transcription taking place in a critical period of 120 min following induction. Thus, cultured hippocampal neurones display a simple, transcription and protein synthesis-dependent form of plasticity. The non-invasive nature of MEA recordings provides a significant advantage over traditional assays for synaptic connectivity (i.e. long-term potentiation in brain slices) and facilitates the search for activity-regulated genes critical for late-phase plasticity. PMID:15618268

  4. Microelectrode array recordings of cultured hippocampal networks reveal a simple model for transcription and protein synthesis-dependent plasticity.

    PubMed

    Arnold, Fiona J L; Hofmann, Frank; Bengtson, C Peter; Wittmann, Malte; Vanhoutte, Peter; Bading, Hilmar

    2005-04-01

    A simplified cell culture system was developed to study neuronal plasticity. As changes in synaptic strength may alter network activity patterns, we grew hippocampal neurones on a microelectrode array (MEA) and monitored their collective behaviour with 60 electrodes simultaneously. We found that exposure of the network for 15 min to the GABA(A) receptor antagonist bicuculline induced an increase in synaptic efficacy at excitatory synapses that was associated with an increase in the frequency of miniature AMPA receptor-mediated EPSCs and a change in network activity from uncoordinated firing of neurones (lacking any recognizable pattern) to a highly organized, periodic and synchronous burst pattern. Induction of recurrent synchronous bursting was dependent on NMDA receptor activation and required extracellular signal-regulated kinase (ERK)1/2 signalling and translation of pre-existing mRNAs. Once induced, the burst pattern persisted for several days; its maintenance phase (> 4 h) was dependent on gene transcription taking place in a critical period of 120 min following induction. Thus, cultured hippocampal neurones display a simple, transcription and protein synthesis-dependent form of plasticity. The non-invasive nature of MEA recordings provides a significant advantage over traditional assays for synaptic connectivity (i.e. long-term potentiation in brain slices) and facilitates the search for activity-regulated genes critical for late-phase plasticity.

  5. Global Synchronization of Multiple Recurrent Neural Networks With Time Delays via Impulsive Interactions.

    PubMed

    Yang, Shaofu; Guo, Zhenyuan; Wang, Jun

    2017-07-01

    In this paper, new results on the global synchronization of multiple recurrent neural networks (NNs) with time delays via impulsive interactions are presented. Impulsive interaction means that a number of NNs communicate with each other at impulse instants only, while they are independent at the remaining time. The communication topology among NNs is not required to be always connected and can switch ON and OFF at different impulse instants. By using the concept of sequential connectivity and the properties of stochastic matrices, a set of sufficient conditions depending on time delays is derived to ascertain global synchronization of multiple continuous-time recurrent NNs. In addition, a counterpart on the global synchronization of multiple discrete-time NNs is also discussed. Finally, two examples are presented to illustrate the results.
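
Impulsive interaction is easy to picture: each network runs its own dynamics, and only at impulse instants do the networks mix their states through a stochastic matrix. In this toy sketch five identical RNNs with a common input mix over a ring topology; the stochastic-matrix contraction drives the disagreement between networks to zero. Topology, weights, and impulse schedule are hypothetical, and no time delays are modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 4                          # number of networks, state dimension
W = rng.standard_normal((n, n))
W *= 0.8 / np.linalg.norm(W, 2)      # shared recurrent weights

# Doubly stochastic mixing matrix: at an impulse instant, each network
# averages its state with its two ring neighbours
A = 0.5 * np.eye(m) + 0.25 * (np.roll(np.eye(m), 1, 0)
                              + np.roll(np.eye(m), -1, 0))

X = rng.standard_normal((m, n))      # row i = state of network i
spread0 = float(np.ptp(X, axis=0).max())

dt, t = 0.01, 0.0
for impulse in range(200):
    for _ in range(50):              # networks evolve independently
        X = X + dt * (-X + np.tanh(X) @ W.T + np.sin(0.3 * t))
        t += dt
    X = A @ X                        # impulsive interaction: mix states
spread = float(np.ptp(X, axis=0).max())
```

Each mixing step shrinks the disagreement by the second-largest eigenvalue of A, so repeated impulses synchronize the ensemble even though the networks never communicate between impulse instants.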

  6. Characterization of dynamical systems under noise using recurrence networks: Application to simulated and EEG data

    NASA Astrophysics Data System (ADS)

    Puthanmadam Subramaniyam, Narayan; Hyttinen, Jari

    2014-10-01

    In this letter, we study the influence of observational noise on recurrence network (RN) measures, the global clustering coefficient (C) and average path length (L) using the Rössler system and propose the application of RN measures to analyze the structural properties of electroencephalographic (EEG) data. We find that for an appropriate recurrence rate (RR>0.02) the influence of noise on C can be minimized while L is independent of RR for increasing levels of noise. Indications of structural complexity were found for healthy EEG, but to a lesser extent than epileptic EEG. Furthermore, C performed better than L in case of epileptic EEG. Our results show that RN measures can provide insights into the structural properties of EEG in normal and pathological states.
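
The RN pipeline used here (embed the time series, threshold pairwise distances at a chosen recurrence rate, then treat the recurrence matrix as a graph adjacency and compute C and L) can be sketched end to end. A logistic-map series stands in for the Rössler system, and the embedding parameters are hypothetical; the clustering and path-length computations are the standard graph measures.

```python
import numpy as np
from collections import deque

# Time series from the logistic map (stand-in for the Rössler system)
T = 300
x = np.empty(T); x[0] = 0.4
for t in range(T - 1):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

# Delay embedding, then recurrence matrix at recurrence rate RR ≈ 0.05
m, tau = 3, 1
V = np.column_stack([x[i:T - (m - 1) * tau + i] for i in range(0, m * tau, tau)])
D = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)
eps = np.quantile(D[np.triu_indices(len(V), 1)], 0.05)
A = (D <= eps) & ~np.eye(len(V), dtype=bool)    # adjacency of the RN

# Global clustering coefficient (transitivity) from the adjacency matrix
A_f = A.astype(float)
triangles = np.trace(A_f @ A_f @ A_f)
k = A_f.sum(axis=1)
triples = (k * (k - 1)).sum()
C = triangles / triples if triples > 0 else 0.0

# Average shortest path length over reachable pairs, via BFS
def bfs_lengths(src):
    dist = np.full(len(A), -1); dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in np.flatnonzero(A[u]):
            if dist[v] < 0:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

lengths = []
for s in range(len(A)):
    d = bfs_lengths(s)
    lengths.extend(d[d > 0])
L = float(np.mean(lengths))
```

Observational noise would be studied by adding noise to x before embedding and tracking how C and L move; as the abstract notes, the sensitivity of the two measures to the recurrence rate differs.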

  7. Nonlinear Motion Tracking by Deep Learning Architecture

    NASA Astrophysics Data System (ADS)

    Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.

    2018-03-01

    In the world of Artificial Intelligence, object motion tracking is one of the major problems. Extensive research has been carried out on tracking people in crowds. This paper presents a unique technique for nonlinear motion tracking in the absence of prior knowledge of the nature of the nonlinear path that the tracked object may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained using real-time recurrent learning. We have tweaked the standard algorithm slightly, accumulating the gradient over a few previous iterations instead of using just the current iteration as is the norm. We show that for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.

  8. Investigation of Back-off Based Interpolation Between Recurrent Neural Network and N-gram Language Models (Author’s Manuscript)

    DTIC Science & Technology

    2016-02-11

    INVESTIGATION OF BACK-OFF BASED INTERPOLATION BETWEEN RECURRENT NEURAL NETWORK AND N-GRAM LANGUAGE MODELS X. Chen, X. Liu, M. J. F. Gales, and P. C...As the generalization patterns of RNNLMs and n-gram LMs are inherently different, RNNLMs are usually combined with n-gram LMs via a fixed...RNNLMs and n-gram LMs as n-gram level changes. In order to fully exploit the detailed n-gram level complementary attributes between the two LMs, a

  9. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array-implementations confirm that the presented network requires lower computational resources.
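
The defining feature of a cellular-automaton neuron is that the membrane is a small discrete state advanced by transition rules rather than an ODE, so it maps onto simple sequential logic. This toy integrate-and-fire state machine (threshold, increments, and tick count all hypothetical) illustrates only that idea, not the ACAN model itself.

```python
# A toy cellular-automaton-style neuron: the membrane is an integer state
# in {0, ..., V_MAX-1} advanced by discrete transition rules.
V_MAX = 16          # number of membrane states (4 bits of state)
spikes = 0
V = 0
trace = []
for tick in range(40):
    V += 5                      # integer increment per input event
    if V >= V_MAX:              # threshold rule: spike and reset
        spikes += 1
        V = 0
    trace.append(V)
```

With a constant input of 5 per tick the neuron fires every fourth tick; in hardware, such rules become a small lookup table clocked by asynchronous input events rather than a global clock.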

  10. Recall Performance for Content-Addressable Memory Using Adiabatic Quantum Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Humble, Travis S.; McCaskey, Alex

    A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy, but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas.
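
The classical baseline the paper maps onto Ising hardware is Hopfield-style recall: store patterns with a Hebbian rule, then let the recurrent dynamics descend the Ising energy from a corrupted cue back to the stored pattern. This sketch uses hypothetical sizes and a plain Hebbian rule (one of several learning rules the paper compares).

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 100, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))   # stored associations

W = patterns.T @ patterns / N                      # Hebbian (Ising) couplings
np.fill_diagonal(W, 0.0)

# Recall: start from a corrupted copy of pattern 0, descend the energy
state = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1.0                                # corrupt 10% of the bits

for _ in range(5):                                 # asynchronous update sweeps
    for i in rng.permutation(N):
        state[i] = 1.0 if float(W[i] @ state) >= 0.0 else -1.0

overlap = float(state @ patterns[0]) / N           # 1.0 means perfect recall
```

Asynchronous updates monotonically decrease the Ising energy, so at low memory load the dynamics settle into the stored pattern; adiabatic quantum optimization targets the same energy minimum by annealing instead of iterated updates.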

  11. Probability and volume of potential postwildfire debris flows in the 2012 High Park Burn Area near Fort Collins, Colorado

    USGS Publications Warehouse

    Verdin, Kristine L.; Dupree, Jean A.; Elliott, John G.

    2012-01-01

    This report presents a preliminary emergency assessment of the debris-flow hazards from drainage basins burned by the 2012 High Park fire near Fort Collins in Larimer County, Colorado. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of debris-flow occurrence and volume of debris flows along the burned area drainage network, and to estimate the same for 44 selected drainage basins along State Highway 14 and the perimeter of the burned area. Input data for the models included topographic parameters, soil characteristics, burn severity, and rainfall totals and intensities for a (1) 2-year-recurrence, 1-hour-duration rainfall (25 millimeters); (2) 10-year-recurrence, 1-hour-duration rainfall (43 millimeters); and (3) 25-year-recurrence, 1-hour-duration rainfall (51 millimeters). Estimated debris-flow probabilities along the drainage network and throughout the drainage basins of interest ranged from 1 to 84 percent in response to the 2-year-recurrence, 1-hour-duration rainfall; from 2 to 95 percent in response to the 10-year-recurrence, 1-hour-duration rainfall; and from 3 to 97 percent in response to the 25-year-recurrence, 1-hour-duration rainfall. Basins and drainage networks with the highest probabilities tended to be those on the eastern edge of the burn area, where soils have relatively high clay contents and gradients are steep. Estimated debris-flow volumes ranged from a low of 1,600 cubic meters to a high of greater than 100,000 cubic meters. Estimated debris-flow volumes increase with basin size and distance along the drainage network, but some smaller drainages were also predicted to produce substantial volumes of material. The predicted probabilities and some of the volumes predicted for the modeled storms indicate a potential for substantial debris-flow impacts on structures, roads, bridges, and culverts located both within and immediately downstream from the burned area. Colorado State Highway 14 is also susceptible to impacts from debris flows.
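
Empirical postwildfire debris-flow probability models of this kind take a logistic-regression form, with basin and storm descriptors entering a linear predictor. The coefficients below are illustrative placeholders only, not the fitted USGS regression values, and the predictor set is a simplified stand-in.

```python
import math

def debris_flow_probability(gradient, clay_frac, burn_frac, rain_mm):
    """Logistic form P = e^x / (1 + e^x) used by empirical postwildfire
    debris-flow models. Coefficients here are ILLUSTRATIVE ONLY."""
    x = (-4.0 + 3.0 * gradient + 5.0 * clay_frac
         + 2.0 * burn_frac + 0.05 * rain_mm)
    return math.exp(x) / (1.0 + math.exp(x))

# Probability rises with storm intensity for a fixed basin
p_2yr = debris_flow_probability(0.4, 0.2, 0.6, 25.0)   # 2-yr, 1-h storm
p_10yr = debris_flow_probability(0.4, 0.2, 0.6, 43.0)  # 10-yr, 1-h storm
p_25yr = debris_flow_probability(0.4, 0.2, 0.6, 51.0)  # 25-yr, 1-h storm
```

Holding the basin fixed and increasing the design-storm rainfall raises the predicted probability, matching the pattern of the three storm scenarios reported above.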

  12. Recurrent myocardial infarction: Mechanisms of free-floating adaptation and autonomic derangement in networked cardiac neural control

    PubMed Central

    Ardell, Jeffrey L.; Shivkumar, Kalyanam; Armour, J. Andrew

    2017-01-01

    The cardiac nervous system continuously controls cardiac function whether or not pathology is present. While myocardial infarction typically has a major and catastrophic impact, population studies have shown that longer-term risk for recurrent myocardial infarction and the related potential for sudden cardiac death depends mainly upon standard atherosclerotic variables and autonomic nervous system maladaptations. Investigative neurocardiology has demonstrated that autonomic control of cardiac function includes local circuit neurons for networked control within the peripheral nervous system. The structural and adaptive characteristics of such networked interactions define the dynamics and the new normal for cardiac control that emerge in the aftermath of recurrent myocardial infarction and/or unstable angina, which may or may not precipitate autonomic derangement. These features are explored here via a mathematical model of cardiac regulation. A main observation is that the control environment during pathology is an extrapolation to a setting outside prior experience. Although global bounds guarantee stability, the resulting closed-loop dynamics exhibited while the network adapts during pathology are aptly described as ‘free-floating’ in order to emphasize their dependence upon details of the network structure. The totality of the results provides a mechanistic rationale that validates the clinical practice of reducing sympathetic efferent neuronal tone while aggressively targeting autonomic derangement in the treatment of ischemic heart disease. PMID:28692680

  13. Fitting of dynamic recurrent neural network models to sensory stimulus-response data.

    PubMed

    Doruk, R Ozgur; Zhang, Kechen

    2018-06-02

    We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response will be a set of neural spike timings (roughly the instants of successive action potential peaks) that carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by using the maximum likelihood estimation method, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation feature of recurrent dynamical neural network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series having a fixed amplitude and frequency but a randomly assigned phase. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms at the end of this text. In addition, to demonstrate the validity of this research, the results are compared with a study involving the same model, nominal parameters, and stimulus structure, and with another study that works on different models.
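
The likelihood machinery described here can be sketched on a reduced problem: a log-linear (Poisson GLM) rate model driven by a phased-cosine stimulus, fitted by ascending the Poisson log-likelihood of binned spike counts. The full paper fits a recurrent network; this single-neuron reduction, and all parameter values, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 20000, 0.01
phase = rng.uniform(0, 2 * np.pi)     # randomly assigned phase, as in the setup
s = np.cos(2 * np.pi * 1.0 * np.arange(T) * dt + phase)

a_true, b_true = 1.2, 1.0
rate = np.exp(a_true * s + b_true)    # firing rate (Hz)
spikes = rng.poisson(rate * dt)       # Poisson spike counts per bin

# Poisson negative log-likelihood: -sum_t [ n_t log(r_t dt) - r_t dt ]
def negloglik(theta):
    a, b = theta
    r = np.exp(a * s + b) * dt
    return float(-(spikes * np.log(r) - r).sum())

# Gradient descent on (a, b); gradient of -LL in a GLM is sum_t x_t (r_t - n_t)
theta = np.array([0.0, 0.0])
for _ in range(1000):
    a, b = theta
    r = np.exp(a * s + b) * dt
    grad = np.array([((r - spikes) * s).sum(), (r - spikes).sum()])
    theta -= 1e-3 * grad

a_hat, b_hat = theta
nll_start, nll_end = negloglik(np.array([0.0, 0.0])), negloglik(theta)
```

The Poisson log-likelihood is concave in (a, b) for this model, so plain gradient descent recovers the generating parameters to within sampling error; larger sample sizes tighten the estimates, the effect the abstract examines.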

  14. Synaptic Scaling in Combination with Many Generic Plasticity Mechanisms Stabilizes Circuit Connectivity

    PubMed Central

    Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Wörgötter, Florentin

    2011-01-01

    Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks. PMID:22203799
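
The stabilizing interaction can be shown with a minimal rate neuron: a Hebbian term that alone causes runaway growth, plus a multiplicative scaling term that pulls the output rate toward a set point. The learning rates, input statistics, and linear neuron are hypothetical simplifications of the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
u = rng.uniform(0.5, 1.5, N)         # fixed presynaptic rates
w = rng.uniform(0.0, 0.02, N)        # initial synaptic weights
r_target = 5.0                       # set point enforced by scaling

eta, gamma, dt = 1e-5, 5e-3, 0.1
rates = []
for _ in range(20000):
    r = float(w @ u)                 # postsynaptic rate (linear neuron)
    dw = eta * u * r                 # Hebbian term: alone it diverges
    dw = dw + gamma * w * (r_target - r)   # multiplicative synaptic scaling
    w = np.maximum(w + dt * dw, 0.0)
    rates.append(r)
r_final = rates[-1]
```

At equilibrium the scaling term balances the Hebbian drive, so the rate settles slightly above the set point (by eta/gamma times the input power) and the weights remain bounded, illustrating how scaling converts unstable Hebbian growth into a stable input-dependent weight pattern.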

  15. Cellular mechanisms underlying spatiotemporal features of cholinergic retinal waves

    PubMed Central

    Ford, Kevin J.; Félix, Aude L.; Feller, Marla B.

    2012-01-01

    Prior to vision, a transient network of recurrently connected cholinergic interneurons, called starburst amacrine cells (SACs), generates spontaneous retinal waves. Despite an absence of robust inhibition, cholinergic retinal waves initiate infrequently and propagate within finite boundaries. Here we combine a variety of electrophysiological and imaging techniques and computational modeling to elucidate the mechanisms underlying these spatial and temporal properties of waves in developing mouse retina. Waves initiate via rare spontaneous depolarizations of SACs. Waves propagate through recurrent cholinergic connections between SACs and volume release of ACh as demonstrated using paired recordings and a cell-based ACh optical sensor. Perforated patch recordings and two-photon calcium imaging reveal that individual SACs have slow afterhyperpolarizations that induce SACs to have variable depolarizations during sequential waves. Using a computational model in which the properties of SACs are based on these physiological measurements, we reproduce the slow frequency, speed, and finite size of recorded waves. This study represents a detailed description of the circuit that mediates cholinergic retinal waves and indicates that variability of the interneurons that generate this network activity may be critical for the robustness of waves across different species and stages of development. PMID:22262883

  16. A neural network for intermale aggression to establish social hierarchy.

    PubMed

    Stagkourakis, Stefanos; Spigolon, Giada; Williams, Paul; Protzmann, Jil; Fisone, Gilberto; Broberger, Christian

    2018-06-01

    Intermale aggression is used to establish social rank. Several neuronal populations have been implicated in aggression, but the circuit mechanisms that shape this innate behavior and coordinate its different components (including attack execution and reward) remain elusive. We show that dopamine transporter-expressing neurons in the hypothalamic ventral premammillary nucleus (PMv DAT neurons) organize goal-oriented aggression in male mice. Activation of PMv DAT neurons triggers attack behavior; silencing these neurons interrupts attacks. Regenerative PMv DAT membrane conductances interacting with recurrent and reciprocal excitation explain how a brief trigger can elicit a long-lasting response (hysteresis). PMv DAT projections to the ventrolateral part of the ventromedial hypothalamic and the supramammillary nuclei control attack execution and aggression reward, respectively. Brief manipulation of PMv DAT activity switched the dominance relationship between males, an effect persisting for weeks. These results identify a network structure anchored in PMv DAT neurons that organizes aggressive behavior and, as a consequence, determines intermale hierarchy.

  17. New Insights on Temporal Lobe Epilepsy Based on Plasticity-Related Network Changes and High-Order Statistics.

    PubMed

    Kinjo, Erika Reime; Rodríguez, Pedro Xavier Royero; Dos Santos, Bianca Araújo; Higa, Guilherme Shigueto Vilar; Ferraz, Mariana Sacrini Ayres; Schmeltzer, Christian; Rüdiger, Sten; Kihara, Alexandre Hiroaki

    2018-05-01

    Epilepsy is a disorder of the brain characterized by a predisposition to generate recurrent unprovoked seizures, which involves the reshaping of neuronal circuitry driven by intense neuronal activity. In this review, we first detail the regulation of plasticity-associated genes, such as ARC, GAP-43, PSD-95, synapsin, and synaptophysin. Indeed, reshaping of neuronal connectivity after the primary, acute epileptogenic event increases the excitability of the temporal lobe. We also discuss the heterogeneity of neuronal populations with respect to the number of synaptic connections per neuron, which in the theoretical literature is commonly referred to as the degree. Employing an integrate-and-fire neuronal model, we determined that, in addition to increased synaptic strength, degree correlations may play essential and unsuspected roles in the control of network activity. Indeed, assortativity, the tendency of nodes to connect preferentially to nodes of similar degree, increases the excitability of neural networks. We summarize recent topics in the field and discuss the data in light of newly developed or unusual tools provided by mathematical graph analysis and high-order statistics. With this, we present new foundations for the pathological activity observed in temporal lobe epilepsy.

  18. Bit-serial neuroprocessor architecture

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    2001-01-01

    A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The neuroprocessor architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up-table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a fixed pool of neurons to achieve the successive hidden layers of a recurrent network topology.

  19. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks that are more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  20. Physiological modules for generating discrete and rhythmic movements: action identification by a dynamic recurrent neural network.

    PubMed

    Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana M; Dan, Bernard; McIntyre, Joseph; Cheron, Guy

    2014-01-01

    In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by principal component analysis (PCA) in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete and rhythmic movements may be constructed from three fundamental modules: one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions.

  1. The up and down states of cortical networks

    NASA Astrophysics Data System (ADS)

    Ghorbani, Maryam; Levine, Alex J.; Mehta, Mayank; Bruinsma, Robijn

    2011-03-01

    During slow-wave sleep or anesthesia, cortical networks show collective activity that alternates between active and silent states, known as up and down states. The mechanisms underlying this spontaneous activity, as well as those of sleep and anesthesia themselves, remain unclear. Here, using a mean-field approach, we present a simple model to study the spontaneous activity of a homogeneous cortical network of recurrently connected excitatory and inhibitory neurons. A key new ingredient in this model is that activity-dependent synaptic depression is considered only for the excitatory neurons. We find that, depending on the strength of the synaptic depression and the synaptic efficacies, the phase space contains strange attractors or stable fixed points in active or quiescent regimes. In the strange-attractor phase, the model exhibits oscillations similar to up and down states, with flat and noisy up states. Moreover, we show that increasing the synaptic efficacy of the connections between excitatory neurons changes the characteristics of the up and down states, in agreement with the changes we observe in intracellular recordings of the membrane potential from the entorhinal cortex when varying the depth of anesthesia. We therefore propose that measuring this synaptic efficacy could quantify the depth of anesthesia, which is clinically very important. These findings provide a simple, analytical understanding of spontaneous cortical dynamics.
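    The model described above can be sketched as a small Euler integration: a rate-based excitatory-inhibitory mean-field pair in which short-term depression (variable u) acts only on the excitatory synapses. All parameter values and the sigmoidal gain below are illustrative guesses, not the authors' fitted values.

```python
import numpy as np

def simulate(T=5.0, dt=1e-3, w_ee=8.0, w_ei=6.0, w_ie=6.0, w_ii=1.0,
             tau=0.01, tau_d=0.5, U=0.5, I_e=-1.0, I_i=-3.0):
    """Euler integration of a rate-based E-I mean-field model with
    activity-dependent depression u restricted to excitatory synapses."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))      # sigmoidal gain
    n = int(T / dt)
    r_e, r_i, u = np.zeros(n), np.zeros(n), np.ones(n)
    for k in range(n - 1):
        drive_e = w_ee * u[k] * r_e[k] - w_ei * r_i[k] + I_e
        drive_i = w_ie * r_e[k] - w_ii * r_i[k] + I_i
        r_e[k+1] = r_e[k] + dt * (-r_e[k] + f(drive_e)) / tau
        r_i[k+1] = r_i[k] + dt * (-r_i[k] + f(drive_i)) / tau
        # depression recovers with tau_d and is used up by E activity
        u[k+1] = u[k] + dt * ((1.0 - u[k]) / tau_d - U * u[k] * r_e[k])
    return r_e, r_i, u

r_e, r_i, u = simulate()
```

    Sweeping `w_ee` in such a sketch is the analogue of varying the E-to-E synaptic efficacy that the abstract links to anesthesia depth.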

  2. Reinforced two-step-ahead weight adjustment technique for online training of recurrent neural networks.

    PubMed

    Chang, Li-Chiu; Chen, Pin-An; Chang, Fi-John

    2012-08-01

    A reliable forecast of future events possesses great value. The main purpose of this paper is to propose an innovative learning technique for reinforcing the accuracy of two-step-ahead (2SA) forecasts. The real-time recurrent learning (RTRL) algorithm for recurrent neural networks (RNNs) can effectively model the dynamics of complex processes and has been used successfully in one-step-ahead forecasts for various time series. A reinforced RTRL algorithm for 2SA forecasts using RNNs is proposed in this paper, and its performance is investigated by two famous benchmark time series and a streamflow during flood events in Taiwan. Results demonstrate that the proposed reinforced 2SA RTRL algorithm for RNNs can adequately forecast the benchmark (theoretical) time series, significantly improve the accuracy of flood forecasts, and effectively reduce time-lag effects.
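    The two-step-ahead setup can be illustrated independently of RTRL. In the sketch below, a least-squares AR(2) model stands in for the paper's RTRL-trained RNN, and the 2SA forecast is obtained by iterating the one-step predictor twice; the toy noisy-sine series and split sizes are assumptions.

```python
import numpy as np

# Toy benchmark series: a noisy sine wave.
rng = np.random.default_rng(0)
t = np.arange(300)
x = np.sin(0.2 * t) + 0.05 * rng.standard_normal(300)

# Fit x[k] ~ a*x[k-1] + b*x[k-2] by least squares on a training split.
train = 200
A = np.column_stack([x[1:train-1], x[0:train-2]])
y = x[2:train]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def forecast_2sa(x1, x2):
    """Two-step-ahead forecast by iterating the one-step model twice."""
    one_step = coef[0] * x1 + coef[1] * x2
    return coef[0] * one_step + coef[1] * x1

# Evaluate 2SA forecasts on the held-out tail of the series.
preds = np.array([forecast_2sa(x[k-1], x[k-2]) for k in range(train, 298)])
rmse = np.sqrt(np.mean((preds - x[train+1:299]) ** 2))
```

    The paper's contribution is a reinforced weight-adjustment scheme that improves on exactly this kind of iterated two-step forecast.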

  3. Pattern reverberation in networks of excitable systems with connection delays

    NASA Astrophysics Data System (ADS)

    Lücken, Leonhard; Rosin, David P.; Worlitzer, Vasco M.; Yanchuk, Serhiy

    2017-01-01

    We consider the recurrent pulse-coupled networks of excitable elements with delayed connections, which are inspired by the biological neural networks. If the delays are tuned appropriately, the network can either stay in the steady resting state, or alternatively, exhibit a desired spiking pattern. It is shown that such a network can be used as a pattern-recognition system. More specifically, the application of the correct pattern as an external input to the network leads to a self-sustained reverberation of the encoded pattern. In terms of the coupling structure, the tolerance and the refractory time of the individual systems, we determine the conditions for the uniqueness of the sustained activity, i.e., for the functionality of the network as an unambiguous pattern detector. We point out the relation of the considered systems with cyclic polychronous groups and show how the assumed delay configurations may arise in a self-organized manner when a spike-time dependent plasticity of the connection delays is assumed. As excitable elements, we employ the simplistic coincidence detector models as well as the Hodgkin-Huxley neuron models. Moreover, the system is implemented experimentally on a Field-Programmable Gate Array.

  4. When do correlations increase with firing rates in recurrent networks?

    PubMed Central

    2017-01-01

    A central question in neuroscience is to understand how noisy firing patterns are used to transmit information. Because neural spiking is noisy, spiking patterns are often quantified via pairwise correlations, or the probability that two cells will spike coincidentally, above and beyond their baseline firing rate. One observation frequently made in experiments, is that correlations can increase systematically with firing rate. Theoretical studies have determined that stimulus-dependent correlations that increase with firing rate can have beneficial effects on information coding; however, we still have an incomplete understanding of what circuit mechanisms do, or do not, produce this correlation-firing rate relationship. Here, we studied the relationship between pairwise correlations and firing rates in recurrently coupled excitatory-inhibitory spiking networks with conductance-based synapses. We found that with stronger excitatory coupling, a positive relationship emerged between pairwise correlations and firing rates. To explain these findings, we used linear response theory to predict the full correlation matrix and to decompose correlations in terms of graph motifs. We then used this decomposition to explain why covariation of correlations with firing rate—a relationship previously explained in feedforward networks driven by correlated input—emerges in some recurrent networks but not in others. Furthermore, when correlations covary with firing rate, this relationship is reflected in low-rank structure in the correlation matrix. PMID:28448499

  5. A novel word spotting method based on recurrent neural networks.

    PubMed

    Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst

    2012-02-01

    Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.

  6. Dynamics of feature categorization.

    PubMed

    Martí, Daniel; Rinzel, John

    2013-01-01

    In visual and auditory scenes, we are able to identify shared features among sensory objects and group them according to their similarity. This grouping is preattentive and fast and is thought of as an elementary form of categorization by which objects sharing similar features are clustered in some abstract perceptual space. It is unclear what neuronal mechanisms underlie this fast categorization. Here we propose a neuromechanistic model of fast feature categorization based on the framework of continuous attractor networks. The mechanism for category formation does not rely on learning and is based on biologically plausible assumptions, for example, the existence of populations of neurons tuned to feature values, feature-specific interactions, and subthreshold-evoked responses upon the presentation of single objects. When the network is presented with a sequence of stimuli characterized by some feature, the network sums the evoked responses and provides a running estimate of the distribution of features in the input stream. If the distribution of features is structured into different components or peaks (i.e., is multimodal), recurrent excitation amplifies the response of activated neurons, and categories are singled out as emerging localized patterns of elevated neuronal activity (bumps), centered at the centroid of each cluster. The emergence of bump states through sequential, subthreshold activation and the dependence on input statistics is a novel application of attractor networks. We show that the extraction and representation of multiple categories are facilitated by the rich attractor structure of the network, which can sustain multiple stable activity patterns for a robust range of connectivity parameters compatible with cortical physiology.

  7. Seizures beget seizures in temporal lobe epilepsies: the boomerang effects of newly formed aberrant kainatergic synapses.

    PubMed

    Ben-Ari, Yehezkel; Crepel, Valérie; Represa, Alfonso

    2008-01-01

    Do temporal lobe epilepsy (TLE) seizures in adults promote further seizures? Clinical and experimental data suggest that new synapses are formed after an initial episode of status epilepticus, however their contribution to the transformation of a naive network to an epileptogenic one has been debated. Recent experimental data show that newly formed aberrant excitatory synapses on the granule cells of the fascia dentate operate by means of kainate receptor-operated signals that are not present on naive granule cells. Therefore, genuine epileptic networks rely on signaling cascades that differentiate them from naive networks. Recurrent limbic seizures generated by the activation of kainate receptors and synapses in naive animals lead to the formation of novel synapses that facilitate the emergence of further seizures. This negative, vicious cycle illustrates the central role of reactive plasticity in neurological disorders.

  8. High activity iodine 125 endocurietherapy for recurrent skull base tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, P.P.; Good, R.R.; Leibrock, L.G.

    1988-04-15

    Experience with endocurietherapy of skull base tumors is reviewed. We present our cases of recurrent pituitary hemangiopericytoma, radiation-induced recurrent meningioma, recurrent clival chordoma, recurrent nasopharyngeal cancer involving the cavernous sinus, and recurrent parotid carcinoma of the skull base, all of which were successfully retreated with high-activity iodine-125 (I-125) permanent implantation. 76 references.

  9. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. However, recently, recognition systems have dealt with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is, thus, avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach highly improves performance.

  10. A nonlinear dynamical system for combustion instability in a pulse model combustor

    NASA Astrophysics Data System (ADS)

    Takagi, Kazushi; Gotoda, Hiroshi

    2016-11-01

    We theoretically and numerically study the bifurcation phenomena of a nonlinear dynamical system describing combustion instability in a pulse model combustor, on the basis of dynamical system theory and complex network theory. As the characteristic flow time decreases, the dynamic behavior of the pressure fluctuations undergoes a significant transition from a steady state to deterministic chaos via the period-doubling cascade known as the Feigenbaum scenario. The recurrence plot and recurrence network analyses we adopt in this study can quantify significant changes in the dynamic behavior of the combustion instability that cannot be captured in a bifurcation diagram.
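    The Feigenbaum period-doubling cascade named above can be illustrated with the textbook logistic map, a generic stand-in rather than the combustor model itself:

```python
import numpy as np

def attractor(r, n_transient=1000, n_keep=64):
    """Distinct values visited by x -> r*x*(1-x) after transients die out."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return sorted(set(orbit))

p1 = attractor(2.8)   # before the cascade: a single fixed point
p2 = attractor(3.2)   # after the first doubling: a period-2 orbit
p4 = attractor(3.5)   # after the second doubling: a period-4 orbit
```

    Successive doublings accumulate until chaos onsets near r = 3.5699, the scenario the combustor model follows as the characteristic flow time decreases.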

  11. INDIRECT INTELLIGENT SLIDING MODE CONTROL OF A SHAPE MEMORY ALLOY ACTUATED FLEXIBLE BEAM USING HYSTERETIC RECURRENT NEURAL NETWORKS.

    PubMed

    Hannen, Jennifer C; Crews, John H; Buckner, Gregory D

    2012-08-01

    This paper introduces an indirect intelligent sliding mode controller (IISMC) for shape memory alloy (SMA) actuators, specifically a flexible beam deflected by a single offset SMA tendon. The controller manipulates applied voltage, which alters SMA tendon temperature to track reference bending angles. A hysteretic recurrent neural network (HRNN) captures the nonlinear, hysteretic relationship between SMA temperature and bending angle. The variable structure control strategy provides robustness to model uncertainties and parameter variations, while effectively compensating for system nonlinearities, achieving superior tracking compared to an optimized PI controller.

  12. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    NASA Astrophysics Data System (ADS)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As an input to the classifier we use multidimensional signals of various sensors that are installed on the checkpoint. Obtained results demonstrate that the previous approach to handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by an automatic RNN training on an appropriately labelled data.

  13. Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.

    PubMed

    Okamoto, Hiroshi

    2016-08-01

    Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known.
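    A minimal discrete stand-in for this idea (the paper uses continuous attractor dynamics; the binary threshold rule, toy graph, and seed set below are illustrative assumptions): weights are the adjacency matrix, and iterating from an imperfect seed set restores the full community.

```python
import numpy as np

# Toy network: two 5-cliques joined by a single bridge edge.
n = 10
A = np.zeros((n, n), dtype=int)
for clique in (range(0, 5), range(5, 10)):
    for i in clique:
        for j in clique:
            if i != j:
                A[i, j] = 1
A[4, 9] = A[9, 4] = 1                      # bridge edge

# Imperfect seed: three true members of the first clique, one false member.
x = np.zeros(n, dtype=int)
x[[0, 1, 2, 5]] = 1

for _ in range(10):                        # iterate to a fixed point
    x_new = (A @ x >= 2).astype(int)       # keep nodes with >= 2 active neighbors
    if np.array_equal(x_new, x):
        break
    x = x_new

community = np.flatnonzero(x)              # -> exactly nodes 0..4
```

    The dynamics prune the false member and recruit the missing true members, the pattern-restoration behavior the abstract describes.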

  14. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with hidden Markov models (HMMs) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural networks model the connections between biological neurons to perform a wide range of complex and often non-linear computations. This work compares HMM, multilayer perceptron (MLP), and recurrent neural network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data, while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs, given sufficient training data.
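    The alternating renewal model of spectrum occupancy used above to generate simulated data can be sketched as follows; the busy/idle mean durations and slot count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def occupancy_trace(n_slots, mean_busy=5.0, mean_idle=10.0):
    """0/1 channel occupancy per unit time slot of an alternating renewal
    process with exponentially distributed busy and idle durations."""
    intervals = []
    t, state = 0.0, 0                      # start idle
    while t < n_slots:
        dur = rng.exponential(mean_busy if state else mean_idle)
        intervals.append((t, t + dur, state))
        t += dur
        state ^= 1                         # alternate busy/idle
    slots = np.zeros(n_slots, dtype=int)
    for lo, hi, s in intervals:
        if s:                              # mark busy slots
            slots[int(np.ceil(lo)):min(int(np.ceil(hi)), n_slots)] = 1
    return slots

trace = occupancy_trace(10_000)
busy_frac = trace.mean()   # expected near mean_busy/(mean_busy+mean_idle) = 1/3
```

    Traces like `trace` are what the HMM, MLP, and RNN predictors would be trained and evaluated on.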

  15. Resonant spatiotemporal learning in large random recurrent networks.

    PubMed

    Daucé, Emmanuel; Quoy, Mathias; Doyon, Bernard

    2002-09-01

    Taking a global analogy with the structure of perceptual biological systems, we present a system composed of two layers of real-valued sigmoidal neurons. The primary layer receives stimulating spatiotemporal signals, and the secondary layer is a fully connected random recurrent network. This secondary layer spontaneously displays complex chaotic dynamics. All connections have a constant time delay. We use for our experiments a Hebbian (covariance) learning rule. This rule slowly modifies the weights under the influence of a periodic stimulus. The effect of learning is twofold: (i) it simplifies the secondary-layer dynamics, which eventually stabilizes to a periodic orbit; and (ii) it connects the secondary layer to the primary layer, and realizes a feedback from the secondary to the primary layer. This feedback signal is added to the incoming signal, and matches it (i.e., the secondary layer performs a one-step prediction of the forthcoming stimulus). After learning, a resonant behavior can be observed: the system resonates with familiar stimuli, which activates a feedback signal. In particular, this resonance allows the recognition and retrieval of partial signals, and dynamic maintenance of the memory of past stimuli. This resonance is highly sensitive to the temporal relationships and to the periodicity of the presented stimuli. When we present stimuli which do not match in time or space, the feedback remains silent. The number of different stimuli for which resonant behavior can be learned is analyzed. As with Hopfield networks, the capacity is proportional to the size of the second, recurrent layer. Moreover, the high capacity displayed allows the implementation of our model on real-time systems interacting with their environment. Such an implementation is reported in the case of a simple behavior-based recognition task on a mobile robot. Finally, we present some functional analogies with biological systems in terms of autonomy and dynamic binding, and present some hypotheses on the computational role of feedback connections.
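    The covariance (Hebbian) rule referred to above, in minimal form: weights change in proportion to the covariance of activity around its running mean. This sketch omits the paper's connection delays and two-layer structure; the network size, learning rate, and periodic stimulus are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, eta = 20, 0.01
W = rng.standard_normal((n, n)) / np.sqrt(n)     # random recurrent weights
pattern = rng.standard_normal(n)                 # fixed spatial stimulus pattern
x = np.zeros(n)
mean = np.zeros(n)                               # running average of activity

for t in range(500):
    stim = np.sin(2 * np.pi * t / 25) * pattern  # periodic stimulus
    x = np.tanh(W @ x + stim)                    # sigmoidal recurrent update
    mean = 0.99 * mean + 0.01 * x
    # covariance rule: strengthen weights between co-fluctuating units
    W += eta * np.outer(x - mean, x - mean)
```

    Under repeated periodic stimulation, updates of this kind are what drive the chaotic recurrent layer toward the simplified, stimulus-locked dynamics the abstract describes.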

  16. Democratic Population Decisions Result in Robust Policy-Gradient Learning: A Parametric Study with GPU Simulations

    PubMed Central

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-01-01

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task and moreover architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a “non-democratic” mechanism), achieve mediocre learning results at best. In absence of recurrent connections, where all neurons “vote” independently (“democratic”) for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x up to 42x is provided versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated. PMID:21572529

  17. The HER2 Signaling Network in Breast Cancer--Like a Spider in its Web.

    PubMed

    Dittrich, A; Gautrey, H; Browell, D; Tyson-Capper, A

    2014-12-01

    The human epidermal growth factor receptor 2 (HER2) is a major player in the survival and proliferation of tumour cells and is overexpressed in up to 30% of breast cancer cases. A considerable amount of work has been undertaken to unravel the activity and function of HER2 to try and develop effective therapies that impede its action in HER2-positive breast tumours. Research has focused on exploring the HER2-activated phosphoinositide-3-kinase (PI3K)/AKT and rat sarcoma/mitogen-activated protein kinase (RAS/MAPK) pathways for therapies. Despite the advances, cases of drug resistance and recurrence of disease still remain a challenge to overcome. An important aspect of drug resistance is the complexity of the HER2 signaling network. This includes the crosstalk between HER2 and hormone receptors; its function as a transcription factor; the regulation of HER2 by protein-tyrosine phosphatases; and a complex network of positive and negative feedback loops. This review summarises the current knowledge of many different HER2 interactions to illustrate the complexity of the HER2 network, from the transcription of HER2 to the effect of its downstream targets. Exploring novel avenues of HER2 signaling could yield a better understanding of treatment resistance and give rise to new and more effective therapies.

  18. Short-term memory in networks of dissociated cortical neurons.

    PubMed

    Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J

    2013-01-30

    Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether the same memory processes hypothesized to support pattern recognition and short-term memory in vivo also exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with channelrhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications of this platform.

  19. Exact solutions for rate and synchrony in recurrent networks of coincidence detectors.

    PubMed

    Mikula, Shawn; Niebur, Ernst

    2008-11-01

    We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity, with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations.
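
    As an illustration of the general approach (not the paper's derivation), a two-neuron coincidence-detector network can be written as a four-state Markov chain; the stationary distribution then yields mean firing rates and the zero-lag covariance. The transition probabilities below are purely illustrative:

```python
import numpy as np
from itertools import product

# Firing probability of a coincidence detector at the next step: high when
# its partner is currently active (coincident drive), low otherwise
# (spontaneous input only). Illustrative values, not from the paper.
P_COINC, P_SPONT = 0.6, 0.2

states = list(product([0, 1], repeat=2))  # network states (s1, s2)

def fire_prob(partner_active):
    return P_COINC if partner_active else P_SPONT

# Build the 4x4 state-transition matrix T[i, j] = P(next=j | current=i);
# given the current state, the two neurons fire independently.
T = np.zeros((4, 4))
for i, (s1, s2) in enumerate(states):
    p1, p2 = fire_prob(s2), fire_prob(s1)
    for j, (n1, n2) in enumerate(states):
        T[i, j] = (p1 if n1 else 1 - p1) * (p2 if n2 else 1 - p2)

# Stationary distribution by power iteration.
pi = np.full(4, 0.25)
for _ in range(1000):
    pi = pi @ T

rates = np.array([sum(pi[i] * s[k] for i, s in enumerate(states)) for k in range(2)])
cross = pi[3] - rates[0] * rates[1]   # Cov(s1, s2) at zero lag
```

    In this symmetric example the recurrent coupling raises both rates (from the uncoupled baseline of 0.2 up to 1/3), yet the zero-lag covariance vanishes at stationarity, echoing the point above that firing rates and cross-correlations need not move together.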

  20. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    PubMed

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have applied deep bidirectional LSTM recurrent neural networks to the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, consistently improved over a similar method using a traditional, window-based neural network (SPINE-D) on all datasets tested, without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from critical assessment of structure prediction (CASP) techniques and >10,000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-Disorder is available as a web server and as a standalone program at http://sparks-lab.org/server/SPOT-disorder/index.php. j.hanson@griffith.edu.au or yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  1. Time patterns of recurrences and factors predisposing for a higher risk of recurrence of ocular toxoplasmosis.

    PubMed

    Reich, Michael; Ruppenstein, Mira; Becker, Matthias D; Mackensen, Friederike

    2015-04-01

    To ascertain time patterns of recurrences and factors predisposing for a higher risk of recurrence of ocular toxoplasmosis. Retrospective observational case series with follow-up examination. A database of 4,381 patients with uveitis was used. Data from 84 patients with ocular toxoplasmosis (sample group) could be included. Two hundred and eighty active lesions in the first affected eye were detected. The mean number of recurrences per year was 0.29 (standard deviation, 0.24). Median recurrence-free survival time was 2.52 years (95% confidence interval, 2.03-3.02 years). Risk of recurrence was highest in the first year after the most recent episode (26%), implying a decrease with an increasing recurrence-free interval. The risk of recurrence decreased with the duration of disease (P < 0.001). Treatment of the first active lesion influenced the risk of recurrence (P = 0.048). Furthermore, the risk of recurrence was influenced by patient age at the time of the first active lesion (P = 0.021) and the most recent episode (P = 0.002). Secondary antibiotic prophylaxis could be considered 1) during the first year after an active lesion has occurred, especially in the case of a first active lesion of ocular toxoplasmosis, and 2) in older patients, especially if primarily infected with Toxoplasma gondii at an older age.

  2. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    PubMed

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

    We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and fundamental mathematical theory for this purpose. We have employed the recurrent neural network formalism to accurately extract the underlying dynamics present in the time-series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology was first applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we implemented the proposed framework on experimental (in vivo) datasets. Finally, we investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver to understand how the proposed algorithm scales up with network size. Additionally, we ran the proposed algorithm with half the number of time points. The results indicate that halving the number of time points does not significantly affect the accuracy of the proposed methodology, with a maximum deterioration of just over 15% in the worst case.
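
    In the recurrent neural network formalism referred to here, each gene's expression follows a sigmoidal recurrent update, and reverse engineering amounts to fitting the weight matrix to the observed time series. A minimal sketch with a hypothetical 2-gene network; the fitness function is the objective a swarm optimizer such as PSO would minimize, and all parameter values are illustrative:

```python
import numpy as np

def grn_rnn_step(x, W, beta, tau, dt=0.1):
    """One Euler step of the RNN formalism for gene regulation:
    tau_i * dx_i/dt = sigmoid((W x)_i + beta_i) - x_i."""
    drive = 1.0 / (1.0 + np.exp(-(W @ x + beta)))
    return x + dt * (drive - x) / tau

def simulate(x0, W, beta, tau, steps):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(grn_rnn_step(xs[-1], W, beta, tau))
    return np.array(xs)

def fitness(params, data, x0, tau):
    """MSE between simulated and observed expression -- the objective a
    swarm optimizer (e.g. PSO) would minimize over (W, beta)."""
    n = len(x0)
    W, beta = params[:n * n].reshape(n, n), params[n * n:]
    return np.mean((simulate(x0, W, beta, tau, len(data) - 1) - data) ** 2)

# hypothetical 2-gene network: generate "observed" data from known
# parameters; the true parameters should then score zero error
W_true = np.array([[0.0, 2.0], [-2.0, 0.0]])
b_true, tau = np.zeros(2), np.ones(2)
data = simulate([0.2, 0.8], W_true, b_true, tau, 50)
err = fitness(np.concatenate([W_true.ravel(), b_true]), data, data[0], tau)
```

    A swarm optimizer would evaluate this fitness for a population of candidate parameter vectors and move them toward low-error regions.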

  3. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  4. From cognitive networks to seizures: Stimulus evoked dynamics in a coupled cortical network

    NASA Astrophysics Data System (ADS)

    Lee, Jaejin; Ermentrout, Bard; Bodner, Mark

    2013-12-01

    Epilepsy is one of the most common neuropathologies worldwide. Seizures arising in epilepsy or in seizure disorders are generally characterized by uncontrolled spread of excitation and electrical activity to a limited region or even over the entire cortex. While it is generally accepted that abnormal excessive firing and synchronization of neuron populations lead to seizures, little is known about the precise mechanisms underlying human epileptic seizures, the mechanisms of transitions from normal to paroxysmal activity, or how seizures spread. A further complication is that seizures do not occur with a single type of dynamics but in many different phenotypes and genotypes, with a range of patterns, synchronous oscillations, and time courses. The concept of preventing, terminating, or modulating seizures and/or paroxysmal activity through brain stimulation has also received considerable attention. The ability of such stimulation to prevent or modulate pathological activity may depend on identifiable parameters. In this work, firing rate networks with inhibitory and excitatory populations were modeled. Network parameters were chosen to model normal working memory behaviors. Two different models of cognitive activity were developed. The first model consists of a single network corresponding to a local area of the brain. The second incorporates two networks connected through sparser recurrent excitatory connectivity, with transmission delays ranging from approximately 3 ms within local populations to 15 ms between populations residing in different cortical areas. The effect of excitatory stimulation to activate working memory behavior through selective persistent activation of populations is examined in the models, and the conditions and transition mechanisms through which that selective activation breaks down, producing spreading paroxysmal activity and seizure states, are characterized. Specifically, we determine critical parameters and architectural changes that produce the different seizure dynamics in the networks, providing possible mechanisms for seizure generation. Because seizures arise as attractors in a multi-state system, the system may be returned to its baseline state through particular stimulation. The ability of stimulation to terminate seizure dynamics in the local and distributed models is studied. We systematically examine when this may occur and the form of stimulation necessary for the range of seizure dynamics. In both the local and distributed network models, termination is possible for all observed seizure types using stimulation with a particular configuration of spatial and temporal characteristics.

  5. Dynamics of Multistable States during Ongoing and Evoked Cortical Activity

    PubMed Central

    Mazzucato, Luca

    2015-01-01

    Single-trial analyses of ensemble activity in alert animals demonstrate that cortical circuit dynamics evolve through temporal sequences of metastable states. Metastability has been studied for its potential role in sensory coding, memory, and decision-making. Yet very little is known about the network mechanisms responsible for its genesis. It is often assumed that the onset of state sequences is triggered by an external stimulus. Here we show that state sequences can also be observed in the absence of overt sensory stimulation. Analysis of multielectrode recordings from the gustatory cortex of alert rats revealed ongoing sequences of states, where single neurons spontaneously attain several firing rates across different states. This single-neuron multistability represents a challenge to existing spiking network models, where typically each neuron is at most bistable. We present a recurrent spiking network model that accounts for both the spontaneous generation of state sequences and the multistability in single-neuron firing rates. Each state results from the activation of neural clusters with potentiated intracluster connections, with the firing rate in each cluster depending on the number of active clusters. Simulations show that the model's ensemble activity hops among the different states, reproducing the ongoing dynamics observed in the data. When probed with external stimuli, the model predicts the quenching of single-neuron multistability into bistability and the reduction of trial-by-trial variability. Both predictions were confirmed in the data. Together, these results provide a theoretical framework that captures both ongoing and evoked network dynamics in a single mechanistic model. PMID:26019337

  6. A feedback model of figure-ground assignment.

    PubMed

    Domijan, Drazen; Setić, Mia

    2008-05-30

    A computational model is proposed to explain how bottom-up and top-down signals are combined into a unified perception of figure and background. The model is based on the interaction between the ventral and the dorsal stream. The dorsal stream computes saliency based on boundary signals provided by the simple and complex cortical cells. Output from the dorsal stream is projected to the surface network, which serves as a blackboard on which the surface representation is formed. The surface network is a recurrent network that segregates different surfaces by assigning different firing rates to them; the figure is labeled by the maximal firing rate. Computer simulations showed that the model correctly assigns figural status to the surface with a smaller size, a greater contrast, convexity, surroundedness, horizontal-vertical orientation, and a higher spatial frequency content. The simple gradient of activity in the dorsal stream enables the simulation of the newer principles of lower region and top-bottom polarity. The model also explains how exogenous and endogenous attention may reverse the figural assignment. Due to local excitation in the surface network, neural activity at the cued region will spread over the whole surface representation. Therefore, the model implements object-based attentional selection.

  7. Multiple μ-stability of neural networks with unbounded time-varying delays.

    PubMed

    Wang, Lili; Chen, Tianping

    2014-05-01

    In this paper, we are concerned with a class of recurrent neural networks with unbounded time-varying delays. Based on the geometrical configuration of the activation functions, the phase space R^n can be divided into several Φη-type subsets. Accordingly, a new set of regions Ωη is proposed, and rigorous mathematical analysis is provided to derive the existence of an equilibrium point and its local μ-stability in each Ωη. It follows that an n-dimensional neural network of this class can exhibit at least 3^n equilibrium points, 2^n of which are μ-stable. Furthermore, owing to the compatibility property, a set of new conditions is presented to address the dynamics in the remaining 3^n - 2^n subset regions. As direct applications of these results, we obtain criteria for multiple exponential stability, multiple power stability, multiple log-stability, multiple log-log-stability, and so on. In addition, the approach and results can be extended to neural networks with K-level nonlinear activation functions and unbounded time-varying delays, which can store (2K+1)^n equilibrium points, (K+1)^n of which are locally μ-stable. Numerical examples are given to illustrate the effectiveness of our results. Copyright © 2014 Elsevier Ltd. All rights reserved.
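
    The 3^n / 2^n counting has a familiar special case: a single self-excitatory neuron x' = -x + tanh(w x) with w > 1 has three equilibria, two of them stable, so n uncoupled copies give 3^n equilibria with 2^n stable. A quick numerical check (illustrative only, not the paper's construction, which handles delays and general activations):

```python
import numpy as np

def run(x0, w=2.0, dt=0.1, steps=500):
    """Integrate x' = -x + tanh(w * x) componentwise (uncoupled neurons)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + np.tanh(w * x))
    return x

# start a 2-neuron network from a grid of initial states and collect
# the distinct attractors it settles into: expect 2^2 = 4 stable states
attractors = set()
for a in (-1.0, -0.3, 0.3, 1.0):
    for b in (-1.0, -0.3, 0.3, 1.0):
        attractors.add(tuple(np.round(run([a, b]), 3)))
```

    The four attractors are the sign combinations (±x*, ±x*), where x* solves x = tanh(2x); the origin and the mixed equilibria with a zero coordinate are unstable and never reached from generic initial states.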

  8. Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI

    PubMed Central

    Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo

    2011-01-01

    We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of “high” and “low”-firing activity. Depending on the overall excitability, transitions to the “high” state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the “high” state retains a “working memory” of a stimulus until well after its release. In the latter case, “high” states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated “corrupted” “high” states comprising neurons of both excitatory populations. Within a “basin of attraction,” the network dynamics “corrects” such states and re-establishes the prototypical “high” state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons. PMID:22347151

  10. Processing speed in recurrent visual networks correlates with general intelligence.

    PubMed

    Jolij, Jacob; Huisman, Danielle; Scholte, Steven; Hamel, Ronald; Kemner, Chantal; Lamme, Victor A F

    2007-01-08

    Studies on the neural basis of general fluid intelligence strongly suggest that a smarter brain processes information faster. Different brain areas, however, are interconnected by both feedforward and feedback projections. Whether both types of connections or only one of the two types are faster in smarter brains remains unclear. Here we show, by measuring visual evoked potentials during a texture discrimination task, that general fluid intelligence shows a strong correlation with processing speed in recurrent visual networks, while there is no correlation with speed of feedforward connections. The hypothesis that a smarter brain runs faster may need to be refined: a smarter brain's feedback connections run faster.

  11. Stability of discrete time recurrent neural networks and nonlinear optimization problems.

    PubMed

    Singh, Jayant; Barabanov, Nikita

    2016-02-01

    We consider the method of Reduction of Dissipativity Domain to prove global Lyapunov stability of discrete-time recurrent neural networks. The standard and advanced criteria for absolute stability of these essentially nonlinear systems produce rather weak results. The method mentioned above proves to be more powerful. It involves a multi-step procedure with maximization of special nonconvex functions over polytopes at every step. We derive conditions which guarantee the existence of at most one local maximum of such functions over every hyperplane. This nontrivial result is valid for a wide range of neuron transfer functions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Identification of serial number on bank card using recurrent neural network

    NASA Astrophysics Data System (ADS)

    Liu, Li; Huang, Linlin; Xue, Jian

    2018-04-01

    Identification of the serial number on a bank card has many applications. Due to differing number printing modes, complex backgrounds, shape distortion, etc., achieving high identification accuracy is quite challenging. In this paper, we propose a method using the Normalization-Cooperated Gradient Feature (NCGF) and a Recurrent Neural Network (RNN) based on Long Short-Term Memory (LSTM) for serial number identification. The NCGF maps the gradient direction elements of the original image to direction planes, such that the RNN, taking direction planes as input, can recognize numbers more accurately. Taking advantage of NCGF and RNN, we achieve 90% digit-string recognition accuracy.

  13. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    PubMed

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to its different layers, allowing spatial representations to be remembered and accumulated over time. The extended model, or recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.

  14. Analysis of recurrent neural networks for short-term energy load forecasting

    NASA Astrophysics Data System (ADS)

    Di Persio, Luca; Honchar, Oleksandr

    2017-11-01

    Short-term forecasts have recently gained increasing attention because of the rise of competitive electricity markets. Short-term forecasts of future loads are fundamental for building efficient energy management strategies and avoiding energy wastage. Such challenges are difficult to tackle from both a theoretical and an applied point of view, as they require sophisticated methods to manage multidimensional time series related to stochastic phenomena that are often highly interconnected. In the present work we first review novel approaches to energy load forecasting based on recurrent neural networks, focusing our attention on long short-term memory (LSTM) architectures. Such artificial neural networks have been widely applied to problems involving sequential data, e.g., in socio-economic settings, text recognition, and video signals, consistently showing their effectiveness in modelling complex temporal data. Moreover, we consider different novel variations of the basic LSTM, such as the sequence-to-sequence approach and bidirectional LSTMs, aiming to provide effective models for energy load data. Last but not least, we test all the described algorithms on real energy load data, showing not only that deep recurrent networks can be successfully applied to energy load forecasting, but also that this approach can be extended to other problems based on time series prediction.
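
    For reference, the LSTM cell underlying these architectures can be sketched in a few lines of numpy. The weights below are random and untrained; the point is only the gating structure that lets the cell carry information across long load sequences:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input (i), forget (f), output (o) gates and candidate (g).
    W and U hold the four gate projections stacked row-wise."""
    n = h.size
    z = W @ x + U @ h + b                          # stacked pre-activations (4n,)
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * n:(k + 1) * n])) for k in range(3))
    g = np.tanh(z[3 * n:])
    c = f * c + i * g                              # gated memory-cell update
    h = o * np.tanh(c)                             # hidden state for the next step
    return h, c

rng = np.random.default_rng(1)
n_in, n_hid = 3, 5
W = 0.5 * rng.normal(size=(4 * n_hid, n_in))
U = 0.5 * rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

# roll the cell over a short sequence, e.g. successive (normalized) load readings
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):
    h, c = lstm_step(x, h, c, W, U, b)
```

    The additive, gated update of c is what mitigates vanishing gradients; a sequence-to-sequence or bidirectional variant composes this same cell in different read-out topologies.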

  15. Lifelong learning of human actions with deep neural network self-organization.

    PubMed

    Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan

    2017-12-01

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning; rather, they learn a batch of training data with a predefined number of action classes and samples. Thus, there is a need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning, even when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input while avoiding catastrophic interference. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  16. Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons.

    PubMed

    Panzeri, S; Rolls, E T; Battaglia, F; Lavis, R

    2001-11-01

    The speed of processing in the visual cortical areas can be fast, with, for example, the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple-layer networks using a four-stage feedforward network of integrate-and-fire neurons modelled with continuous dynamics, with associative synaptic connections between stages and a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feedforward processing, the contribution of local recurrent feedback was useful and approximately as rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms.

  17. Precise Synaptic Efficacy Alignment Suggests Potentiation Dominated Learning.

    PubMed

    Hartmann, Christoph; Miner, Daniel C; Triesch, Jochen

    2015-01-01

    Recent evidence suggests that parallel synapses from the same axonal branch onto the same dendritic branch have almost identical strength. It has been proposed that this alignment is only possible through learning rules that integrate activity over long time spans. However, learning mechanisms such as spike-timing-dependent plasticity (STDP) are commonly assumed to be temporally local. Here, we propose that the combination of temporally local STDP and a multiplicative synaptic normalization mechanism is sufficient to explain the alignment of parallel synapses. To address this issue, we introduce three increasingly complex models. First, we model the idealized interaction of STDP and synaptic normalization in a single neuron as a simple stochastic process and show analytically that the alignment effect can be described by a so-called Kesten process, from which it follows that synaptic efficacy alignment requires a potentiation-dominated learning regime. We verify these conditions in a single-neuron model with independent spiking activities but more realistic synapses. As expected, we only observe synaptic efficacy alignment for long-term potentiation-biased STDP. Finally, we explore how well the findings transfer to recurrent neural networks, where the learning mechanisms interact with the correlated activity of the network. We find that, due to the self-reinforcing correlations in recurrent circuits under STDP, alignment occurs for both long-term potentiation- and depression-biased STDP, because learning is potentiation dominated in both cases owing to the potentiating events induced by correlated activity. This is in line with recent results demonstrating a dominance of potentiation over depression during waking and normalization during sleep, leading us to predict that individual spine pairs will be more similar after sleep than after sleep deprivation. In conclusion, we show that synaptic normalization in conjunction with coordinated potentiation (in this case, from STDP in the presence of correlated pre- and postsynaptic activity) naturally leads to an alignment of parallel synapses.
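
    The claimed interaction can be reproduced in a toy simulation: two parallel synapses receive identical (potentiation-dominated) additive updates, while multiplicative normalization rescales all synapses on the dendrite to a fixed total. Their difference then shrinks geometrically, as in a Kesten process. All constants below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.array([0.2, 0.8])                 # two parallel synapses, initially dissimilar
others = rng.uniform(0.2, 0.8, size=8)   # other synapses on the same dendrite
target = w.sum() + others.sum()          # total efficacy held fixed by normalization

diffs = []
for _ in range(500):
    # identical pre/post spike pairs -> identical additive potentiation for
    # the parallel pair (potentiation-dominated: updates are mostly positive)
    w += max(rng.normal(0.02, 0.01), 0.0)
    others += max(rng.normal(0.02, 0.01), 0.0)
    # multiplicative normalization rescales every synapse by the same factor
    scale = target / (w.sum() + others.sum())
    w *= scale
    others *= scale
    diffs.append(abs(w[0] - w[1]))
```

    The additive step leaves the difference w[0] - w[1] unchanged, while each normalization multiplies it by a factor below one, so the pair aligns; with depression-dominated updates the shrinking factor disappears and alignment fails, matching the analytical condition above.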

  18. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.

    PubMed

    Anas, Emran Mohammad Abu; Mousavi, Parvin; Abolmaesumi, Purang

    2018-06-01

    Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network-based technique for real-time prostate segmentation during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.
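    The Dice similarity coefficient reported above measures the overlap between a predicted and a reference segmentation mask; a minimal illustration (not the paper's evaluation code, with toy masks standing in for prostate contours):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2-D binary masks: two 4x4 squares that overlap in a 3x3 region.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1     # 16 pixels
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1     # 16 pixels, 9 shared
print(dice_coefficient(a, b))                        # 2*9 / (16+16) = 0.5625
```

    A coefficient of 1 indicates perfect overlap; the paper's reported mean of 93% corresponds to a near-complete match between predicted and expert contours.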

  19. Reservoir computing on the hypersphere

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    Reservoir Computing (RC) refers to a Recurrent Neural Network (RNN) framework, frequently used for sequence learning and time series prediction. The RC system consists of a random fixed-weight RNN (the input-hidden reservoir layer) and a classifier (the hidden-output readout layer). Here, we focus on the sequence learning problem, and we explore a different approach to RC. More specifically, we remove the nonlinear neural activation function, and we consider an orthogonal reservoir acting on normalized states on the unit hypersphere. Surprisingly, our numerical results show that the system’s memory capacity exceeds the dimensionality of the reservoir, which is the upper bound for the typical RC approach based on Echo State Networks (ESNs). We also show how the proposed system can be applied to symmetric cryptography problems, and we include a numerical implementation.
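    The core construction — an orthogonal reservoir acting on states renormalized onto the unit hypersphere, with a linear readout — can be sketched as follows (a simplified illustration with assumed dimensions and input scaling, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                              # reservoir dimension (assumed)
W, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthogonal reservoir matrix
w_in = 0.1 * rng.standard_normal(n)                 # input weights, scaled to balance recurrence

def run_reservoir(inputs):
    """Linear orthogonal reservoir with states renormalized onto the unit hypersphere."""
    x = np.zeros(n)
    states = []
    for u in inputs:
        x = W @ x + w_in * u
        x = x / (np.linalg.norm(x) + 1e-12)         # project back onto the hypersphere
        states.append(x.copy())
    return np.array(states)

# Sequence-learning check: train a ridge-regression readout to reproduce
# a delayed copy of a random input signal from the reservoir states.
u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)
delay = 5
Xd, Y = X[delay:], u[:-delay]
w_out = np.linalg.solve(Xd.T @ Xd + 1e-6 * np.eye(n), Xd.T @ Y)
print(np.corrcoef(Xd @ w_out, Y)[0, 1])             # close to 1: the delayed input is recoverable
```

    Because the orthogonal matrix preserves norms, past inputs are only attenuated by the hypersphere renormalization, which is the intuition behind the unusually long memory the abstract reports.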

  20. Toward a new task assignment and path evolution (TAPE) for missile defense system (MDS) using intelligent adaptive SOM with recurrent neural networks (RNNs).

    PubMed

    Wang, Chi-Hsu; Chen, Chun-Yao; Hung, Kun-Neng

    2015-06-01

    In this paper, a new adaptive self-organizing map (SOM) with recurrent neural network (RNN) controller is proposed for task assignment and path evolution of missile defense system (MDS). We address the problem of N agents (defending missiles) and D targets (incoming missiles) in MDS. A new RNN controller is designed to force an agent (or defending missile) toward a target (or incoming missile), and a monitoring controller is also designed to reduce the error between the RNN controller and an ideal controller. A new SOM with RNN controller is then designed to dispatch agents to their corresponding targets by minimizing total damaging cost. This is actually an important application of the multiagent system. The SOM with RNN controller is the main controller. After task assignment, the weighting factors of our new SOM with RNN controller are activated to dispatch the agents toward their corresponding targets. Using the Lyapunov constraints, the weighting factors for the proposed SOM with RNN controller are updated to guarantee the stability of the path evolution (or planning) system. Simulation results for MDS show that the proposed SOM with RNN controller achieves the lowest average miss distance among the compared techniques.
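    The task-assignment idea — SOM-style competitive learning that dispatches agents to targets — can be illustrated with a much-simplified sketch (winner-take-all only; the paper's RNN controller and Lyapunov-based weight updates are omitted, and all positions and rates are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_targets = 5, 5
targets = rng.uniform(0, 10, (n_targets, 2))   # incoming-missile positions (hypothetical)
W = rng.uniform(0, 10, (n_agents, 2))          # SOM weight vectors, one per agent

# Winner-take-all competitive learning: each presented target pulls the
# closest agent's weight vector toward it (neighborhood radius taken as
# zero for simplicity).
n_epochs = 200
for epoch in range(n_epochs):
    eta = 0.5 * (1.0 - epoch / n_epochs)       # decaying learning rate
    for t in rng.permutation(n_targets):
        winner = int(np.argmin(np.linalg.norm(W - targets[t], axis=1)))
        W[winner] += eta * (targets[t] - W[winner])

# Read out the task assignment: each target is handled by its nearest agent.
assignment = [int(np.argmin(np.linalg.norm(W - tg, axis=1))) for tg in targets]
print(assignment)    # note: nothing in this sketch enforces a one-to-one assignment
```

    The full method additionally couples this assignment to the RNN path controller and constrains the weight updates for stability, which this sketch does not attempt.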

  1. A Tradeoff Between Accuracy and Flexibility in a Working Memory Circuit Endowed with Slow Feedback Mechanisms

    PubMed Central

    Pereira, Jacinto; Wang, Xiao-Jing

    2015-01-01

    Recent studies have shown that reverberation underlying mnemonic persistent activity must be slow, to ensure the stability of a working memory system and to give rise to long neural transients capable of accumulation of information over time. Is a slower underlying process always better? To address this question, we investigated three slow biophysical mechanisms that are activity-dependent and prominently present in the prefrontal cortex: depolarization-induced suppression of inhibition (DSI), calcium-dependent nonspecific cationic current (ICAN), and short-term facilitation. Using a spiking network model for spatial working memory, we found that these processes enhance memory accuracy by counteracting noise-induced drifts, heterogeneity-induced biases, and distractors. Furthermore, the incorporation of DSI and ICAN enlarges the range of the network's parameter values required for working memory function. However, when a progressively slower process dominates the network, it becomes increasingly more difficult to erase a memory trace. We demonstrate this accuracy–flexibility tradeoff quantitatively and interpret it using a state-space analysis. Our results support the scenario where N-methyl-d-aspartate receptor-dependent recurrent excitation is the workhorse for the maintenance of persistent activity, whereas slow synaptic or cellular processes contribute to the robustness of mnemonic function in a tradeoff that potentially can be adjusted according to behavioral demands. PMID:25253801

  2. Are Equilibrium Multichannel Networks Predictable? the Case of the Indus River, Pakistan

    NASA Astrophysics Data System (ADS)

    Darby, S. E.; Carling, P. A.

    2017-12-01

    Focusing on the specific case of the Indus River, we argue that the equilibrium planform network structure of large, multi-channel rivers is predictable. Between Chashma and Taunsa, Pakistan, the Indus is a 264 km long multiple-channel reach. Remote sensing imagery, including a period of time that encompasses the occurrence of major floods in 2007 and 2010, shows that the Indus has a minimum of two and a maximum of nine channels, with on average four active channels during the dry season and five during the monsoon. We show that the network structure, if not the detailed planform, remains stable, even for the record 2010 flood (27,100 m3s-1; recurrence interval > 100 years). Bankline recession is negligible for discharges less than a peak annual discharge of 6,000 m3s-1 (≈80% of mean annual flow). The Maximum Flow Efficiency (MFE) principle demonstrates that the channel network is insensitive to the monsoon floods, which typically peak at 13,200 m3s-1. Rather, the network is in near-equilibrium with the mean annual flood (7,530 m3s-1). The MFE principle indicates that stable networks have three to four channels; thus the observed stability in the number of active channels accords with the presence of a near-equilibrium reach-scale channel network. Insensitivity to the annual hydrological cycle demonstrates that the time-scale for network adjustment is much longer than the time-scale of the monsoon hydrograph, with the annual excess water being stored on floodplains, rather than being conveyed in an enlarged channel network. The analysis explains the lack of significant channel adjustment following the largest flood in 40 years and the extensive Indus flooding experienced on an annual basis, with its substantial impacts on the populace and agricultural production.

  3. Recurrent neural network approach to quantum signal: coherent state restoration for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Lu, Weizhao; Huang, Chunhui; Hou, Kun; Shi, Liting; Zhao, Huihui; Li, Zhengmei; Qiu, Jianfeng

    2018-05-01

    In continuous-variable quantum key distribution (CV-QKD), a weak signal carrying information is transmitted from Alice to Bob; during this process it is easily influenced by unknown noise, which reduces the signal-to-noise ratio and strongly impacts the reliability and stability of the communication. The recurrent quantum neural network (RQNN) is an artificial neural network model that can perform stochastic filtering without any prior knowledge of the signal and noise. In this paper, a modified RQNN algorithm with an expectation maximization algorithm is proposed to process the signal in CV-QKD, which follows the basic rule of quantum mechanics. After RQNN processing, the noise power decreases by about 15 dB, the coherent signal recognition rate of the RQNN is 96%, and the quantum bit error rate (QBER) drops to 4%, which is 6.9 percentage points lower than the original QBER, while the channel capacity is notably enlarged.

  4. Exact Solutions for Rate and Synchrony in Recurrent Networks of Coincidence Detectors

    PubMed Central

    Mikula, Shawn; Niebur, Ernst

    2009-01-01

    We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations. PMID:18439133
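    The general approach — representing the network's state transitions as a discrete-time finite-state Markov chain and reading exact mean rates and correlations off the stationary distribution — can be sketched for a toy network (the firing rule and probabilities below are assumed for illustration, not taken from the paper):

```python
import numpy as np
from itertools import product

N, k, p_ext = 3, 2, 0.2     # three coincidence detectors, threshold of two coincident inputs

def fire_prob(state, i):
    """Probability that neuron i spikes in the next bin, given the current network state."""
    coincident = sum(state) - state[i]          # active inputs from the other neurons
    return 0.95 if coincident >= k else p_ext   # near-reliable once the threshold is met

states = list(product([0, 1], repeat=N))
P = np.zeros((2 ** N, 2 ** N))
for a, s in enumerate(states):
    probs = [fire_prob(s, i) for i in range(N)]
    for b, t in enumerate(states):
        # Neurons update independently given the current state.
        P[a, b] = np.prod([p if x else 1.0 - p for p, x in zip(probs, t)])

# Stationary distribution: the left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Exact mean rates and a pairwise covariance from the stationary distribution.
rates = np.array([sum(pi[a] * s[i] for a, s in enumerate(states)) for i in range(N)])
cov01 = sum(pi[a] * s[0] * s[1] for a, s in enumerate(states)) - rates[0] * rates[1]
print(rates, cov01)
```

    Because every quantity is computed from the chain's stationary distribution rather than from simulation, the rates and covariances are exact, which is the advantage the abstract emphasizes.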

  5. Recurrent Neural Network Applications for Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
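    The hyperparameter-tuning problem can be illustrated with a minimal ESN sketch (plain random search stands in for the Gaussian-process Bayesian optimization described in the talk; the series, reservoir size, and search range are all assumed):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_esn(n, spectral_radius, seed=0):
    """Random reservoir rescaled to a chosen spectral radius, plus input weights."""
    r = np.random.default_rng(seed)
    W = r.standard_normal((n, n))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W, r.standard_normal(n)

def esn_nrmse(spectral_radius, series, n=100, washout=50):
    """One-step-ahead prediction NRMSE of a tanh ESN on a scalar series."""
    W, w_in = make_esn(n, spectral_radius)
    x, states = np.zeros(n), []
    for u in series[:-1]:
        x = np.tanh(W @ x + w_in * u)
        states.append(x.copy())
    X, Y = np.array(states[washout:]), series[washout + 1:]
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ Y)   # ridge readout
    err = X @ w_out - Y
    return np.sqrt(np.mean(err ** 2)) / np.std(Y)

series = np.sin(0.3 * np.arange(400))              # toy regular series
# Random search over the spectral radius -- a stand-in for the GP-based
# Bayesian optimization used in the talk.
candidates = rng.uniform(0.1, 1.4, size=10)
best = min(candidates, key=lambda r: esn_nrmse(r, series))
print(best, esn_nrmse(best, series))               # the tuned radius yields a low error
```

    Bayesian optimization would replace the random candidate draws with a Gaussian-process surrogate that proposes each new candidate from the errors observed so far, typically reaching a good radius in far fewer evaluations.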

  6. Semantic disturbance in schizophrenia and its relationship to the cognitive neuroscience of attention

    PubMed Central

    Nestor, P.G.; Han, S.D.; Niznikiewicz, M.; Salisbury, D.; Spencer, K.; Shenton, M.E.; McCarley, R.W.

    2010-01-01

    We view schizophrenia as producing a failure of attentional modulation that leads to a breakdown in the selective enhancement or inhibition of semantic/lexical representations whose biological substrata are widely distributed across left (dominant) temporal and frontal lobes. Supporting behavioral evidence includes word recall studies that have pointed to a disturbance in connectivity (associative strength) but not network size (number of associates) in patients with schizophrenia. Paralleling these findings are recent neural network simulation studies of the abnormal connectivity effect in schizophrenia through ‘lesioning’ network connection weights while holding constant network size. Supporting evidence at the level of biology are in vitro studies examining N-methyl-d-aspartate (NMDA) receptor antagonists on recurrent inhibition; simulations in neural populations with realistically modeled biophysical properties show NMDA antagonists produce a schizophrenia-like disturbance in pattern association. We propose a similar failure of NMDA-mediated recurrent inhibition as a candidate biological substrate for attention and semantic anomalies of schizophrenia. PMID:11454433

  7. MicroRNA-mediated networks underlie immune response regulation in papillary thyroid carcinoma

    NASA Astrophysics Data System (ADS)

    Huang, Chen-Tsung; Oyang, Yen-Jen; Huang, Hsuan-Cheng; Juan, Hsueh-Fen

    2014-09-01

    Papillary thyroid carcinoma (PTC) is a common endocrine malignancy with low death rate but increased incidence and recurrence in recent years. MicroRNAs (miRNAs) are small non-coding RNAs with diverse regulatory capacities in eukaryotes and have been frequently implicated in human cancer. Despite current progress, however, a panoramic overview concerning miRNA regulatory networks in PTC is still lacking. Here, we analyzed the expression datasets of PTC from The Cancer Genome Atlas (TCGA) Data Portal and demonstrate for the first time that immune responses are significantly enriched and under specific regulation in the direct miRNA-target network among distinctive PTC variants to different extents. Additionally, considering the unconventional properties of miRNAs, we explore the protein-coding competing endogenous RNA (ceRNA) and the modulatory networks in PTC and unexpectedly disclose concerted regulation of immune responses from these networks. Interestingly, miRNAs from these conventional and unconventional networks share general similarities and differences but tend to be disparate as regulatory activities increase, coordinately tuning the immune responses that in part account for PTC tumor biology. Together, our systematic results uncover the intensive regulation of immune responses underlain by miRNA-mediated networks in PTC, opening up new avenues in the management of thyroid cancer.

  8. Extracting recurrent scenarios from narrative texts using a Bayesian network: application to serious occupational accidents with movement disturbance.

    PubMed

    Abdat, F; Leclercq, S; Cuny, X; Tissot, C

    2014-09-01

    A probabilistic approach has been developed to extract recurrent serious Occupational Accident with Movement Disturbance (OAMD) scenarios from narrative texts within a prevention framework. Relevant data extracted from 143 accounts were initially coded as logical combinations of generic accident factors. A Bayesian Network (BN)-based model was then built for OAMDs using these data and expert knowledge. A data clustering process was subsequently performed to group the OAMDs into similar classes from generic factor occurrence and pattern standpoints. Finally, the Most Probable Explanation (MPE) was evaluated and identified as the associated recurrent scenario for each class. Using this approach, 8 scenarios were extracted to describe 143 OAMDs in the construction and metallurgy sectors. Their recurrent nature is discussed. Probable generic factor combinations provide a fair representation of particularly serious OAMDs, as described in narrative texts. This work represents a real contribution to raising company awareness of the variety of circumstances in which these accidents occur, to progressing in the prevention of such accidents, and to developing an analysis framework dedicated to this kind of accident. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Trainable Gene Regulation Networks with Applications to Drosophila Pattern Formation

    NASA Technical Reports Server (NTRS)

    Mjolsness, Eric

    2000-01-01

    This chapter will very briefly introduce and review some computational experiments in using trainable gene regulation network models to simulate and understand selected episodes in the development of the fruit fly, Drosophila melanogaster. For details the reader is referred to the papers introduced below. It will then introduce a new gene regulation network model which can describe promoter-level substructure in gene regulation. As described in chapter 2, gene regulation may be thought of as a combination of cis-acting regulation by the extended promoter of a gene (including all regulatory sequences) by way of the transcription complex, and of trans-acting regulation by the transcription factor products of other genes. If we simplify the cis-action by using a phenomenological model which can be tuned to data, such as a unit or other small portion of an artificial neural network, then the full transacting interaction between multiple genes during development can be modelled as a larger network which can again be tuned or trained to data. The larger network will in general need to have recurrent (feedback) connections since at least some real gene regulation networks do. This is the basic modeling approach taken, which describes how a set of recurrent neural networks can be used as a modeling language for multiple developmental processes including gene regulation within a single cell, cell-cell communication, and cell division. Such network models have been called "gene circuits", "gene regulation networks", or "genetic regulatory networks", sometimes without distinguishing the models from the actual modeled systems.
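    A minimal sketch of such a recurrent gene-circuit model, in the spirit of the connectionist formulation described above (a two-gene toggle switch with hypothetical parameter values; each gene product relaxes toward a sigmoidal function of its regulatory input):

```python
import numpy as np

# Connectionist "gene circuit" (hypothetical parameters): concentration v_i
# relaxes toward a sigmoid of the regulatory input from the other genes.
T = np.array([[0.0, -2.0],      # gene 1 is repressed by gene 2
              [-2.0, 0.0]])     # gene 2 is repressed by gene 1: a toggle switch
h = np.array([1.0, 1.0])        # basal activation
R, lam, dt = 1.0, 0.5, 0.05     # production rate, decay rate, Euler step

def g(u):
    return 1.0 / (1.0 + np.exp(-4.0 * u))   # sigmoidal regulation function

def simulate(v0, steps=2000):
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        v += dt * (R * g(T @ v + h) - lam * v)   # dv/dt = R g(Tv + h) - lam v
    return v

# The recurrent (mutually repressive) circuit is bistable: the outcome
# depends on the initial expression levels.
print(simulate([1.5, 0.1]))     # gene 1 dominates
print(simulate([0.1, 1.5]))     # gene 2 dominates
```

    Training such a model means adjusting the interaction matrix T and thresholds h so that the simulated expression trajectories match observed spatiotemporal expression data.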

  10. A unified view on weakly correlated recurrent networks

    PubMed Central

    Grytskyy, Dmytro; Tetzlaff, Tom; Diesmann, Markus; Helias, Moritz

    2013-01-01

    The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question how these models relate to each other. In particular it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models (LRM), including the Ornstein–Uhlenbeck process (OUP) as a special case. The distinction between both classes is the location of additive noise in the rate dynamics, which is located on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class dependent differences between covariances in the time and the frequency domain. Finally, we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra. PMID:24151463
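    For the input-noise class (the Ornstein–Uhlenbeck special case), the stationary covariance has a closed form as the solution of a Lyapunov equation; a minimal sketch with toy coupling values (not the paper's derivation):

```python
import numpy as np

# Linear rate model with additive input noise (an Ornstein-Uhlenbeck process):
#   tau dx/dt = (W - 1) x + sigma * xi,  with unit-variance white noise xi.
tau, sigma = 1.0, 0.5
W = np.array([[0.0, 0.4],
              [0.4, 0.0]])                 # toy symmetric coupling
A = (W - np.eye(2)) / tau
D = (sigma / tau) ** 2 * np.eye(2)

# The stationary covariance C solves the Lyapunov equation  A C + C A^T + D = 0,
# rewritten here in Kronecker (vectorized) form and solved directly.
K = np.kron(A, np.eye(2)) + np.kron(np.eye(2), A)
C = np.linalg.solve(K, -D.flatten()).reshape(2, 2)
print(C)

# Cross-check with an Euler-Maruyama simulation of the same process.
rng = np.random.default_rng(4)
dt, n_steps = 0.01, 200_000
x, acc = np.zeros(2), np.zeros((2, 2))
for _ in range(n_steps):
    x = x + dt * (A @ x) + (sigma / tau) * np.sqrt(dt) * rng.standard_normal(2)
    acc += np.outer(x, x)
print(acc / n_steps)                       # approximately equals C
```

    For this symmetric toy coupling the exact solution decomposes over the eigenmodes of A, giving C11 ≈ 0.1488 and C12 ≈ 0.0595, which the simulation reproduces to within sampling error.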

  11. Timescales and Mechanisms of Sigh-Like Bursting and Spiking in Models of Rhythmic Respiratory Neurons.

    PubMed

    Wang, Yangyang; Rubin, Jonathan E

    2017-12-01

    Neural networks generate a variety of rhythmic activity patterns, often involving different timescales. One example arises in the respiratory network in the pre-Bötzinger complex of the mammalian brainstem, which can generate the eupneic rhythm associated with normal respiration as well as recurrent low-frequency, large-amplitude bursts associated with sighing. Two competing hypotheses have been proposed to explain sigh generation: the recruitment of a neuronal population distinct from the eupneic rhythm-generating subpopulation or the reconfiguration of activity within a single population. Here, we consider two recent computational models, one of which represents each of the hypotheses. We use methods of dynamical systems theory, such as fast-slow decomposition, averaging, and bifurcation analysis, to understand the multiple-timescale mechanisms underlying sigh generation in each model. In the course of our analysis, we discover that a third timescale is required to generate sighs in both models. Furthermore, we identify the similarities of the underlying mechanisms in the two models and the aspects in which they differ.

  12. Epidemiology and biology of physical activity and cancer recurrence.

    PubMed

    Friedenreich, Christine M; Shaw, Eileen; Neilson, Heather K; Brenner, Darren R

    2017-10-01

    Physical activity is emerging from epidemiologic research as a lifestyle factor that may improve survival from colorectal, breast, and prostate cancers. However, there is considerably less evidence relating physical activity to cancer recurrence and the biologic mechanisms underlying this association remain unclear. Cancer patients are surviving longer than ever before, and fear of cancer recurrence is an important concern. Herein, we provide an overview of the current epidemiologic evidence relating physical activity to cancer recurrence. We review the biologic mechanisms most commonly researched in the context of physical activity and cancer outcomes, and, using the example of colorectal cancer, we explore hypothesized mechanisms through which physical activity might intervene in the colorectal recurrence pathway. Our review highlights the importance of considering pre-diagnosis and post-diagnosis activity, as well as cancer stage and timing of recurrence, in epidemiologic studies. In addition, more epidemiologic research is needed with cancer recurrence as a consistently defined outcome studied separately from survival. Future mechanistic research using randomized controlled trials, specifically those demonstrating the exercise responsiveness of hypothesized mechanisms in early stages of carcinogenesis, are needed to inform recommendations about when to exercise and to anticipate additive or synergistic effects with other preventive behaviors or treatments.

  13. Uninformative memories will prevail: the storage of correlated representations and its consequences.

    PubMed

    Kropff, Emilio; Treves, Alessandro

    2007-11-01

    Autoassociative networks were proposed in the 1980s as simplified models of memory function in the brain, using recurrent connectivity with Hebbian plasticity to store patterns of neural activity that can be later recalled. This type of computation has been suggested to take place in the CA3 region of the hippocampus and at several levels in the cortex. One of the weaknesses of these models is their apparent inability to store correlated patterns of activity. We show, however, that a small and biologically plausible modification in the "learning rule" (associating to each neuron a plasticity threshold that reflects its popularity) enables the network to handle correlations. We study the stability properties of the resulting memories (in terms of their resistance to the damage of neurons or synapses), finding a novel property of autoassociative networks: not all memories are equally robust, and the most informative are also the most sensitive to damage. We relate these results to category-specific effects in semantic memory patients, where concepts related to "non-living things" are usually more resistant to brain damage than those related to "living things," a phenomenon suspected to be rooted in the correlation between representations of concepts in the cortex.

  14. Genesis of interictal spikes in the CA1: a computational investigation

    PubMed Central

    Ratnadurai-Giridharan, Shivakeshavan; Stefanescu, Roxana A.; Khargonekar, Pramod P.; Carney, Paul R.; Talathi, Sachin S.

    2014-01-01

    Interictal spikes (IISs) are spontaneous, high-amplitude, short-duration (<400 ms) events often observed in electroencephalograms (EEG) of epileptic patients. In vitro analysis of resected mesial temporal lobe tissue from patients with refractory temporal lobe epilepsy has revealed the presence of IIS in the CA1 subfield. In this paper, we develop a biophysically relevant network model of the CA1 subfield and investigate how changes in the network properties influence the susceptibility of CA1 to exhibit an IIS. We present a novel template-based approach to identify conditions under which synchronization of paroxysmal depolarization shift (PDS) events evoked in CA1 pyramidal (Py) cells can trigger an IIS. The results from this analysis are used to identify the synaptic parameters of a minimal network model that is capable of generating PDS in response to afferent synaptic input. The minimal network model parameters are then incorporated into a detailed network model of the CA1 subfield in order to address the following questions: (1) How does the formation of an IIS in the CA1 depend on the degree of sprouting (recurrent connections) between the CA1 Py cells and the fraction of CA3 Schaffer collateral (SC) connections onto the CA1 Py cells? and (2) Is synchronous afferent input from the SC essential for the CA1 to exhibit IIS? Our results suggest that the CA1 subfield with low recurrent connectivity (absence of sprouting), mimicking the topology of a normal brain, has a very low probability of producing an IIS except when a large fraction of CA1 neurons (>80%) receives a barrage of quasi-synchronous afferent input (input occurring within a temporal window of ≤24 ms) via the SC. However, as we increase the recurrent connectivity of the CA1 (Psprout > 40), mimicking sprouting in a pathological CA1 network, the CA1 can exhibit IIS even in the absence of a barrage of quasi-synchronous afferents from the SC (input occurring within a temporal window >80 ms) and with a low fraction of CA1 Py cells (≈30%) receiving SC input. Furthermore, we find that in the presence of Poisson-distributed random input via the SC, the CA1 network is able to generate spontaneous periodic IISs (≈3 Hz) for high degrees of recurrent Py connectivity (Psprout > 70). We investigate the conditions necessary for this phenomenon and find that spontaneous IISs closely depend on the degree of the network's intrinsic excitability. PMID:24478636

  16. Spike-train spectra and network response functions for non-linear integrate-and-fire neurons.

    PubMed

    Richardson, Magnus J E

    2008-11-01

    Reduced models have long been used as a tool for the analysis of the complex activity taking place in neurons and their coupled networks. Recent advances in experimental and theoretical techniques have further demonstrated the usefulness of this approach. Despite the often gross simplification of the underlying biophysical properties, reduced models can still present significant difficulties in their analysis, with the majority of exact and perturbative results available only for the leaky integrate-and-fire model. Here an elementary numerical scheme is demonstrated that can be used to calculate a number of biologically important properties of the general class of non-linear integrate-and-fire models. Exact results for the first-passage-time density and spike-train spectrum are derived, as well as the linear response properties and emergent states of recurrent networks. Given that the exponential integrate-and-fire model has recently been shown to agree closely with the experimentally measured response of pyramidal cells, the methodology presented here promises to provide a convenient tool to facilitate the analysis of cortical-network dynamics.
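    An elementary forward-Euler simulation of the exponential integrate-and-fire model — one member of the non-linear integrate-and-fire class discussed above — can be sketched as follows (illustrative parameter values only, not the paper's scheme):

```python
import numpy as np

# Exponential integrate-and-fire (EIF) neuron, forward-Euler integration
# (illustrative parameter values; mV and ms units).
tau, E_L, V_T, Delta_T = 20.0, -65.0, -50.0, 2.0
V_reset, V_spike, dt = -65.0, 0.0, 0.05

def simulate_eif(I, T=1000.0):
    """Spike count of an EIF neuron driven by a constant current I (in mV)."""
    V, spikes = E_L, 0
    for _ in range(int(T / dt)):
        # Leak plus exponential spike-initiation term plus drive.
        dV = (-(V - E_L) + Delta_T * np.exp((V - V_T) / Delta_T) + I) / tau
        V += dt * dV
        if V >= V_spike:        # the exponential term diverges: register a spike
            V = V_reset
            spikes += 1
    return spikes

for I in (15.0, 20.0, 30.0):
    print(I, simulate_eif(I))   # f-I curve: the spike count grows with drive
```

    With these parameters the rheobase is I = 13 (the point where the subthreshold fixed points vanish), so all three drive values above produce repetitive firing at increasing rates.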

  17. Recurrence of IgA nephropathy after kidney transplantation in steroid continuation versus early steroid-withdrawal regimens: a retrospective analysis of the UNOS/OPTN database.

    PubMed

    Leeaphorn, Napat; Garg, Neetika; Khankin, Eliyahu V; Cardarelli, Francesca; Pavlakis, Martha

    2018-02-01

    In the past 20 years, there has been an increase in use of steroid-withdrawal regimens in kidney transplantation. However, steroid withdrawal may be associated with an increased risk of recurrent IgA nephropathy (IgAN). Using United Network for Organ Sharing/Organ Procurement and Transplantation Network (UNOS/OPTN) data, we analyzed adult patients with end-stage renal disease (ESRD) due to IgAN who received their first kidney transplant between 2000 and 2014. For the primary outcome, we used a competing risk analysis to compare the cumulative incidence of graft loss due to IgAN recurrence between early steroid-withdrawal (ESW) and steroid continuation groups. The secondary outcomes were patient survival and death-censored graft survival (DCGS). A total of 9690 recipients were included (2831 in the ESW group and 6859 in the steroid continuation group). In total, 1238 recipients experienced graft loss, of which 191 (15.43%) were due to IgAN recurrence. In multivariable analysis, steroid use was associated with a decreased risk of recurrence (subdistribution hazard ratio 0.666, 95% CI 0.482-0.921; P = 0.014). Patient survival and DCGS were not different between the two groups. In the USA, ESW in transplant for ESRD due to IgAN is associated with a higher risk of graft loss due to disease recurrence. Future prospective studies are warranted to further address which patients with IgAN would benefit from steroid continuation. © 2017 Steunstichting ESOT.

  18. Effect of dilution in asymmetric recurrent neural networks.

    PubMed

    Folli, Viola; Gosti, Giorgio; Leonetti, Marco; Ruocco, Giancarlo

    2018-04-16

    We study with numerical simulation the possible limit behaviors of synchronous discrete-time deterministic recurrent neural networks composed of N binary neurons as a function of a network's level of dilution and asymmetry. The network dilution measures the fraction of neuron couples that are connected, and the network asymmetry measures to what extent the underlying connectivity matrix is asymmetric. For each given neural network, we study the dynamical evolution of all the different initial conditions, thus characterizing the full dynamical landscape without imposing any learning rule. Because of the deterministic dynamics, each trajectory converges to an attractor, which can be either a fixed point or a limit cycle. These attractors form the set of all the possible limit behaviors of the neural network. For each network we then determine the convergence times, the lengths of the limit cycles, the number of attractors, and the sizes of the attractors' basins. We show that there are two network structures that maximize the number of possible limit behaviors. The first optimal network structure is fully connected and symmetric. By contrast, the second optimal network structure is highly sparse and asymmetric. The latter is similar to what is observed in various biological neuronal circuits. These observations lead us to hypothesize that, independently of any given learning model, an efficient and effective biological network that stores a number of limit behaviors close to its maximum capacity tends to develop a connectivity structure similar to one of the optimal networks we found. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  19. Rhythms of Consciousness: Binocular Rivalry Reveals Large-Scale Oscillatory Network Dynamics Mediating Visual Perception

    PubMed Central

    Doesburg, Sam M.; Green, Jessica J.; McDonald, John J.; Ward, Lawrence M.

    2009-01-01

    Consciousness has been proposed to emerge from functionally integrated large-scale ensembles of gamma-synchronous neural populations that form and dissolve at a frequency in the theta band. We propose that discrete moments of perceptual experience are implemented by transient gamma-band synchronization of relevant cortical regions, and that disintegration and reintegration of these assemblies is time-locked to ongoing theta oscillations. In support of this hypothesis we provide evidence that (1) perceptual switching during binocular rivalry is time-locked to gamma-band synchronizations which recur at a theta rate, indicating that the onset of new conscious percepts coincides with the emergence of a new gamma-synchronous assembly that is locked to an ongoing theta rhythm; (2) localization of the generators of these gamma rhythms reveals recurrent prefrontal and parietal sources; (3) theta modulation of gamma-band synchronization is observed between and within the activated brain regions. These results suggest that ongoing theta-modulated-gamma mechanisms periodically reintegrate a large-scale prefrontal-parietal network critical for perceptual experience. Moreover, activation and network inclusion of inferior temporal cortex and motor cortex uniquely occurs on the cycle immediately preceding responses signaling perceptual switching. This suggests that the essential prefrontal-parietal oscillatory network is expanded to include additional cortical regions relevant to tasks and perceptions furnishing consciousness at that moment, in this case image processing and response initiation, and that these activations occur within a time frame consistent with the notion that conscious processes directly affect behaviour. PMID:19582165

  20. Balanced Cortical Microcircuitry for Spatial Working Memory Based on Corrective Feedback Control

    PubMed Central

    2014-01-01

    A hallmark of working memory is the ability to maintain graded representations of both the spatial location and amplitude of a memorized stimulus. Previous work has identified a neural correlate of spatial working memory in the persistent maintenance of spatially specific patterns of neural activity. How such activity is maintained by neocortical circuits remains unknown. Traditional models of working memory maintain analog representations of either the spatial location or the amplitude of a stimulus, but not both. Furthermore, although most previous models require local excitation and lateral inhibition to maintain spatially localized persistent activity stably, the substrate for lateral inhibitory feedback pathways is unclear. Here, we suggest an alternative model for spatial working memory that is capable of maintaining analog representations of both the spatial location and amplitude of a stimulus, and that does not rely on long-range feedback inhibition. The model consists of a functionally columnar network of recurrently connected excitatory and inhibitory neural populations. When excitation and inhibition are balanced in strength but offset in time, drifts in activity trigger spatially specific negative feedback that corrects memory decay. The resulting networks can temporally integrate inputs at any spatial location, are robust against many commonly considered perturbations in network parameters, and, when implemented in a spiking model, generate irregular neural firing characteristic of that observed experimentally during persistent activity. This work suggests balanced excitatory–inhibitory memory circuits implementing corrective negative feedback as a substrate for spatial working memory. PMID:24828633

  1. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    PubMed

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    The electroencephalogram (EEG) is considered fundamental for the assessment of neural activity in the brain. In the cognitive neuroscience domain, EEG-based assessment is regarded as superior due to its non-invasive ability to detect deep brain structures while exhibiting superior temporal resolution. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information the neurologist requires. The proposed study makes use of wavelet-packet-based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions were considered: normal, pre-ictal, and epileptic EEG recordings. An adaptive Wiener filter was initially applied to remove 50 Hz power-line noise from the raw EEG recordings. The raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. A wavelet packet decomposition using the Haar wavelet with five levels was then introduced, and two entropies, log and norm, were estimated and applied to the REN classifier to perform binary classification. The non-parametric Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. The simulation results showed that wavelet packet log entropy with the REN classifier yielded a classification accuracy of 99.70% for normal vs. pre-ictal, 99.70% for normal vs. epileptic, and 99.85% for pre-ictal vs. epileptic.
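
    As a rough illustration of the feature-extraction step (not the authors' implementation; the Haar filters, the p exponent, and the entropy definitions below are standard textbook forms), a five-level Haar wavelet-packet decomposition with log energy and norm entropies might look like:

```python
import math

def haar_step(x):
    """One Haar analysis step: approximation and detail coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wavelet_packet(x, levels):
    """Full wavelet-packet tree: recursively split every node
    (both approximation and detail), giving 2**levels terminal nodes."""
    nodes = [list(x)]
    for _ in range(levels):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt.extend([a, d])
        nodes = nxt
    return nodes

def log_energy_entropy(coeffs):
    """Log energy entropy: sum of log(c^2) over nonzero coefficients."""
    return sum(math.log(c * c) for c in coeffs if c != 0.0)

def norm_entropy(coeffs, p=1.1):
    """Norm entropy: sum of |c|^p for a fixed exponent 1 <= p < 2."""
    return sum(abs(c) ** p for c in coeffs)
```

    For a 1 s segment of 2^n samples, one feature vector would collect the two entropies over the 32 terminal nodes and feed them to the classifier.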

  2. Are equilibrium multichannel networks predictable? The case of the regulated Indus River, Pakistan

    NASA Astrophysics Data System (ADS)

    Carling, P. A.; Trieu, H.; Hornby, D. D.; Huang, He Qing; Darby, S. E.; Sear, D. A.; Hutton, C.; Hill, C.; Ali, Z.; Ahmed, A.; Iqbal, I.; Hussain, Z.

    2018-02-01

    Arguably, the current planform behaviour of the Indus River is broadly predictable. Between Chashma and Taunsa, Pakistan, the Indus is a 264-km-long multiple-channel reach. Remote sensing imagery, encompassing major floods in 2007 and 2010, shows that the Indus has a minimum of two and a maximum of nine channels, with on average four active channels during the dry season and five during the annual monsoon. Thus, the network structure, if not the detailed planform, remains stable even for the record 2010 flood (27,100 m³ s⁻¹; recurrence interval > 100 years). Bankline recession is negligible for discharges less than a peak annual discharge of 6000 m³ s⁻¹ (approximately 80% of the mean annual flood). The Maximum Flow Efficiency (MFE) principle demonstrates that the channel network is insensitive to the monsoon floods, which typically peak at 13,200 m³ s⁻¹. Rather, the network is in near-equilibrium with the mean annual flood (7530 m³ s⁻¹). The MFE principle indicates that stable networks have three to four channels; thus the observed stability in the number of active channels accords with the presence of a near-equilibrium reach-scale channel network. Insensitivity to the annual hydrological cycle demonstrates that the timescale for network adjustment is much longer than the timescale of the monsoon hydrograph, with the annual excess water being stored on floodplains rather than being conveyed in an enlarged channel network. The analysis explains the lack of significant channel adjustment following the largest flood in 40 years and the extensive Indus flooding experienced on an annual basis, with its substantial impacts on the populace and agricultural production.

  3. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity.

    PubMed

    Wang, Quan; Rothkopf, Constantin A; Triesch, Jochen

    2017-08-01

    The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.

  4. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
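
    The record likens the network's computation to particle filtering, with each spike representing a sample of a hidden world state. A conventional (non-neural) particle filter for an HMM with Poisson spike-count observations, the kind of inference the network is said to approximate, can be sketched as follows (the state count, rates, and transition matrix in the usage example are illustrative, not from the paper):

```python
import math
import random

def particle_filter_hmm(obs, trans, rates, n_particles=500, seed=0):
    """Sampling-based posterior over the hidden states of an HMM
    whose observations are Poisson-distributed spike counts.
    trans[s] is the transition distribution out of state s;
    rates[s] is the Poisson rate of observations in state s."""
    rng = random.Random(seed)
    n_states = len(rates)
    particles = [rng.randrange(n_states) for _ in range(n_particles)]
    posteriors = []
    for y in obs:
        # propagate each particle through the transition matrix
        particles = [rng.choices(range(n_states), weights=trans[s])[0]
                     for s in particles]
        # weight each particle by the Poisson likelihood of the count
        w = [math.exp(-rates[s]) * rates[s] ** y / math.factorial(y)
             for s in particles]
        # resample in proportion to the weights
        particles = rng.choices(particles, weights=w, k=n_particles)
        posteriors.append([particles.count(s) / n_particles
                           for s in range(n_states)])
    return posteriors
```

    For example, with two states of rates 1.0 and 8.0 and sticky transitions [[0.9, 0.1], [0.1, 0.9]], a run of high counts such as [9, 8, 10] concentrates the posterior on the high-rate state. In the paper's neuronal version, the Hebbian rule learns `trans` and the likelihood model rather than taking them as given.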

  5. Statistical Frequency-Dependent Analysis of Trial-to-Trial Variability in Single Time Series by Recurrence Plots.

    PubMed

    Tošić, Tamara; Sellers, Kristin K; Fröhlich, Flavio; Fedotenkova, Mariia; Beim Graben, Peter; Hutt, Axel

    2015-01-01

    For decades, research in neuroscience has supported the hypothesis that brain dynamics exhibits recurrent metastable states connected by transients, which together encode fundamental neural information processing. To understand the system's dynamics it is important to detect such recurrence domains, but it is challenging to extract them from experimental neuroscience datasets due to the large trial-to-trial variability. The proposed methodology extracts recurrent metastable states in univariate time series by transforming datasets into their time-frequency representations and computing recurrence plots based on instantaneous spectral power values in various frequency bands. Additionally, a new statistical inference analysis compares different trial recurrence plots with corresponding surrogates to obtain statistically significant recurrent structures. This combination of methods is validated by applying it to two artificial datasets. In a final study of visually-evoked Local Field Potentials in partially anesthetized ferrets, the methodology is able to reveal recurrence structures of neural responses with trial-to-trial variability. Focusing on different frequency bands, the δ-band activity is much less recurrent than α-band activity. Moreover, α-activity is susceptible to pre-stimuli, while δ-activity is much less sensitive to pre-stimuli. This difference in recurrence structures in different frequency bands indicates diverse underlying information processing steps in the brain.

  6. Statistical Frequency-Dependent Analysis of Trial-to-Trial Variability in Single Time Series by Recurrence Plots

    PubMed Central

    Tošić, Tamara; Sellers, Kristin K.; Fröhlich, Flavio; Fedotenkova, Mariia; beim Graben, Peter; Hutt, Axel

    2016-01-01

    For decades, research in neuroscience has supported the hypothesis that brain dynamics exhibits recurrent metastable states connected by transients, which together encode fundamental neural information processing. To understand the system's dynamics it is important to detect such recurrence domains, but it is challenging to extract them from experimental neuroscience datasets due to the large trial-to-trial variability. The proposed methodology extracts recurrent metastable states in univariate time series by transforming datasets into their time-frequency representations and computing recurrence plots based on instantaneous spectral power values in various frequency bands. Additionally, a new statistical inference analysis compares different trial recurrence plots with corresponding surrogates to obtain statistically significant recurrent structures. This combination of methods is validated by applying it to two artificial datasets. In a final study of visually-evoked Local Field Potentials in partially anesthetized ferrets, the methodology is able to reveal recurrence structures of neural responses with trial-to-trial variability. Focusing on different frequency bands, the δ-band activity is much less recurrent than α-band activity. Moreover, α-activity is susceptible to pre-stimuli, while δ-activity is much less sensitive to pre-stimuli. This difference in recurrence structures in different frequency bands indicates diverse underlying information processing steps in the brain. PMID:26834580
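
    The core construction, a recurrence plot built from instantaneous spectral-power values, reduces to a pairwise comparison of samples. A minimal sketch (the threshold is illustrative, and a real pipeline would first band-pass the signal and compute power in each frequency band):

```python
def recurrence_plot(power, eps):
    """Binary recurrence matrix over a series of instantaneous
    spectral-power values: R[i][j] = 1 when samples i and j lie
    within eps of each other."""
    n = len(power)
    return [[1 if abs(power[i] - power[j]) <= eps else 0
             for j in range(n)]
            for i in range(n)]

def recurrence_rate(R):
    """Fraction of recurrent pairs; higher values mean the signal
    revisits similar power levels more often."""
    n = len(R)
    return sum(sum(row) for row in R) / (n * n)
```

    The statistical inference step described above would then compare `recurrence_rate` (and block structures in R) against the same quantities computed from shuffled or phase-randomized surrogate series.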

  7. Memory Retrieval Time and Memory Capacity of the CA3 Network: Role of Gamma Frequency Oscillations

    ERIC Educational Resources Information Center

    de Almeida, Licurgo; Idiart, Marco; Lisman, John E.

    2007-01-01

    The existence of recurrent synaptic connections in CA3 led to the hypothesis that CA3 is an autoassociative network similar to the Hopfield networks studied by theorists. CA3 undergoes gamma frequency periodic inhibition that prevents a persistent attractor state. This argues against the analogy to Hopfield nets, in which an attractor state can be…

  8. Generalized Recurrent Neural Network accommodating Dynamic Causal Modeling for functional MRI analysis.

    PubMed

    Wang, Yuan; Wang, Yao; Lui, Yvonne W

    2018-05-18

    Dynamic Causal Modeling (DCM) is an advanced biophysical model which explicitly describes the entire process from experimental stimuli to functional magnetic resonance imaging (fMRI) signals via neural activity and cerebral hemodynamics. To conduct a DCM study, one needs to represent the experimental stimuli as a compact vector-valued function of time, which is hard in complex tasks such as book reading and natural movie watching. Deep learning provides the state-of-the-art signal representation solution, encoding complex signals into compact dense vectors while preserving the essence of the original signals. There is growing interest in using Recurrent Neural Networks (RNNs), a major family of deep learning techniques, in fMRI modeling. However, the generic RNNs used in existing studies work as black boxes, making the interpretation of results in a neuroscience context difficult and obscure. In this paper, we propose a new biophysically interpretable RNN built on DCM, DCM-RNN. We generalize the vanilla RNN and show that DCM can be cast faithfully as a special form of the generalized RNN. DCM-RNN uses back propagation for parameter estimation. We believe DCM-RNN is a promising tool for neuroscience. It can fit seamlessly into classical DCM studies. We demonstrate face validity of DCM-RNN in two principal applications of DCM: causal brain architecture hypotheses testing and effective connectivity estimation. We also demonstrate construct validity of DCM-RNN in an attention-visual experiment. Moreover, DCM-RNN enables end-to-end training of DCM and representation learning deep neural networks, extending DCM studies to complex tasks. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Recurrent neural network based virtual detection line

    NASA Astrophysics Data System (ADS)

    Kadikis, Roberts

    2018-04-01

    The paper proposes an efficient method for the detection of moving objects in video. Objects are detected when they cross a virtual detection line. Only the pixels of the detection line are processed, which makes the method computationally efficient. A recurrent neural network processes these pixels. The machine learning approach allows one to train a model that works in varied and changing outdoor conditions. Moreover, the same network can be trained for various detection tasks, as demonstrated by tests on vehicle and people counting. In addition, the paper proposes a method for the semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train the detection method and evaluate its accuracy and efficiency. The method shows accuracy similar to that of alternative efficient methods but provides greater adaptability and usability for different tasks.

  10. Dual-phase evolution in complex adaptive systems

    PubMed Central

    Paperin, Greg; Green, David G.; Sadedin, Suzanne

    2011-01-01

    Understanding the origins of complexity is a key challenge in many sciences. Although networks are known to underlie most systems, showing how they contribute to well-known phenomena remains an issue. Here, we show that recurrent phase transitions in network connectivity underlie emergent phenomena in many systems. We identify properties that are typical of systems in different connectivity phases, as well as characteristics commonly associated with the phase transitions. We synthesize these common features into a common framework, which we term dual-phase evolution (DPE). Using this framework, we review the literature from several disciplines to show that recurrent connectivity phase transitions underlie the complex properties of many biological, physical and human systems. We argue that the DPE framework helps to explain many complex phenomena, including perpetual novelty, modularity, scale-free networks and criticality. Our review concludes with a discussion of the way DPE relates to other frameworks, in particular, self-organized criticality and the adaptive cycle. PMID:21247947

  11. Dual-phase evolution in complex adaptive systems.

    PubMed

    Paperin, Greg; Green, David G; Sadedin, Suzanne

    2011-05-06

    Understanding the origins of complexity is a key challenge in many sciences. Although networks are known to underlie most systems, showing how they contribute to well-known phenomena remains an issue. Here, we show that recurrent phase transitions in network connectivity underlie emergent phenomena in many systems. We identify properties that are typical of systems in different connectivity phases, as well as characteristics commonly associated with the phase transitions. We synthesize these common features into a common framework, which we term dual-phase evolution (DPE). Using this framework, we review the literature from several disciplines to show that recurrent connectivity phase transitions underlie the complex properties of many biological, physical and human systems. We argue that the DPE framework helps to explain many complex phenomena, including perpetual novelty, modularity, scale-free networks and criticality. Our review concludes with a discussion of the way DPE relates to other frameworks, in particular, self-organized criticality and the adaptive cycle.

  12. A statistical framework for evaluating neural networks to predict recurrent events in breast cancer

    NASA Astrophysics Data System (ADS)

    Gorunescu, Florin; Gorunescu, Marina; El-Darzi, Elia; Gorunescu, Smaranda

    2010-07-01

    Breast cancer is the second leading cause of cancer deaths in women today. Sometimes, breast cancer can return after primary treatment. A medical diagnosis of recurrent cancer is often a more challenging task than the initial one. In this paper, we investigate the potential contribution of neural networks (NNs) to support health professionals in diagnosing such events. The NN algorithms are tested and applied to two different datasets. An extensive statistical analysis has been performed to verify our experiments. The results show that a simple network structure for both the multi-layer perceptron and radial basis function can produce equally good results, not all attributes are needed to train these algorithms and, finally, the classification performances of all algorithms are statistically robust. Moreover, we have shown that the best performing algorithm will strongly depend on the features of the datasets, and hence, there is not necessarily a single best classifier.

  13. Electrogram morphology recurrence patterns during atrial fibrillation.

    PubMed

    Ng, Jason; Gordon, David; Passman, Rod S; Knight, Bradley P; Arora, Rishi; Goldberger, Jeffrey J

    2014-11-01

    Traditional mapping of atrial fibrillation (AF) is limited by changing electrogram morphologies and variable cycle lengths. We tested the hypothesis that morphology recurrence plot analysis would identify sites of stable and repeatable electrogram morphology patterns. AF electrograms recorded from left atrial (LA) and right atrial (RA) sites in 19 patients (10 men; mean age 59 ± 10 years) before AF ablation were analyzed. Morphology recurrence plots for each electrogram recording were created by cross-correlation of each automatically detected activation with every other activation in the recording. A recurrence percentage, the percentage of the most common morphology, and the mean cycle length of activations with the most recurrent morphology were computed. The morphology recurrence plots commonly showed checkerboard patterns of alternating high and low cross-correlation values, indicating periodic recurrences in morphologies. The mean recurrence percentage for all sites and all patients was 38 ± 25%. The highest recurrence percentage per patient averaged 83 ± 17%. The highest recurrence percentage was located in the RA in 5 patients and in the LA in 14 patients. Patients with sites of shortest mean cycle length of activations with the most recurrent morphology in the LA and RA had ablation failure rates of 25% and 100%, respectively (hazard ratio 4.95; P = .05). A new technique to characterize electrogram morphology recurrence demonstrated that there is a distribution of sites with high and low repeatability of electrogram morphologies. Sites with rapid activation of highly repetitive morphology patterns may be critical to sustaining AF. Further testing of this approach to map and ablate AF sources is warranted. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
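
    The recurrence-percentage statistic can be approximated with a simple sketch: correlate every detected activation waveform with every other, group activations by similarity, and report the largest group's share. The 0.8 threshold, the greedy grouping, and the zero-lag correlation below are simplifying assumptions, not the paper's settings:

```python
import math

def norm_corr(a, b):
    """Zero-lag normalized (Pearson) correlation between two
    equal-length activation waveforms."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0 and db > 0 else 0.0

def recurrence_percentage(activations, thresh=0.8):
    """Greedily group activations by waveform similarity and return
    the percentage belonging to the most common morphology."""
    groups = []
    for w in activations:
        for g in groups:
            if norm_corr(w, g[0]) >= thresh:
                g.append(w)
                break
        else:
            groups.append([w])
    return 100.0 * max(len(g) for g in groups) / len(activations)
```

    The full matrix of pairwise correlations, plotted as an image, gives the checkerboard-patterned morphology recurrence plot described in the abstract; the mean cycle length of the dominant group follows from the timestamps of its activations.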

  14. Emergence of context-dependent variability across a basal ganglia network.

    PubMed

    Woolley, Sarah C; Rajan, Raghav; Joshua, Mati; Doupe, Allison J

    2014-04-02

    Context dependence is a key feature of cortical-basal ganglia circuit activity, and in songbirds the cortical outflow of a basal ganglia circuit specialized for song, LMAN, shows striking increases in trial-by-trial variability and bursting when birds sing alone rather than to females. To reveal where this variability and its social regulation emerge, we recorded stepwise from corticostriatal (HVC) neurons and their target spiny and pallidal neurons in Area X. We find that corticostriatal and spiny neurons both show precise singing-related firing across both social settings. Pallidal neurons, in contrast, exhibit markedly increased trial-by-trial variation when birds sing alone, created by highly variable pauses in firing. This variability persists even when recurrent inputs from LMAN are ablated. These data indicate that variability and its context sensitivity emerge within the basal ganglia network, suggest a network mechanism for this emergence, and highlight variability generation and regulation as basal ganglia functions. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Emergence of context-dependent variability across a basal ganglia network

    PubMed Central

    Woolley, Sarah C.; Rajan, Raghav; Joshua, Mati; Doupe, Allison J.

    2014-01-01

    Summary Context-dependence is a key feature of cortical-basal ganglia circuit activity, and in songbirds, the cortical outflow of a basal ganglia circuit specialized for song, LMAN, shows striking increases in trial-by-trial variability and bursting when birds sing alone rather than to females. To reveal where this variability and its social regulation emerge, we recorded stepwise from cortico-striatal (HVC) neurons and their target spiny and pallidal neurons in Area X. We find that cortico-striatal and spiny neurons both show precise singing-related firing across both social settings. Pallidal neurons, in contrast, exhibit markedly increased trial-by-trial variation when birds sing alone, created by highly variable pauses in firing. This variability persists even when recurrent inputs from LMAN are ablated. These data indicate that variability and its context-sensitivity emerge within the basal ganglia network, suggest a network mechanism for this emergence, and highlight variability generation and regulation as basal ganglia functions. PMID:24698276

  16. Poisson-Like Spiking in Circuits with Probabilistic Synapses

    PubMed Central

    Moreno-Bote, Rubén

    2014-01-01

    Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705

  17. The neural circuit and synaptic dynamics underlying perceptual decision-making

    NASA Astrophysics Data System (ADS)

    Liu, Feng

    2015-03-01

    Decision-making with several choice options is central to cognition. To elucidate the neural mechanisms of multiple-choice motion discrimination, we built a continuous recurrent network model to represent a local circuit in the lateral intraparietal area (LIP). The network is composed of pyramidal cells and interneurons, which are directionally tuned. All neurons are reciprocally connected, and the synaptic connectivity strength is heterogeneous. Specifically, we assume two types of inhibitory connectivity to pyramidal cells: opposite-feature and similar-feature inhibition. The model accounted for both physiological and behavioral data from monkey experiments. The network is endowed with slow excitatory reverberation, which subserves the buildup and maintenance of persistent neural activity, and predominant feedback inhibition, which underlies the winner-take-all competition and attractor dynamics. The opposite-feature and similar-feature inhibition have different effects on decision-making, and only their combination allows for a categorical choice among 12 alternatives. Together, our work highlights the importance of structured synaptic inhibition in multiple-choice decision-making processes.

  18. Natural Language Video Description using Deep Recurrent Neural Networks

    DTIC Science & Technology

    2015-11-23

    records who says what, but lacks timing information. Movie scripts typically include names of all characters and most movies loosely follow the...and Jürgen Schmidhuber. A novel approach to on-line handwriting recognition based on bidirectional long short-term memory networks. In Proc. 9th Int

  19. Continuous Timescale Long-Short Term Memory Neural Network for Human Intent Understanding

    PubMed Central

    Yu, Zhibin; Moirangthem, Dennis S.; Lee, Minho

    2017-01-01

    Understanding of human intention by observing a series of human actions has been a challenging task. In order to do so, we need to analyze longer sequences of human actions related with intentions and extract the context from the dynamic features. The multiple timescales recurrent neural network (MTRNN) model, which is believed to be a kind of solution, is a useful tool for recording and regenerating a continuous signal for dynamic tasks. However, the conventional MTRNN suffers from the vanishing gradient problem which renders it impossible to be used for longer sequence understanding. To address this problem, we propose a new model named Continuous Timescale Long-Short Term Memory (CTLSTM) in which we inherit the multiple timescales concept into the Long-Short Term Memory (LSTM) recurrent neural network (RNN) that addresses the vanishing gradient problem. We design an additional recurrent connection in the LSTM cell outputs to produce a time-delay in order to capture the slow context. Our experiments show that the proposed model exhibits better context modeling ability and captures the dynamic features on multiple large dataset classification tasks. The results illustrate that the multiple timescales concept enhances the ability of our model to handle longer sequences related with human intentions and hence proving to be more suitable for complex tasks, such as intention recognition. PMID:28878646

  20. Classification of epileptiform and wicket spike of EEG pattern using backpropagation neural network

    NASA Astrophysics Data System (ADS)

    Puspita, Juni Wijayanti; Jaya, Agus Indra; Gunadharma, Suryani

    2017-03-01

    Epilepsy is characterized by recurrent seizures resulting from permanent brain abnormalities. One tool that supports the diagnosis of epilepsy is the electroencephalograph (EEG), which records the brain's electrical activity. Abnormal EEG patterns in epilepsy patients consist of Spike and Sharp waves. Alongside these two waves, there is a normal pattern that is sometimes misinterpreted as epileptiform by the electroencephalographer (EEGer), namely the Wicket spike. The main difference among the three waves lies in their time duration, which is related to frequency. In this study, we propose a method to classify an EEG wave into the Sharp wave, Spike wave or Wicket spike group using a Backpropagation Neural Network, based on the frequency and amplitude of each wave. The results show that the proposed method classifies the three groups of waves with good accuracy.
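    As a rough sketch of the classification setup described above, the snippet below trains a single-layer softmax classifier (a minimal stand-in for the paper's backpropagation network) on synthetic (duration, amplitude) features for the three wave groups. The class means and spreads are invented for illustration and are not clinical EEG values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (duration_ms, amplitude_uv) features; three separable clusters
# standing in for Spike, Sharp wave, and Wicket spike (values are made up).
means = np.array([[40.0, 80.0],    # spike
                  [120.0, 90.0],   # sharp wave
                  [150.0, 40.0]])  # wicket spike
X = np.vstack([rng.normal(m, [8.0, 8.0], size=(50, 2)) for m in means])
y = np.repeat(np.arange(3), 50)

# Standardize features, then fit a softmax classifier by gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
W = np.zeros((2, 3))
b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(2000):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)          # cross-entropy gradient
    W -= 1.0 * (X.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the duration/amplitude clusters are well separated, even this single-layer model classifies them almost perfectly; the paper's hidden-layer network plays the same role on real EEG features.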

  1. Structured Semantic Knowledge Can Emerge Automatically from Predicting Word Sequences in Child-Directed Speech

    PubMed Central

    Huebner, Philip A.; Willits, Jon A.

    2018-01-01

    Previous research has suggested that distributional learning mechanisms may contribute to the acquisition of semantic knowledge. However, distributional learning mechanisms, statistical learning, and contemporary “deep learning” approaches have been criticized as incapable of learning the kind of abstract and structured knowledge that many think is required for the acquisition of semantic knowledge. In this paper, we show that recurrent neural networks, trained on noisy naturalistic speech to children, do in fact learn what appears to be abstract and structured knowledge. We trained two types of recurrent neural networks (a Simple Recurrent Network and a Long Short-Term Memory network) to predict word sequences in a 5-million-word corpus of speech directed to children ages 0–3 years old, and assessed what semantic knowledge they acquired. We found that the learned internal representations encode various abstract grammatical and semantic features that are useful for predicting word sequences. Assessing the organization of semantic knowledge in terms of similarity structure, we found evidence of emergent categorical and hierarchical structure in both models. We found that the Long Short-Term Memory (LSTM) network and the SRN both learn very similar kinds of representations, but the LSTM achieved higher levels of performance on a quantitative evaluation. We also trained a non-recurrent neural network, Skip-gram, on the same input to compare our results to the state of the art in machine learning. We found that Skip-gram achieves performance relatively similar to the LSTM, but represents words more in terms of thematic rather than taxonomic relations, and we provide reasons why this might be the case. Our findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into the emergence of many properties of the developing semantic system. PMID:29520243

  2. Classification of conductance traces with recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Lauritzen, Kasper P.; Magyarkuti, András; Balogh, Zoltán; Halbritter, András; Solomon, Gemma C.

    2018-02-01

    We present a new automated method for structural classification of the traces obtained in break junction experiments. Using recurrent neural networks trained on the traces of minimal cross-sectional area in molecular dynamics simulations, we successfully separate the traces into two classes: point contact or nanowire. This is done without any assumptions about the expected features of each class. The trained neural network is applied to experimental break junction conductance traces, and it separates the classes as well as the previously used experimental methods. The effect of using partial conductance traces is explored, and we show that the method performs equally well using full or partial traces (as long as the trace just prior to breaking is included). When only the initial part of the trace is included, the results are still better than random chance. Finally, we show that the neural network classification method can be used to classify experimental conductance traces without using simulated results for training, but instead training the network on a few representative experimental traces. This offers a tool to recognize some characteristic motifs of the traces, which can be hard to find by simple data selection algorithms.

  3. Extracting functionally feedforward networks from a population of spiking neurons

    PubMed Central

    Vincent, Kathleen; Tauskela, Joseph S.; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABAA receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV/DNQX, consistent with fading feedforward activation. The observed alterations in FFCs provide experimental support for recent theoretical work linking power-law avalanches to the feedforward organization of functional connections in local neuronal circuits. PMID:23091458

  4. Extracting functionally feedforward networks from a population of spiking neurons.

    PubMed

    Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV/DNQX, consistent with fading feedforward activation. The observed alterations in FFCs provide experimental support for recent theoretical work linking power-law avalanches to the feedforward organization of functional connections in local neuronal circuits.
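    The Schur-based measure of feedforwardness used in the two records above can be intuited through matrix non-normality: a strictly feedforward chain is maximally non-normal, while symmetric recurrent connectivity is a normal matrix. As an illustrative proxy (not the authors' exact Schur procedure), the Frobenius norm of the commutator [W, Wᵀ] cleanly separates the two cases:

```python
import numpy as np

def nonnormality_index(W):
    """Frobenius norm of the commutator W @ W.T - W.T @ W.

    Zero for normal matrices (e.g. symmetric recurrent connectivity);
    positive for non-normal matrices such as strictly feedforward chains.
    """
    return np.linalg.norm(W @ W.T - W.T @ W)

n = 10
chain = np.zeros((n, n))
chain[np.arange(1, n), np.arange(n - 1)] = 1.0  # strictly feedforward chain
symmetric = 0.5 * (chain + chain.T)             # fully recurrent, normal

print(nonnormality_index(chain))      # positive: strong feedforward structure
print(nonnormality_index(symmetric))  # zero: no feedforward component
```

Randomly rewiring links of the chain moves it toward the normal case, which mirrors the monotonic decrease in FFC strength under rewiring reported in the abstract.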

  5. Implicitly Defined Neural Networks for Sequence Labeling

    DTIC Science & Technology

    2017-02-13

    popularity of the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and variants such as the Gated Recurrent Unit (GRU) (Cho et al., 2014...bidirectional lstm and other neural network architectures. Neural Networks 18(5):602–610. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term ...hidden states of the network to be coupled together, allowing potential improvement on problems with complex, long-distance dependencies. Initial

  6. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
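    The key ingredient in the abstract above is a connection probability that decays with distance. A minimal sketch, assuming an exponential kernel on a ring of neurons (the exact connectivity profile in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ring of n neurons; connection probability decays with wrap-around
# distance via an illustrative kernel p(d) = exp(-d / sigma).
n, sigma = 200, 10.0
idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d)                  # wrap-around (ring) distance
p = np.exp(-d / sigma)
np.fill_diagonal(p, 0.0)                  # no self-connections
A = (rng.random((n, n)) < p).astype(float)

near = A[(d > 0) & (d <= 5)].mean()       # connection rate, nearby pairs
far = A[d >= 50].mean()                   # connection rate, distant pairs
print(near, far)
```

Nearby pairs connect far more often than distant ones; this spatial structure, rather than uniform random connectivity, is what enables the symmetry-breaking spatiotemporal patterns the abstract describes.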

  7. Recurrent neural network-based modeling of gene regulatory network using elephant swarm water search algorithm.

    PubMed

    Mandal, Sudip; Saha, Goutam; Pal, Rajat Kumar

    2017-08-01

    Correct inference of genetic regulations inside a cell from biological databases such as time-series microarray data is one of the greatest challenges of the post-genomic era for biologists and researchers. The Recurrent Neural Network (RNN) is one of the most popular and simplest approaches to model the dynamics as well as to infer correct dependencies among genes. Inspired by the behavior of social elephants, we propose a new metaheuristic, the Elephant Swarm Water Search Algorithm (ESWSA), to infer Gene Regulatory Networks (GRNs). This algorithm is mainly based on the water search strategy of intelligent and social elephants during drought, utilizing different types of communication techniques. Initially, the algorithm was tested against benchmark small- and medium-scale artificial genetic networks, with and without different noise levels, and its efficiency was observed in terms of parametric error, minimum fitness value, execution time, accuracy of prediction of true regulations, etc. Next, the proposed algorithm was tested against real gene expression data from the Escherichia coli SOS network, and the results were compared with other state-of-the-art optimization methods. The experimental results suggest that ESWSA is very efficient for the GRN inference problem and performs better than other methods in many ways.

  8. Structure simulation into a lamellar supramolecular network and calculation of the metal ions/ligands ratio

    PubMed Central

    2012-01-01

    Background Research interest in phosphonate metal-organic frameworks (MOFs) has increased greatly in the last two decades because of their fascinating and complex topology and structural flexibility. In this paper we present a mathematical model for the ligand/metal ion ratio of an octahedral (Oh) network of cobalt vinylphosphonate (Co(vP)·H2O). Results A recurrence relationship for the ratio between the number of ligands and the number of metal ions in a lamellar octahedral (Oh) network Co(vP)·H2O has been deduced by building the 3D network step by step using the HyperChem 7.52 package. The mathematical relationship has been validated using X-ray analysis, experimental thermogravimetric and elemental analysis data. Conclusions Based on the deduced recurrence relationship, we can conclude, prior to performing X-ray analysis, that when thermogravimetric analysis indicates a ratio between the number of metal ions and the number of ligands of around 1, the 3D network will have a central metal ion that corresponds to a single ligand. This relation is valid for every type of supramolecular network with a divalent central metal ion in Oh coordination, and brings valuable information with low effort and cost. PMID:22932493

  9. Alterations of network synchrony after epileptic seizures: An analysis of post-ictal intracranial recordings in pediatric epilepsy patients.

    PubMed

    Tomlinson, Samuel B; Khambhati, Ankit N; Bermudez, Camilo; Kamens, Rebecca M; Heuer, Gregory G; Porter, Brenda E; Marsh, Eric D

    2018-07-01

    Post-ictal EEG alterations have been identified in studies of intracranial recordings, but the clinical significance of post-ictal EEG activity is undetermined. The purpose of this study was to examine the relationship between peri-ictal EEG activity, surgical outcome, and extent of seizure propagation in a sample of pediatric epilepsy patients. Intracranial EEG recordings were obtained from 19 patients (mean age = 11.4 years, range = 3-20 years) with 57 seizures used for analysis (mean = 3.0 seizures per patient). For each seizure, 3-min segments were extracted from adjacent pre-ictal and post-ictal epochs. To compare physiology of the epileptic network between epochs, we calculated the relative delta power (Δ) using discrete Fourier transformation and constructed functional networks based on broadband connectivity (conn). We investigated differences between the pre-ictal (Δpre, connpre) and post-ictal (Δpost, connpost) segments in focal-network (i.e., confined to seizure onset zone) versus distributed-network (i.e., diffuse ictal propagation) seizures. Distributed-network (DN) seizures exhibited increased post-ictal delta power and global EEG connectivity compared to focal-network (FN) seizures. Following DN seizures, patients with seizure-free outcomes exhibited a 14.7% mean increase in delta power and an 8.3% mean increase in global connectivity compared to pre-ictal baseline, which was dramatically less than the values observed among seizure-persistent patients (29.6% and 47.1%, respectively). Post-ictal differences between DN and FN seizures correlate with post-operative seizure persistence. We hypothesize that post-ictal deactivation of subcortical nuclei recruited during seizure propagation may account for this result while lending insights into mechanisms of post-operative seizure recurrence. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Circular RNA In Invasive and Recurrent Clinical Nonfunctioning Pituitary Adenomas: Expression Profiles and Bioinformatic Analysis.

    PubMed

    Wang, Jianpeng; Wang, Dong; Wan, Dehong; Ma, Qingxia; Liu, Qian; Li, Jiye; Li, Zhaojian; Gao, Yang; Jiang, Guohui; Ma, Leina; Liu, Jia; Li, Chuzhong

    2018-06-14

    The invasion and recurrence of clinical nonfunctioning pituitary adenomas (NFA) often lead to surgical treatment failure. Circular RNAs (circRNAs) are a novel class of RNAs whose 3' and 5' ends are joined together, and they have been shown to play important roles in cancer development. To date, the roles of circRNAs in invasive and recurrent NFA remain unclear. We detected and summarized the circRNA expression pattern in 75 NFA tissues from 10 non-invasive cases and 65 invasive cases, and in 9 paired NFA tumor tissues from 9 recurrent cases, by circRNA microarrays. Accordingly, functional enrichment analysis and pathway analysis were performed, and circRNA-microRNA (miRNA) networks were generated with bioinformatic analysis tools. 5 new invasive NFA samples and 5 non-invasive NFA samples were collected to validate the microarray results. 570 dysregulated circRNAs (Invasive Tumor vs. Non-invasive Tumor) and 10 up-regulated circRNAs (Recurrent tumor Tissue vs. First surgery tumor Tissue) were identified based on the criteria FC > 2, P < 0.05. The parental genes of the circRNAs dysregulated between invasive and non-invasive tumors were found to be enriched in cell adhesion signaling pathways such as Focal adhesion, the Hippo signaling pathway, the PI3K-Akt signaling pathway, and Adherens junction. The circRNA-miRNA network showed that the dysregulated circRNAs may function as miRNA sponges. This is the first study to conduct and comprehensively analyze the circRNA expression profile in invasive and recurrent NFA. Our findings provide evidence for the significance of circRNAs in NFA diagnosis, prognosis and clinical treatment. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Toyota Prius HEV neurocontrol and diagnostics.

    PubMed

    Prokhorov, Danil V

    2008-01-01

    A neural network controller for improved fuel efficiency of the Toyota Prius hybrid electric vehicle is proposed. A new method to detect and mitigate a battery fault is also presented. The approach is based on recurrent neural networks and includes the extended Kalman filter. The proposed approach is quite general and applicable to other control systems.

  12. Speaker-dependent Multipitch Tracking Using Deep Neural Networks

    DTIC Science & Technology

    2015-01-01

    connections through time. Studies have shown that RNNs are good at modeling sequential data like handwriting [12] and speech [26]. We plan to explore RNNs in...Schmidhuber, and S. Fernández, “Unconstrained on-line handwriting recognition with recurrent neural networks,” in Proceedings of NIPS, 2008, pp. 577–584. [13

  13. Drug synergy screen and network modeling in dedifferentiated liposarcoma identifies CDK4 and IGF1R as synergistic drug targets.

    PubMed

    Miller, Martin L; Molinelli, Evan J; Nair, Jayasree S; Sheikh, Tahir; Samy, Rita; Jing, Xiaohong; He, Qin; Korkut, Anil; Crago, Aimee M; Singer, Samuel; Schwartz, Gary K; Sander, Chris

    2013-09-24

    Dedifferentiated liposarcoma (DDLS) is a rare but aggressive cancer with high recurrence and low response rates to targeted therapies. Increasing treatment efficacy may require combinations of targeted agents that counteract the effects of multiple abnormalities. To identify a possible multicomponent therapy, we performed a combinatorial drug screen in a DDLS-derived cell line and identified cyclin-dependent kinase 4 (CDK4) and insulin-like growth factor 1 receptor (IGF1R) as synergistic drug targets. We measured the phosphorylation of multiple proteins and cell viability in response to systematic drug combinations and derived computational models of the signaling network. These models predict that the observed synergy in reducing cell viability with CDK4 and IGF1R inhibitors depends on the activity of the AKT pathway. Experiments confirmed that combined inhibition of CDK4 and IGF1R cooperatively suppresses the activation of proteins within the AKT pathway. Consistent with these findings, synergistic reductions in cell viability were also found when combining CDK4 inhibition with inhibition of either AKT or epidermal growth factor receptor (EGFR), another receptor similar to IGF1R that activates AKT. Thus, network models derived from context-specific proteomic measurements of systematically perturbed cancer cells may reveal cancer-specific signaling mechanisms and aid in the design of effective combination therapies.

  14. Drug Synergy Screen and Network Modeling in Dedifferentiated Liposarcoma Identifies CDK4 and IGF1R as Synergistic Drug Targets

    PubMed Central

    Miller, Martin L.; Molinelli, Evan J.; Nair, Jayasree S.; Sheikh, Tahir; Samy, Rita; Jing, Xiaohong; He, Qin; Korkut, Anil; Crago, Aimee M.; Singer, Samuel; Schwartz, Gary K.; Sander, Chris

    2014-01-01

    Dedifferentiated liposarcoma (DDLS) is a rare but aggressive cancer with high recurrence and low response rates to targeted therapies. Increasing treatment efficacy may require combinations of targeted agents that counteract the effects of multiple abnormalities. To identify a possible multicomponent therapy, we performed a combinatorial drug screen in a DDLS-derived cell line and identified cyclin-dependent kinase 4 (CDK4) and insulin-like growth factor 1 receptor (IGF1R) as synergistic drug targets. We measured the phosphorylation of multiple proteins and cell viability in response to systematic drug combinations and derived computational models of the signaling network. These models predict that the observed synergy in reducing cell viability with CDK4 and IGF1R inhibitors depends on the activity of the AKT pathway. Experiments confirmed that combined inhibition of CDK4 and IGF1R cooperatively suppresses the activation of proteins within the AKT pathway. Consistent with these findings, synergistic reductions in cell viability were also found when combining CDK4 inhibition with inhibition of either AKT or epidermal growth factor receptor (EGFR), another receptor similar to IGF1R that activates AKT. Thus, network models derived from context-specific proteomic measurements of systematically perturbed cancer cells may reveal cancer-specific signaling mechanisms and aid in the design of effective combination therapies. PMID:24065146

  15. Weighted gene co-expression network analysis of gene modules for the prognosis of esophageal cancer.

    PubMed

    Zhang, Cong; Sun, Qian

    2017-06-01

    Esophageal cancer is a common malignant tumor whose pathogenesis and prognostic factors are not fully understood. This study aimed to discover gene clusters that have similar functions and can be used to predict the prognosis of esophageal cancer. The matched microarray and RNA sequencing data of 185 patients with esophageal cancer were downloaded from The Cancer Genome Atlas (TCGA), and gene co-expression networks were built without distinguishing between squamous carcinoma and adenocarcinoma. The results showed that 12 modules were associated with one or more survival variables such as recurrence status, recurrence time, vital status or vital time. Furthermore, survival analysis showed that 5 of the 12 modules were related to progression-free survival (PFS) or overall survival (OS). As the most important module, the midnight blue module with 82 genes was related to PFS independently of patient age, tumor grade, primary treatment success, duration of smoking and tumor histological type. Gene ontology enrichment analysis revealed that "glycoprotein binding" was the top enriched function of midnight blue module genes. Additionally, the blue module was the only gene cluster related to OS. Platelet activating factor receptor (PTAFR) and feline Gardner-Rasheed (FGR) were the top hub genes in both the modeling datasets and the STRING protein interaction database. In conclusion, our study provides novel insights into prognosis-associated genes and screens out candidate biomarkers for esophageal cancer.

  16. Machine learning in sentiment reconstruction of the simulated stock market

    NASA Astrophysics Data System (ADS)

    Goykhman, Mikhail; Teimouri, Ali

    2018-02-01

    In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
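    The core estimation step in the abstract can be sketched in the fully observed case: when the sentiment states are known, the maximum-likelihood transition matrix is just the normalized transition counts (the Baum-Welch algorithm used for true Hidden Markov Models generalizes this by weighting counts with posterior state probabilities). The two-state buy/sell matrix below is an invented example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# True transition matrix of a two-state buy/sell sentiment process
# (illustrative values).
P_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])

# Simulate the Markov chain of sentiment states.
T = 20000
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P_true[states[t - 1]])

# Maximum-likelihood estimate: normalized transition counts.
counts = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P_hat, 2))
```

With 20,000 observations the count-based estimate lands within a few hundredths of the true matrix; the HMM setting replaces the observed states with posteriors inferred from the price series.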

  17. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    NASA Astrophysics Data System (ADS)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition or handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. Not only do we explore low-level combination (feature space combination) but we also explore high-level combination (decoding combination) and mid-level combination (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.

  18. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although these phases may benefit related recognition tasks, it is not easy to detect such temporal segments accurately. An automatic temporal segment detection framework is presented that uses bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, synthesizing local and global temporal-spatial information more efficiently. The framework is evaluated in detail on the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for the problem of temporal segment detection.

  19. Recurrent hepatocellular carcinoma after liver transplant: identifying the high-risk patient

    PubMed Central

    Nissen, Nicholas N; Menon, Vijay; Bresee, Catherine; Tran, Tram T; Annamalai, Alagappan; Poordad, Fred; Fair, Jeffrey H; Klein, Andrew S; Boland, Brendan; Colquhoun, Steven D

    2011-01-01

    Background Recurrence of hepatocellular carcinoma (HCC) after liver transplantation (LT) is rarely curable. However, in view of the advent of new treatments, it is critical that patients at high risk for recurrence are identified. Methods Patients undergoing LT for HCC at a single centre between 2002 and 2010 were reviewed and data on clinical parameters and explant pathology were analysed to determine factors associated with HCC recurrence. All necrotic and viable tumour nodules were included in explant staging. All patients underwent LT according to the United Network for Organ Sharing (UNOS) Model for End-stage Liver Disease (MELD) tumour exception policies. Results Liver transplantation was performed in 122 patients with HCC during this period. Rates of recurrence-free survival in the entire cohort at 1 year and 3 years were 95% and 89%, respectively. Thirteen patients developed HCC recurrence at a median of 14 months post-LT. In univariate analysis the factors associated with HCC recurrence were bilobar tumours, vascular invasion, and stage exceeding either Milan or University of California San Francisco (UCSF) Criteria. Multivariate analysis showed pathology outside UCSF Criteria was the major predictor of recurrence; when pathology outside UCSF Criteria was found in combination with vascular invasion, the predicted 3-year recurrence-free survival was only 26%. Conclusions Explant pathology can be used to predict the risk for recurrent HCC after LT, which may allow for improved adjuvant and management strategies. PMID:21843263

  20. Detecting recurrent gene mutation in interaction network context using multi-scale graph diffusion.

    PubMed

    Babaei, Sepideh; Hulsman, Marc; Reinders, Marcel; de Ridder, Jeroen

    2013-01-23

    Delineating the molecular drivers of cancer, i.e. determining cancer genes and the pathways which they deregulate, is an important challenge in cancer research. In this study, we aim to identify pathways of frequently mutated genes by exploiting their network neighborhood encoded in the protein-protein interaction network. To this end, we introduce a multi-scale diffusion kernel and apply it to a large collection of murine retroviral insertional mutagenesis data. The diffusion strength plays the role of scale parameter, determining the size of the network neighborhood that is taken into account. As a result, in addition to detecting genes with frequent mutations in their genomic vicinity, we find genes that harbor frequent mutations in their interaction network context. We identify densely connected components of known and putatively novel cancer genes and demonstrate that they are strongly enriched for cancer related pathways across the diffusion scales. Moreover, the mutations in the clusters exhibit a significant pattern of mutual exclusion, supporting the conjecture that such genes are functionally linked. Using multi-scale diffusion kernel, various infrequently mutated genes are found to harbor significant numbers of mutations in their interaction network neighborhood. Many of them are well-known cancer genes. The results demonstrate the importance of defining recurrent mutations while taking into account the interaction network context. Importantly, the putative cancer genes and networks detected in this study are found to be significant at different diffusion scales, confirming the necessity of a multi-scale analysis.
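    A diffusion kernel of the kind described above can be written K_beta = exp(-beta * L) for a graph Laplacian L, with the diffusion strength beta acting as the scale parameter. A minimal sketch on a toy path graph (the network and scales here are illustrative, not the study's murine data):

```python
import numpy as np

def diffusion_kernel(A, beta):
    """Graph diffusion kernel K = exp(-beta * L), computed via
    eigendecomposition of the symmetric graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(-beta * w)) @ V.T

# Path graph 0-1-2-3-4 standing in for a small interaction network.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Mutation "heat" placed on node 0 spreads farther as beta grows.
h0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
spread = {beta: diffusion_kernel(A, beta) @ h0 for beta in (0.1, 1.0, 10.0)}
for beta, h in spread.items():
    print(beta, np.round(h, 3))
```

At small beta the signal stays on the mutated gene's immediate neighborhood; at large beta it approaches the uniform distribution over the component, which is why scanning multiple scales lets infrequently mutated genes accumulate significance from their network context.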

  1. Conditional bistability, a generic cellular mnemonic mechanism for robust and flexible working memory computations.

    PubMed

    Rodriguez, Guillaume; Sarazin, Matthieu; Clemente, Alexandra; Holden, Stephanie; Paz, Jeanne T; Delord, Bruno

    2018-04-30

    Persistent neural activity, the substrate of working memory, is thought to emerge from synaptic reverberation within recurrent networks. However, reverberation models do not robustly explain fundamental dynamics of persistent activity, including high spiking irregularity, large intertrial variability, and state transitions. While cellular bistability may contribute to persistent activity, its rigidity appears incompatible with the labile characteristics of persistent activity. Here, we unravel in a cellular model a form of spike-mediated conditional bistability that is robust and generic and provides a rich repertoire of mnemonic computations. Under the asynchronous synaptic inputs of the awake state, conditional bistability generates spiking/bursting episodes, accounting for the irregularity, variability and state transitions characterizing persistent activity. This mechanism has likely been overlooked because of the sub-threshold input it requires, and we predict how to assess it experimentally. Our results suggest a reexamination of the role of intrinsic properties in the collective network dynamics responsible for flexible working memory. SIGNIFICANCE STATEMENT This study unravels a novel form of intrinsic neuronal property, i.e. conditional bistability. We show that, thanks to its conditional character, conditional bistability favors the emergence of flexible and robust forms of persistent activity in PFC neural networks, in contrast to previously studied classical forms of absolute bistability. Specifically, we demonstrate for the first time that conditional bistability 1) is a generic biophysical spike-dependent mechanism of layer V pyramidal neurons in the PFC and that 2) it accounts for essential neurodynamical features for the organisation and flexibility of PFC persistent activity (the large irregularity and intertrial variability of the discharge and its organization under discrete stable states), which remain unexplained in a robust fashion by current models. Copyright © 2018 the authors.

  2. Balanced cortical microcircuitry for spatial working memory based on corrective feedback control.

    PubMed

    Lim, Sukbin; Goldman, Mark S

    2014-05-14

    A hallmark of working memory is the ability to maintain graded representations of both the spatial location and amplitude of a memorized stimulus. Previous work has identified a neural correlate of spatial working memory in the persistent maintenance of spatially specific patterns of neural activity. How such activity is maintained by neocortical circuits remains unknown. Traditional models of working memory maintain analog representations of either the spatial location or the amplitude of a stimulus, but not both. Furthermore, although most previous models require local excitation and lateral inhibition to maintain spatially localized persistent activity stably, the substrate for lateral inhibitory feedback pathways is unclear. Here, we suggest an alternative model for spatial working memory that is capable of maintaining analog representations of both the spatial location and amplitude of a stimulus, and that does not rely on long-range feedback inhibition. The model consists of a functionally columnar network of recurrently connected excitatory and inhibitory neural populations. When excitation and inhibition are balanced in strength but offset in time, drifts in activity trigger spatially specific negative feedback that corrects memory decay. The resulting networks can temporally integrate inputs at any spatial location, are robust against many commonly considered perturbations in network parameters, and, when implemented in a spiking model, generate irregular neural firing characteristic of that observed experimentally during persistent activity. This work suggests balanced excitatory-inhibitory memory circuits implementing corrective negative feedback as a substrate for spatial working memory. Copyright © 2014 the authors.

  3. Modulation of short-term plasticity in the corticothalamic circuit by group III metabotropic glutamate receptors.

    PubMed

    Kyuyoung, Christine L; Huguenard, John R

    2014-01-08

    Recurrent connections in the corticothalamic circuit underlie oscillatory behavior in this network, which ranges from normal sleep rhythms to the abnormal spike-wave discharges seen in absence epilepsy. The propensity of thalamic neurons to fire postinhibitory rebound bursts mediated by low-threshold calcium spikes renders the circuit vulnerable to both increased excitation and increased inhibition, such as excessive excitatory cortical drive to thalamic reticular (RT) neurons or heightened inhibition of thalamocortical relay (TC) neurons by RT. In this context, a protective role may be played by group III metabotropic glutamate receptors (mGluRs), which are uniquely located in the presynaptic active zone and typically act as autoreceptors or heteroceptors to depress synaptic release. Here, we report that these receptors regulate short-term plasticity at two loci in the corticothalamic circuit in rats: glutamatergic cortical synapses onto RT neurons and GABAergic synapses onto TC neurons in somatosensory ventrobasal thalamus. The net effect of group III mGluR activation at these synapses is to suppress thalamic oscillations as assayed in vitro. These findings suggest a functional role of these receptors to modulate corticothalamic transmission and protect against prolonged activity in the network.

  4. Maximization of Learning Speed in the Motor Cortex Due to Neuronal Redundancy

    PubMed Central

    Takiyama, Ken; Okada, Masato

    2012-01-01

    Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is the number of neurons themselves; there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding constant the number of output units. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of preferred directions is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed, even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed. PMID:22253586

  5. Spatiotemporal Computations of an Excitable and Plastic Brain: Neuronal Plasticity Leads to Noise-Robust and Noise-Constructive Computations

    PubMed Central

    Toutounji, Hazem; Pipa, Gordon

    2014-01-01

    It is a long-established fact that neuronal plasticity plays a central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli. However, these stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widespread forms of neuronal plasticity, that is, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations, such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, grounded in nature, further consolidate the biological relevance of our findings. PMID:24651447

  6. Recurrent Artificial Neural Networks and Finite State Natural Language Processing.

    ERIC Educational Resources Information Center

    Moisl, Hermann

    It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

  7. Birth of an Abstraction: A Dynamical Systems Account of the Discovery of an Elsewhere Principle in a Category Learning Task

    ERIC Educational Resources Information Center

    Tabor, Whitney; Cho, Pyeong W.; Dankowicz, Harry

    2013-01-01

    Human participants and recurrent ("connectionist") neural networks were both trained on a categorization system abstractly similar to natural language systems involving irregular ("strong") classes and a default class. Both the humans and the networks exhibited staged learning and a generalization pattern reminiscent of the…

  8. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network

    PubMed Central

    Del Papa, Bruno; Priesemann, Viola

    2017-01-01

    Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences. PMID:28552964
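    Avalanche-based criticality signatures like those described above are typically extracted from binned population activity: an avalanche is a maximal run of nonzero bins bounded by silent bins, and its size is the total event count. The sketch below shows that generic extraction step plus a Clauset-style maximum-likelihood exponent estimate; the helper names are invented here, and this is a minimal illustration rather than the SORN study's exact analysis pipeline.

    ```python
    import math

    def avalanche_sizes(activity):
        """Extract avalanche sizes from binned population spike counts.

        An avalanche is a maximal run of bins with nonzero activity,
        bounded by silent bins; its size is the total number of events.
        """
        sizes, current = [], 0
        for count in activity:
            if count > 0:
                current += count
            elif current > 0:
                sizes.append(current)
                current = 0
        if current > 0:
            sizes.append(current)
        return sizes

    def powerlaw_exponent(sizes, s_min=1):
        """Maximum-likelihood power-law exponent (discrete approximation,
        alpha = 1 + n / sum(ln(s / (s_min - 1/2)))), over sizes >= s_min."""
        tail = [s for s in sizes if s >= s_min]
        return 1.0 + len(tail) / sum(math.log(s / (s_min - 0.5)) for s in tail)
    ```

    A slope change of the avalanche-size distribution under external input, as reported in the abstract, would show up as a shift in this fitted exponent between spontaneous and driven conditions.
    
    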

  9. Gust prediction via artificial hair sensor array and neural network

    NASA Astrophysics Data System (ADS)

    Pankonien, Alexander M.; Thapa Magar, Kaman S.; Beblo, Richard V.; Reich, Gregory W.

    2017-04-01

    Gust Load Alleviation (GLA) is an important aspect of flight dynamics and control that reduces structural loadings and enhances ride quality. In conventional GLA systems, the structural response to aerodynamic excitation informs the control scheme. A phase lag, imposed by inertia, between the excitation and the measurement inherently limits the effectiveness of these systems. Hence, direct measurement of the aerodynamic loading can eliminate this lag, providing valuable information for effective GLA system design. Distributed arrays of Artificial Hair Sensors (AHS) are ideal for surface flow measurements that can be used to predict other necessary parameters such as aerodynamic forces, moments, and turbulence. In previous work, the spatially distributed surface flow velocities obtained from an array of artificial hair sensors using a Single-State (or feedforward) Neural Network were found to be effective in estimating the steady aerodynamic parameters such as air speed, angle of attack, lift and moment coefficient. This paper extends the investigation of the same configuration to unsteady force and moment estimation, which is important for active GLA control design. Implementing a Recurrent Neural Network that includes previous-timestep sensor information, the hair sensor array is shown to be capable of capturing gust disturbances with a wide range of periods, reducing predictive error in lift and moment by 68% and 52% respectively. The L2 norms of the first layer of the weight matrices were compared showing a 23% emphasis on prior versus current information. The Recurrent architecture also improves robustness, exhibiting only a 30% increase in predictive error when undertrained as compared to a 170% increase by the Single-State NN. This diverse, localized information can thus be directly implemented into a control scheme that alleviates the gusts without waiting for a structural response or requiring user-intensive sensor calibration.

  10. Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach

    PubMed Central

    Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco

    2015-01-01

    Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. To detect recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies, a metamodel was constructed by joining the 130 models into a single network. This metamodel was manually curated using the unconnected modules approach and was then used as a reference network to perform gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information and to assess its impact on gap-filling results. The analysis performed on the metamodel led to the following conclusions: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) inconsistencies present in a metabolic database can propagate to the reconstructed models; 3) there are reactions that do not manifest as blocked but are active only as a consequence of certain classes of artifacts; and 4) the results of automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901

  11. Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.

    PubMed

    Pan, Yongping; Yu, Haoyong

    2017-06-01

    This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
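    The hybrid structure described above combines a PD feedback servo with an RBF-NN feedforward predictor evaluated on the recurrent reference trajectory. The sketch below is schematic only: the Gaussian basis form, the helper names, and the additive composition of the control law are standard RBF-network conventions assumed here, not the brief's exact formulation or adaptation law.

    ```python
    import math

    def rbf_output(x, centers, widths, weights):
        """RBF network output: y = sum_i w_i * exp(-||x - c_i||^2 / s_i^2)."""
        y = 0.0
        for c, s, w in zip(centers, widths, weights):
            d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))
            y += w * math.exp(-d2 / s ** 2)
        return y

    def hybrid_control(e, e_dot, kp, kd, x_ref, centers, widths, weights):
        """PD feedback servo plus RBF-NN feedforward prediction.

        The NN input x_ref is the reference signal, not the plant state,
        so the NN approximation domain is known a priori from the
        given reference trajectory.
        """
        return kp * e + kd * e_dot + rbf_output(x_ref, centers, widths, weights)
    ```

    Placing the RBF centers along the known reference trajectory is what makes the network construction easy in this scheme: the region the NN must cover is fixed in advance by the reference signals.
    
    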

  12. Stochastic inference with spiking neurons in the high-conductance state

    NASA Astrophysics Data System (ADS)

    Petrovici, Mihai A.; Bill, Johannes; Bytschok, Ilja; Schemmel, Johannes; Meier, Karlheinz

    2016-10-01

    The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.

  13. A symbolic/subsymbolic interface protocol for cognitive modeling

    PubMed Central

    Simen, Patrick; Polk, Thad

    2009-01-01

    Researchers studying complex cognition have grown increasingly interested in mapping symbolic cognitive architectures onto subsymbolic brain models. Such a mapping seems essential for understanding cognition under all but the most extreme viewpoints (namely, that cognition consists exclusively of digitally implemented rules; or instead, involves no rules whatsoever). Making this mapping reduces to specifying an interface between symbolic and subsymbolic descriptions of brain activity. To that end, we propose parameterization techniques for building cognitive models as programmable, structured, recurrent neural networks. Feedback strength in these models determines whether their components implement classically subsymbolic neural network functions (e.g., pattern recognition), or instead, logical rules and digital memory. These techniques support the implementation of limited production systems. Though inherently sequential and symbolic, these neural production systems can exploit principles of parallel, analog processing from decision-making models in psychology and neuroscience to explain the effects of brain damage on problem solving behavior. PMID:20711520

  14. Cross-Participant EEG-Based Assessment of Cognitive Workload Using Multi-Path Convolutional Recurrent Neural Networks.

    PubMed

    Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin

    2018-04-26

    Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance.

  15. An automatic microseismic or acoustic emission arrival identification scheme with deep recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi

    2018-02-01

    Conventional arrival pick-up algorithms require manual parameter adjustment to simultaneously identify multiple events under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, this study proposes an algorithm that picks the arrivals of microseismic or acoustic emission events using deep recurrent neural networks. The arrival identification comprises two steps: a training phase and a testing phase. The training process was mathematically modelled by deep recurrent neural networks using a Long Short-Term Memory architecture. During the testing phase, the learned weights were utilized to identify the arrivals in the microseismic/acoustic emission data sets. The data sets were obtained from rock physics experiments of acoustic emission. In order to obtain data sets under different SNRs, random noise was added to the raw experimental data sets. The results showed that the proposed method attained a hit-rate above 80 per cent at an SNR of 0 dB, and approximately 70 per cent at an SNR of -5 dB, within an absolute error of 10 sampling points. These results indicate that the proposed method has high picking precision and robustness.
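    The LSTM-based picking idea can be sketched as sequence labeling: the network reads the waveform sample by sample and emits a per-sample probability of belonging to the post-arrival signal, so the arrival is wherever that probability rises. The minimal forward pass below uses standard LSTM gate equations with tiny random weights; the class and function names are invented for illustration, and a real picker would be trained (the weights here are untrained).

    ```python
    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    class LSTMCell:
        """Minimal LSTM cell: one weight matrix and bias vector per gate."""
        def __init__(self, n_in, n_hid, seed=0):
            rng = random.Random(seed)
            def mat(r, c):
                return [[rng.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]
            self.W = {g: mat(n_hid, n_in + n_hid) for g in "ifoc"}
            self.b = {g: [0.0] * n_hid for g in "ifoc"}
            self.n_hid = n_hid

        def step(self, x, h, c):
            z = x + h  # concatenate input and previous hidden state
            def gate(name, act):
                return [act(sum(w * v for w, v in zip(row, z)) + bj)
                        for row, bj in zip(self.W[name], self.b[name])]
            i = gate("i", sigmoid)    # input gate
            f = gate("f", sigmoid)    # forget gate
            o = gate("o", sigmoid)    # output gate
            g = gate("c", math.tanh)  # candidate cell update
            c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
            h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
            return h, c

    def pick_probabilities(trace, cell, w_out):
        """Run the LSTM over a 1-D waveform; emit a per-sample probability
        that the sample lies after the event arrival."""
        h = [0.0] * cell.n_hid
        c = [0.0] * cell.n_hid
        probs = []
        for sample in trace:
            h, c = cell.step([sample], h, c)
            probs.append(sigmoid(sum(w * hj for w, hj in zip(w_out, h))))
        return probs
    ```

    Because the cell state carries information from all earlier samples, the same trained network can adapt its effective threshold to traces with different noise levels, which is what removes the manual parameter tuning of conventional pickers.
    
    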

  16. An adaptive PID like controller using mix locally recurrent neural network for robotic manipulator with variable payload.

    PubMed

    Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P

    2016-05-01

    Being a complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using a classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, the gains of the PID controller should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes which act as proportional, integral and derivative nodes. The gains of the MLRNN-based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being assigned randomly. A sequential-learning-based least-squares algorithm is then investigated for the on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of a two-link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using Lyapunov stability criteria. A performance comparison is carried out among the MLRNNPID controller, a CSA-optimized NNPID (OPTNNPID) controller and a CSA-optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Cross-Participant EEG-Based Assessment of Cognitive Workload Using Multi-Path Convolutional Recurrent Neural Networks

    PubMed Central

    Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin

    2018-01-01

    Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance. PMID:29701668

  18. Distributed Bandpass Filtering and Signal Demodulation in Cortical Network Models

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.

    Experimental recordings of cortical activity often exhibit narrowband oscillations, at various center frequencies ranging in the order of 1-200 Hz. Many neuronal mechanisms are known to give rise to oscillations, but here we focus on a population effect known as sparsely synchronised oscillations. In this effect, individual neurons in a cortical network fire irregularly at slow average spike rates (1-10 Hz), but the population spike rate oscillates at gamma frequencies (greater than 40 Hz) in response to spike bombardment from the thalamus. These cortical networks form recurrent (feedback) synapses. Here we describe a model of sparsely synchronized population oscillations using the language of feedback control engineering, where we treat spiking as noisy feedback. We show, using a biologically realistic model of synaptic current that includes a delayed response to inputs, that the collective behavior of the neurons in the network is like a distributed bandpass filter acting on the network inputs. Consequently, the population response has the character of narrowband random noise, and therefore has an envelope and instantaneous frequency with lowpass characteristics. Given that there exist biologically plausible neuronal mechanisms for demodulating the envelope and instantaneous frequency, we suggest there is potential for similar effects to be exploited in nanoscale electronics implementations of engineered communications receivers.

  19. Nonlinear Recurrent Neural Network Predictive Control for Energy Distribution of a Fuel Cell Powered Robot

    PubMed Central

    Chen, Qihong; Long, Rong; Quan, Shuhai

    2014-01-01

    This paper presents a neural network predictive control strategy to optimize power distribution for the fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system by employing a time-variant auto-regressive moving average model with exogenous input (ARMAX), using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed in this framework as operating-state-dependent, time-varying local linear behavior, a linear constrained model predictive control algorithm is developed to optimize the power splitting between the fuel cell and ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and ultracapacitor and limit the rate of change of the fuel cell current, thus extending the lifetime of the fuel cell. PMID:24707206

  20. Critical regimes driven by recurrent mobility patterns of reaction-diffusion processes in networks

    NASA Astrophysics Data System (ADS)

    Gómez-Gardeñes, J.; Soriano-Paños, D.; Arenas, A.

    2018-04-01

    Reaction-diffusion processes have been widely used to study dynamical processes in epidemics and ecology in networked metapopulations. In the context of epidemics, reaction processes are understood as contagions within each subpopulation (patch), while diffusion represents the mobility of individuals between patches. Recently, the characteristics of human mobility, such as its recurrent nature, have been proven crucial to understand the phase transition to endemic epidemic states. Here, by developing a framework able to cope with the elementary epidemic processes, the spatial distribution of populations and the commuting mobility patterns, we discover three different critical regimes of the epidemic incidence as a function of these parameters. Interestingly, we reveal a regime of the reaction-diffusion process in which, counter-intuitively, mobility is detrimental to the spread of disease. We analytically determine the precise conditions for the emergence of any of the three possible critical regimes in real and synthetic networks.

  1. Algorithm for Training a Recurrent Multilayer Perceptron

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.

    2004-01-01

    An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
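    The RMLP structure and its closed-loop recursion can be sketched as follows: hidden units receive the current input plus all hidden activations from the previous time step (self-feedback and cross-talk, delayed by one step), and multi-step prediction feeds each output back as the next input, which is why the gradient of a prediction depends recursively on the preceding predictions. This is a forward-pass illustration only, with invented names; the NASA report's training algorithm (the recursive gradient computation itself) is not reproduced here.

    ```python
    import math

    def rmlp_step(x, h, W_in, W_rec, W_out, b):
        """One step of a recurrent multilayer perceptron: each hidden unit
        sees the current input plus all hidden activations from the
        previous step, delayed by one time step."""
        h_new = [math.tanh(sum(wi * xi for wi, xi in zip(W_in[j], x))
                           + sum(wr * hp for wr, hp in zip(W_rec[j], h))
                           + b[j])
                 for j in range(len(b))]
        y = sum(wo * hj for wo, hj in zip(W_out, h_new))
        return y, h_new

    def predict_ahead(y0, steps, W_in, W_rec, W_out, b):
        """Multi-step prediction: each prediction is fed back as the next
        input, so later predictions depend recursively on earlier ones."""
        h = [0.0] * len(b)
        y, ys = y0, []
        for _ in range(steps):
            y, h = rmlp_step([y], h, W_in, W_rec, W_out, b)
            ys.append(y)
        return ys
    ```

    The training rule described in the abstract differentiates exactly this feedback path: the gradient of the prediction at step t includes the gradient of the prediction at step t-1 through both the fed-back input and the recurrent hidden state.
    
    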

  2. Sensorless control for permanent magnet synchronous motor using a neural network based adaptive estimator

    NASA Astrophysics Data System (ADS)

    Kwon, Chung-Jin; Kim, Sung-Joong; Han, Woo-Young; Min, Won-Kyoung

    2005-12-01

    The rotor position and speed estimation of a permanent-magnet synchronous motor (PMSM) was dealt with. By measuring the phase voltages and currents of the PMSM drive, two diagonally recurrent neural network (DRNN) based observers, a neural current observer and a neural velocity observer, were developed. The DRNN, which has self-feedback in its hidden neurons, ensures that its outputs contain the whole past information of the system even if its inputs are only the present states and inputs of the system. Thus the structure of the DRNN may be simpler than that of feedforward and fully recurrent neural networks. If the backpropagation method is used for training the DRNN, the problem of slow convergence arises. In order to reduce this problem, a recursive prediction error (RPE) based learning method for the DRNN is presented. The simulation results show that the proposed approach gives a good estimation of rotor speed and position, and that RPE-based training requires a shorter computation time than backpropagation-based training.

  3. Functional connectivity analysis in resting state fMRI with echo-state networks and non-metric clustering for network structure recovery

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; DSouza, Adora M.; Abidin, Anas Z.; Wang, Xixi; Hobbs, Susan K.; Nagarajan, Mahesh B.

    2015-03-01

    Echo state networks (ESN) are recurrent neural networks where the hidden layer is replaced with a fixed reservoir of neurons. Unlike feed-forward networks, neuron training in an ESN is restricted to the output neurons alone, thereby providing a computational advantage. We demonstrate the use of such ESNs in our mutual connectivity analysis (MCA) framework for recovering the primary motor cortex network associated with hand movement from resting state functional MRI (fMRI) data. The framework consists of two steps: (1) defining a pair-wise affinity matrix between different pixel time series within the brain to characterize network activity, and (2) recovering network components from the affinity matrix with non-metric clustering. Here, ESNs are used to evaluate pair-wise cross-estimation performance between pixel time series to create the affinity matrix, which is subsequently subjected to non-metric clustering with the Louvain method. For comparison, the ground truth of the motor cortex network structure is established with a task-based fMRI sequence. Overlap between the primary motor cortex network recovered with our model-free MCA approach and the ground truth was measured with the Dice coefficient. Our results show that network recovery with our proposed MCA approach is in close agreement with the ground truth. Such network recovery is achieved without requiring low-pass filtering of the time series ensembles prior to analysis, an fMRI preprocessing step that has courted controversy in recent years. Thus, we conclude that our MCA framework can enable recovery and visualization of the underlying functionally connected networks in the brain on resting state fMRI.
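
    Step (1) can be sketched with a toy reservoir and a ridge-regression readout. The reservoir size, spectral radius, and the particular affinity definition below are illustrative assumptions, not the authors' exact implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 50                                      # illustrative reservoir size
    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)
    w_in = rng.uniform(-1, 1, N)

    def reservoir_states(u):
        """Drive the fixed, untrained reservoir with input series u."""
        x, X = np.zeros(N), []
        for u_t in u:
            x = np.tanh(W @ x + w_in * u_t)
            X.append(x.copy())
        return np.array(X)

    def cross_estimate(u, v, ridge=1e-6):
        """Train only the readout (ridge regression) to predict v from u's
        reservoir states; return a prediction-quality affinity in (0, 1]."""
        X = reservoir_states(u)
        w = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ v)
        err = np.mean((X @ w - v) ** 2)
        return 1.0 / (1.0 + err)

    t = np.linspace(0, 8 * np.pi, 400)
    a, b = np.sin(t), np.sin(t + 0.5)   # a dynamically related pair
    c = rng.normal(size=t.size)         # an unrelated noise series
    print(cross_estimate(a, b), cross_estimate(a, c))
    ```

    Evaluating `cross_estimate` over all pixel pairs would fill the affinity matrix that is then handed to the non-metric clustering step.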

  4. Network-State Modulation of Power-Law Frequency-Scaling in Visual Cortical Neurons

    PubMed Central

    Béhuret, Sébastien; Baudot, Pierre; Yger, Pierre; Bal, Thierry; Destexhe, Alain; Frégnac, Yves

    2009-01-01

    Various types of neural-based signals, such as EEG, local field potentials and intracellular synaptic potentials, integrate multiple sources of activity distributed across large assemblies. They have in common a power-law frequency-scaling structure at high frequencies, but it is still unclear whether this scaling property is dominated by intrinsic neuronal properties or by network activity. The latter case is particularly interesting because if frequency-scaling reflects the network state it could be used to characterize the functional impact of the connectivity. In intracellularly recorded neurons of cat primary visual cortex in vivo, the power spectral density of Vm activity displays a power-law structure at high frequencies with a fractional scaling exponent. We show that this exponent is not constant, but depends on the visual statistics used to drive the network. To investigate the determinants of this frequency-scaling, we considered a generic recurrent model of cortex receiving a retinotopically organized external input. Similarly to the in vivo case, our in computo simulations show that the scaling exponent reflects the correlation level imposed in the input. This systematic dependence was also replicated at the single cell level, by controlling independently, in a parametric way, the strength and the temporal decay of the pairwise correlation between presynaptic inputs. This last model was implemented in vitro by imposing the correlation control in artificial presynaptic spike trains through dynamic-clamp techniques. These in vitro manipulations induced a modulation of the scaling exponent, similar to that observed in vivo and predicted in computo. We conclude that the frequency-scaling exponent of the Vm reflects stimulus-driven correlations in the cortical network activity. Therefore, we propose that the scaling exponent could be used to read-out the “effective” connectivity responsible for the dynamical signature of the population signals measured at different integration levels, from Vm to LFP, EEG and fMRI. PMID:19779556

  5. Network-state modulation of power-law frequency-scaling in visual cortical neurons.

    PubMed

    El Boustani, Sami; Marre, Olivier; Béhuret, Sébastien; Baudot, Pierre; Yger, Pierre; Bal, Thierry; Destexhe, Alain; Frégnac, Yves

    2009-09-01

    Various types of neural-based signals, such as EEG, local field potentials and intracellular synaptic potentials, integrate multiple sources of activity distributed across large assemblies. They have in common a power-law frequency-scaling structure at high frequencies, but it is still unclear whether this scaling property is dominated by intrinsic neuronal properties or by network activity. The latter case is particularly interesting because if frequency-scaling reflects the network state it could be used to characterize the functional impact of the connectivity. In intracellularly recorded neurons of cat primary visual cortex in vivo, the power spectral density of V(m) activity displays a power-law structure at high frequencies with a fractional scaling exponent. We show that this exponent is not constant, but depends on the visual statistics used to drive the network. To investigate the determinants of this frequency-scaling, we considered a generic recurrent model of cortex receiving a retinotopically organized external input. Similarly to the in vivo case, our in computo simulations show that the scaling exponent reflects the correlation level imposed in the input. This systematic dependence was also replicated at the single cell level, by controlling independently, in a parametric way, the strength and the temporal decay of the pairwise correlation between presynaptic inputs. This last model was implemented in vitro by imposing the correlation control in artificial presynaptic spike trains through dynamic-clamp techniques. These in vitro manipulations induced a modulation of the scaling exponent, similar to that observed in vivo and predicted in computo. We conclude that the frequency-scaling exponent of the V(m) reflects stimulus-driven correlations in the cortical network activity. Therefore, we propose that the scaling exponent could be used to read-out the "effective" connectivity responsible for the dynamical signature of the population signals measured at different integration levels, from Vm to LFP, EEG and fMRI.
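
    The central quantity in the two records above — a power-law frequency-scaling exponent of a signal's power spectral density — can be estimated by a log-log linear fit over a frequency band. The estimator below is a minimal sketch (raw periodogram, illustrative band), not the authors' exact spectral method:

    ```python
    import numpy as np

    def scaling_exponent(v, fmin=0.01, fmax=0.1):
        """Fit P(f) ~ 1/f**alpha by linear regression in log-log coordinates
        over [fmin, fmax] (frequencies in cycles per sample)."""
        f = np.fft.rfftfreq(v.size)
        P = np.abs(np.fft.rfft(v)) ** 2
        m = (f >= fmin) & (f <= fmax)
        slope = np.polyfit(np.log(f[m]), np.log(P[m]), 1)[0]
        return -slope  # alpha

    # Sanity check on synthetic signals with known scaling:
    rng = np.random.default_rng(3)
    white = rng.normal(size=2**14)   # flat spectrum: alpha ~ 0
    walk = np.cumsum(white)          # Brownian scaling: alpha ~ 2
    print(scaling_exponent(white), scaling_exponent(walk))
    ```

    A fractional exponent between these two extremes is what the in vivo Vm recordings display, and it is that value which the records report as stimulus-dependent.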

  6. Fluctuations in Wikipedia access-rate and edit-event data

    NASA Astrophysics Data System (ADS)

    Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev

    2012-12-01

    Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.
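
    The detrended fluctuation analysis used above to obtain the fluctuation exponents α can be sketched in a few lines. This is a standard order-1 DFA (cumulative profile, window-wise linear detrending, log-log slope), with illustrative window sizes:

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
        """Order-1 DFA: fluctuation exponent alpha = slope of log F(s) vs log s."""
        y = np.cumsum(x - np.mean(x))           # integrated profile
        F = []
        for s in scales:
            n = y.size // s
            segs = y[:n * s].reshape(n, s)
            t = np.arange(s)
            res = []
            for seg in segs:
                coef = np.polyfit(t, seg, 1)    # linear detrending per window
                res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(res)))
        return np.polyfit(np.log(scales), np.log(F), 1)[0]

    rng = np.random.default_rng(4)
    print(dfa_exponent(rng.normal(size=2**14)))  # uncorrelated noise gives alpha ~ 0.5
    ```

    Against this α ≈ 0.5 baseline for uncorrelated data, the α ≈ 0.9 reported for article access-rate series indicates strong long-term correlations.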

  7. Palbociclib in Treating Patients With Relapsed or Refractory Rb Positive Advanced Solid Tumors, Non-Hodgkin Lymphoma, or Histiocytic Disorders With Activating Alterations in Cell Cycle Genes (A Pediatric MATCH Treatment Trial)

    ClinicalTrials.gov

    2018-06-13

    Advanced Malignant Solid Neoplasm; RB1 Positive; Recurrent Childhood Ependymoma; Recurrent Ewing Sarcoma; Recurrent Glioma; Recurrent Hepatoblastoma; Recurrent Kidney Wilms Tumor; Recurrent Langerhans Cell Histiocytosis; Recurrent Malignant Germ Cell Tumor; Recurrent Malignant Glioma; Recurrent Medulloblastoma; Recurrent Neuroblastoma; Recurrent Non-Hodgkin Lymphoma; Recurrent Osteosarcoma; Recurrent Peripheral Primitive Neuroectodermal Tumor; Recurrent Rhabdoid Tumor; Recurrent Rhabdomyosarcoma; Recurrent Soft Tissue Sarcoma; Refractory Ependymoma; Refractory Ewing Sarcoma; Refractory Glioma; Refractory Hepatoblastoma; Refractory Langerhans Cell Histiocytosis; Refractory Malignant Germ Cell Tumor; Refractory Malignant Glioma; Refractory Medulloblastoma; Refractory Neuroblastoma; Refractory Non-Hodgkin Lymphoma; Refractory Osteosarcoma; Refractory Peripheral Primitive Neuroectodermal Tumor; Refractory Rhabdoid Tumor; Refractory Rhabdomyosarcoma; Refractory Soft Tissue Sarcoma

  8. How Deep Neural Networks Can Improve Emotion Recognition on Video Data

    DTIC Science & Technology

    2016-09-25

    HOW DEEP NEURAL NETWORKS CAN IMPROVE EMOTION RECOGNITION ON VIDEO DATA Pooya Khorrami1, Tom Le Paine1, Kevin Brady2, Charlie Dagli2, Thomas S...this work, we present a system that performs emotion recognition on video data using both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We present our findings on videos from the Audio/Visual+Emotion Challenge (AV+EC2015). In our experiments, we analyze the effects

  9. Mining for recurrent long-range interactions in RNA structures reveals embedded hierarchies in network families.

    PubMed

    Reinharz, Vladimir; Soulé, Antoine; Westhof, Eric; Waldispühl, Jérôme; Denise, Alain

    2018-05-04

    The wealth of the combinatorics of nucleotide base pairs enables RNA molecules to assemble into sophisticated interaction networks, which are used to create complex 3D substructures. These interaction networks are essential to shape the 3D architecture of the molecule, and also to provide the key elements to carry molecular functions such as protein or ligand binding. They are made of organised sets of long-range tertiary interactions which connect distinct secondary structure elements in 3D structures. Here, we present a de novo data-driven approach to extract automatically from large data sets of full RNA 3D structures the recurrent interaction networks (RINs). Our methodology enables us for the first time to detect the interaction networks connecting distinct components of the RNA structure, highlighting their diversity and conservation through non-related functional RNAs. We use a graphical model to perform pairwise comparisons of all RNA structures available and to extract RINs and modules. Our analysis yields a complete catalog of RNA 3D structures available in the Protein Data Bank and reveals the intricate hierarchical organization of the RNA interaction networks and modules. We assembled our results in an online database (http://carnaval.lri.fr) which will be regularly updated. Within the site, a tool allows users with a novel RNA structure to detect automatically whether the novel structure contains previously observed RINs.

  10. Electrogram Morphology Recurrence Patterns during Atrial Fibrillation

    PubMed Central

    Ng, Jason; Gordon, David; Passman, Rod S.; Knight, Bradley P.; Arora, Rishi; Goldberger, Jeffrey J.

    2014-01-01

    Background: Traditional mapping of atrial fibrillation (AF) is limited by changing electrogram morphologies and variable cycle lengths. Objective: We tested the hypothesis that morphology recurrence plot analysis would identify sites of stable and repeatable electrogram morphology patterns. Methods: AF electrograms recorded from left atrial (LA) and right atrial (RA) sites in 19 patients (10 male, 59±10 years old) prior to AF ablation were analyzed. Morphology recurrence plots for each electrogram recording were created by cross-correlation of each automatically detected activation with every other activation in the recording. A recurrence percentage, the percentage of the most common morphology, and the mean cycle length of activations with the most common morphology (CLR) were computed. Results: The morphology recurrence plots commonly showed checkerboard patterns of alternating high and low cross-correlation values indicating periodic recurrences in morphologies. The mean recurrence percentage for all sites and all patients was 38±25%. The highest recurrence percentage per patient averaged 83±17%. The highest recurrence percentage was located in the RA in 5 patients and in the LA in 14 patients. Patients with sites of shortest CLR in the LA and RA had ablation failure rates of 25% and 100%, respectively (HR=4.95; p=0.05). Conclusions: A new technique to characterize electrogram morphology recurrence demonstrated that there is a distribution of sites with high and low repeatability of electrogram morphologies. Sites with rapid activation of highly repetitive morphology patterns may be critical to sustaining AF. Further testing of this approach to map and ablate AF sources is warranted. PMID:25101485
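
    The morphology recurrence plot and recurrence percentage can be sketched as follows. This simplification uses zero-lag normalized correlation between beat waveforms and a hypothetical similarity threshold; the study cross-correlates each detected activation against every other:

    ```python
    import numpy as np

    def morphology_recurrence(activations, thresh=0.8):
        """Pairwise correlation matrix of activation waveforms ("recurrence
        plot") and the share of beats matching the most common morphology."""
        A = np.array([(a - a.mean()) / a.std() for a in activations])
        R = (A @ A.T) / A.shape[1]            # normalized zero-lag correlations
        matches = (R > thresh).sum(axis=1)    # beats similar to each beat
        return R, 100.0 * matches.max() / len(activations)

    # Synthetic beats alternating between two morphologies:
    t = np.linspace(0, 1, 40)
    shape1, shape2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
    beats = [shape1, shape2, shape1, shape1, shape2]
    R, pct = morphology_recurrence(beats)
    print(pct)  # shape1 occurs in 3 of 5 beats -> 60.0
    ```

    The alternating-morphology sequence is what produces the checkerboard pattern in R that the study describes.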

  11. InFlo: a novel systems biology framework identifies cAMP-CREB1 axis as a key modulator of platinum resistance in ovarian cancer.

    PubMed

    Dimitrova, N; Nagaraj, A B; Razi, A; Singh, S; Kamalakaran, S; Banerjee, N; Joseph, P; Mankovich, A; Mittal, P; DiFeo, A; Varadan, V

    2017-04-27

    Characterizing the complex interplay of cellular processes in cancer would enable the discovery of key mechanisms underlying its development and progression. Published approaches to decipher driver mechanisms do not explicitly model tissue-specific changes in pathway networks and the regulatory disruptions related to genomic aberrations in cancers. We therefore developed InFlo, a novel systems biology approach for characterizing complex biological processes using a unique multidimensional framework integrating transcriptomic, genomic and/or epigenomic profiles for any given cancer sample. We show that InFlo robustly characterizes tissue-specific differences in activities of signalling networks on a genome scale using unique probabilistic models of molecular interactions on a per-sample basis. Using large-scale multi-omics cancer datasets, we show that InFlo exhibits higher sensitivity and specificity in detecting pathway networks associated with specific disease states when compared to published pathway network modelling approaches. Furthermore, InFlo's ability to infer the activity of unmeasured signalling network components was also validated using orthogonal gene expression signatures. We then evaluated multi-omics profiles of primary high-grade serous ovarian cancer tumours (N=357) to delineate mechanisms underlying resistance to frontline platinum-based chemotherapy. InFlo was the only algorithm to identify hyperactivation of the cAMP-CREB1 axis as a key mechanism associated with resistance to platinum-based therapy, a finding that we subsequently experimentally validated. We confirmed that inhibition of CREB1 phosphorylation potently sensitized resistant cells to platinum therapy and was effective in killing ovarian cancer stem cells that contribute to both platinum-resistance and tumour recurrence. Thus, we propose InFlo as a scalable, widely applicable and robust integrative network modelling framework for the discovery of evidence-based biomarkers and therapeutic targets.

  12. Application of Deep Learning of Multi-Temporal SENTINEL-1 Images for the Classification of Coastal Vegetation Zone of the Danube Delta

    NASA Astrophysics Data System (ADS)

    Niculescu, S.; Ienco, D.; Hanganu, J.

    2018-04-01

    Land cover is a fundamental variable for regional planning, as well as for the study and understanding of the environment. This work proposes a multi-temporal approach relying on a fusion of radar multi-sensor data and information collected by the latest sensor (Sentinel-1), with a view to obtaining better results than traditional image processing techniques. The Danube Delta is the study site for this work. The spatial approach relies on new spatial analysis technologies and methodologies: deep learning of multi-temporal Sentinel-1 data. We propose a deep learning network for image classification which exploits the multi-temporal characteristics of Sentinel-1 data. The model we employ is a Gated Recurrent Unit (GRU) network, a recurrent neural network that explicitly takes the time dimension into account via a gated mechanism to perform the final prediction. The main quality of the GRU network is its ability to retain only the important part of the information coming from the temporal data, discarding irrelevant information via a forgetting mechanism. We use this network structure to classify a series of 20 Sentinel-1 images acquired between 09.10.2014 and 01.04.2016. The results are compared with those of a Random Forest classification.
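
    The gated mechanism the record describes is the standard GRU cell. A minimal sketch of one step, with hypothetical random parameters (in practice these are learned) and an illustrative sequence length matching the 20 acquisition dates:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_in, n_hid = 4, 8                  # illustrative feature and state sizes
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    Wz, Uz = rng.normal(0, 0.3, (n_hid, n_in)), rng.normal(0, 0.3, (n_hid, n_hid))
    Wr, Ur = rng.normal(0, 0.3, (n_hid, n_in)), rng.normal(0, 0.3, (n_hid, n_hid))
    Wh, Uh = rng.normal(0, 0.3, (n_hid, n_in)), rng.normal(0, 0.3, (n_hid, n_hid))

    def gru_step(x, h):
        z = sigmoid(Wz @ x + Uz @ h)            # update gate
        r = sigmoid(Wr @ x + Ur @ h)            # reset ("forgetting") gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
        return (1 - z) * h + z * h_tilde        # gated blend of old and new state

    # Fold a multi-temporal pixel sequence (20 dates) into one state vector,
    # from which the final class prediction would be made.
    h = np.zeros(n_hid)
    for x in rng.normal(size=(20, n_in)):
        h = gru_step(x, h)
    print(h.shape)
    ```

    The reset gate r is what lets the network discard irrelevant parts of the temporal history, as the record notes.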

  13. Jordan recurrent neural network versus IHACRES in modelling daily streamflows

    NASA Astrophysics Data System (ADS)

    Carcano, Elena Carla; Bartolini, Paolo; Muselli, Marco; Piroddi, Luigi

    2008-12-01

    Summary: A study of possible scenarios for modelling streamflow data from daily time series, using artificial neural networks (ANNs), is presented. Particular emphasis is devoted to the reconstruction of drought periods, where water resource management and control are most critical. This paper considers two connectionist models: a feedforward multilayer perceptron (MLP) and a Jordan recurrent neural network (JNN), comparing network performance on real world data from two small catchments (192 and 69 km² in size) with irregular and torrential regimes. Several network configurations are tested to ensure a good combination of input features (rainfall and previous streamflow data) that capture the variability of the physical processes at work. Tapped delay line (TDL) and memory effect techniques are introduced to recognize and reproduce temporal dependence. Results show a poor agreement when using TDL only, but a remarkable improvement can be obtained with JNN and its memory effect procedures, which are able to reproduce the system memory over a catchment in a more effective way. Furthermore, the IHACRES conceptual model, which relies on both rainfall and temperature input data, is introduced for comparative study. The results suggest that when good input data are unavailable, metric models perform better than conceptual ones and, in general, it is difficult to justify substantial conceptualization of complex processes.

  14. A new switching control for finite-time synchronization of memristor-based recurrent neural networks.

    PubMed

    Gao, Jie; Zhu, Peiyong; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar

    2017-02-01

    In this paper, finite-time synchronization (FTS) of memristor-based recurrent neural networks (MNNs) with time-varying delays is investigated by designing a new switching controller. First, using differential inclusion theory and set-valued maps, sufficient conditions ensuring FTS of MNNs are obtained for the two cases 0&lt;α&lt;1 and α=0, and α=0 is shown to be the critical value of the range 0&lt;α&lt;1. Next, the relation between the parameter α and the synchronization time is discussed in depth. Then, a new controller with a switching parameter α is designed which can shorten the synchronization time. Finally, numerical simulation examples are provided to illustrate the effectiveness of the proposed results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Multi-institutional Outcomes of Endoscopic Management of Stricture Recurrence after Bulbar Urethroplasty.

    PubMed

    Sukumar, Shyam; Elliott, Sean P; Myers, Jeremy B; Voelzke, Bryan B; Smith, Thomas G; Carolan, Alexandra Mc; Maidaa, Michael; Vanni, Alex J; Breyer, Benjamin N; Erickson, Bradley A

    2018-05-03

    Approximately 10-20% of patients will have a recurrence after urethroplasty. Initial management of these recurrences is often with urethral dilation (UD) or direct vision internal urethrotomy (DVIU). In the current study, we describe outcomes of endoscopic management of stricture recurrence after bulbar urethroplasty. We retrospectively reviewed bulbar urethroplasty data from 5 surgeons from the Trauma and Urologic Reconstruction Network of Surgeons. Men who underwent UD or DVIU for urethroplasty recurrence were identified. Recurrence was defined as inability to pass a 17Fr cystoscope through the area of reconstruction. The primary outcome was the success rate of recurrence management. Comparisons were made between UD and DVIU, and then between endoscopic management of recurrences after excision and primary anastomosis urethroplasty (EPA) versus substitutional repairs, using time-to-event statistics. Fifty-three men with recurrence were initially managed endoscopically. Median time to urethral stricture recurrence after urethroplasty was 5 months. At a median follow-up of 5 months, overall success was 42%. Success after UD (n=1/10, 10%) was significantly lower than after DVIU (n=21/43, 49%; p &lt; 0.001), with a hazard ratio of failure of 3.15 (p=0.03). DVIU was more effective after substitutional failure than after EPA (53% vs. 13%, p=0.005). DVIU is more successful than UD in the management of stricture recurrence after bulbar urethroplasty. DVIU is more successful for patients with a recurrence after a substitution urethroplasty compared to after EPA, perhaps indicating a different mechanism of recurrence for EPA (ischemic) versus substitution urethroplasty (non-ischemic). Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  16. Extended observability of linear time-invariant systems under recurrent loss of output data

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok; Halevi, Yoram

    1989-01-01

    Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.

  17. Genomic analyses identify molecular subtypes of pancreatic cancer.

    PubMed

    Bailey, Peter; Chang, David K; Nones, Katia; Johns, Amber L; Patch, Ann-Marie; Gingras, Marie-Claude; Miller, David K; Christ, Angelika N; Bruxner, Tim J C; Quinn, Michael C; Nourse, Craig; Murtaugh, L Charles; Harliwong, Ivon; Idrisoglu, Senel; Manning, Suzanne; Nourbakhsh, Ehsan; Wani, Shivangi; Fink, Lynn; Holmes, Oliver; Chin, Venessa; Anderson, Matthew J; Kazakoff, Stephen; Leonard, Conrad; Newell, Felicity; Waddell, Nick; Wood, Scott; Xu, Qinying; Wilson, Peter J; Cloonan, Nicole; Kassahn, Karin S; Taylor, Darrin; Quek, Kelly; Robertson, Alan; Pantano, Lorena; Mincarelli, Laura; Sanchez, Luis N; Evers, Lisa; Wu, Jianmin; Pinese, Mark; Cowley, Mark J; Jones, Marc D; Colvin, Emily K; Nagrial, Adnan M; Humphrey, Emily S; Chantrill, Lorraine A; Mawson, Amanda; Humphris, Jeremy; Chou, Angela; Pajic, Marina; Scarlett, Christopher J; Pinho, Andreia V; Giry-Laterriere, Marc; Rooman, Ilse; Samra, Jaswinder S; Kench, James G; Lovell, Jessica A; Merrett, Neil D; Toon, Christopher W; Epari, Krishna; Nguyen, Nam Q; Barbour, Andrew; Zeps, Nikolajs; Moran-Jones, Kim; Jamieson, Nigel B; Graham, Janet S; Duthie, Fraser; Oien, Karin; Hair, Jane; Grützmann, Robert; Maitra, Anirban; Iacobuzio-Donahue, Christine A; Wolfgang, Christopher L; Morgan, Richard A; Lawlor, Rita T; Corbo, Vincenzo; Bassi, Claudio; Rusev, Borislav; Capelli, Paola; Salvia, Roberto; Tortora, Giampaolo; Mukhopadhyay, Debabrata; Petersen, Gloria M; Munzy, Donna M; Fisher, William E; Karim, Saadia A; Eshleman, James R; Hruban, Ralph H; Pilarsky, Christian; Morton, Jennifer P; Sansom, Owen J; Scarpa, Aldo; Musgrove, Elizabeth A; Bailey, Ulla-Maja Hagbo; Hofmann, Oliver; Sutherland, Robert L; Wheeler, David A; Gill, Anthony J; Gibbs, Richard A; Pearson, John V; Waddell, Nicola; Biankin, Andrew V; Grimmond, Sean M

    2016-03-03

    Integrated genomic analysis of 456 pancreatic ductal adenocarcinomas identified 32 recurrently mutated genes that aggregate into 10 pathways: KRAS, TGF-β, WNT, NOTCH, ROBO/SLIT signalling, G1/S transition, SWI-SNF, chromatin modification, DNA repair and RNA processing. Expression analysis defined 4 subtypes: (1) squamous; (2) pancreatic progenitor; (3) immunogenic; and (4) aberrantly differentiated endocrine exocrine (ADEX) that correlate with histopathological characteristics. Squamous tumours are enriched for TP53 and KDM6A mutations, upregulation of the TP63∆N transcriptional network, hypermethylation of pancreatic endodermal cell-fate determining genes and have a poor prognosis. Pancreatic progenitor tumours preferentially express genes involved in early pancreatic development (FOXA2/3, PDX1 and MNX1). ADEX tumours displayed upregulation of genes that regulate networks involved in KRAS activation, exocrine (NR5A2 and RBPJL), and endocrine differentiation (NEUROD1 and NKX2-2). Immunogenic tumours contained upregulated immune networks including pathways involved in acquired immune suppression. These data infer differences in the molecular evolution of pancreatic cancer subtypes and identify opportunities for therapeutic development.

  18. Quaternion-valued echo state networks.

    PubMed

    Xia, Yili; Jahanchahi, Cyrus; Mandic, Danilo P

    2015-04-01

    Quaternion-valued echo state networks (QESNs) are introduced to cater for 3-D and 4-D processes, such as those observed in the context of renewable energy (3-D wind modeling) and human centered computing (3-D inertial body sensors). The introduction of QESNs is made possible by the recent emergence of quaternion nonlinear activation functions with local analytic properties, required by nonlinear gradient descent training algorithms. To make QESNs second-order optimal for the generality of quaternion signals (both circular and noncircular), we employ augmented quaternion statistics to introduce widely linear QESNs. To that end, the standard widely linear model is modified so as to suit the properties of the dynamical reservoir, typically realized by recurrent neural networks. This allows for a full exploitation of second-order information in the data, contained both in the covariance and pseudocovariances, and a rigorous account of second-order noncircularity (improperness) and the corresponding power mismatch and coupling between the data components. Simulations in the prediction setting on both benchmark circular and noncircular signals and on noncircular real-world 3-D body motion data support the analysis.

  19. Recurrent Shoulder Instability in a Young, Active, Military Population and Its Professional Implications.

    PubMed

    Flint, James H; Pickett, Adam; Owens, Brett D; Svoboda, Steven J; Peck, Karen Y; Cameron, Kenneth L; Biery, John; Giuliani, Jeffrey; Rue, John-Paul

    Shoulder instability is a topic of significant interest within the sports medicine literature, particularly regarding recurrence rates and the ideal treatment indications and techniques. Little has been published specifically addressing the occupational implications of symptomatic recurrent shoulder instability. Previous arthroscopic repair will continue to be a significant predisposing factor for recurrent instability in a young, active population, and that recurrent instability may have a negative effect on college graduation and postgraduate occupational selection. Case series. Level 4. We conducted a retrospective review of approved medical waivers for surgical treatment of anterior shoulder dislocation or instability prior to matriculation at the US Military Academy or the US Naval Academy for the graduating classes of 2010 to 2013. Statistical analysis was performed to determine the incidence and risk factors for recurrence and to determine the impact on graduation rate and occupation selection. Fifty-nine patients were evaluated; 34% developed recurrent anterior instability. Patients with previous arthroscopic repair had a significantly higher incidence of recurrence (38%, P = 0.044). Recurrent shoulder instability did not significantly affect graduation rates or self-selected occupation (P ≥ 0.05). There is a significant rate of recurrent shoulder instability after primary surgical repair, particularly among young, active individuals. In addition, arthroscopic repair resulted in a significantly higher recurrence rate compared with open repair in our population. Surgical repair for shoulder instability should not necessarily preclude young individuals from pursuing (or being considered for) occupations that may place them at greater risk of recurrence. The risk of recurrent instability is greater than the rate typically described, which may suggest that some subpopulations are at greater risk than others. A unique data point regarding instability is the effect on occupation selection.

  20. Cellular and network properties of the subiculum in the pilocarpine model of temporal lobe epilepsy.

    PubMed

    Knopp, Andreas; Kivi, Anatol; Wozny, Christian; Heinemann, Uwe; Behr, Joachim

    2005-03-21

    The subiculum was recently shown to be crucially involved in the generation of interictal activity in human temporal lobe epilepsy. Using the pilocarpine model of epilepsy, this study examines the anatomical substrates for network hyperexcitability recorded in the subiculum. Regular- and burst-spiking subicular pyramidal cells were stained with fluorescence dyes and reconstructed to analyze seizure-induced alterations of the dendritic and axonal system. In control animals burst-spiking cells outnumbered regular-spiking cells by about two to one. Regular- and burst-spiking cells were characterized by extensive axonal branching and autapse-like contacts, suggesting a high intrinsic connectivity. In addition, subicular axons projecting to CA1 indicate a CA1-subiculum-CA1 circuit. In the subiculum of pilocarpine-treated rats we found an enhanced network excitability characterized by spontaneous rhythmic activity, polysynaptic responses, and all-or-none evoked bursts of action potentials. In pilocarpine-treated rats the subiculum showed cell loss of about 30%. The ratio of regular- and burst-spiking cells was practically inverse as compared to control preparations. A reduced arborization and spine density in the proximal part of the apical dendrites suggests a partial deafferentiation from CA1. In pilocarpine-treated rats no increased axonal outgrowth of pyramidal cells was observed. Hence, axonal sprouting of subicular pyramidal cells is not mandatory for the development of the pathological events. We suggest that pilocarpine-induced seizures cause an unmasking or strengthening of synaptic contacts within the recurrent subicular network. Copyright 2005 Wiley-Liss, Inc.

  1. Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets

    NASA Astrophysics Data System (ADS)

    Wielgosz, Maciej; Skoczeń, Andrzej; Mertik, Matej

    2017-09-01

    The superconducting LHC magnets are coupled with an electronic monitoring system that records and analyzes voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers that launch protection procedures when magnet misbehavior is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system and are updated and monitored by human operators. This paper proposes a novel approach to monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art deep learning algorithms. Accordingly, the authors examine the performance of LSTM recurrent neural networks for modeling the voltage time series of the magnets. To address this challenging task, different network architectures and hyper-parameters were tested to achieve the best possible performance. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account for the prediction. The best result of RMSE = 0.00104 was obtained for a network with 128 LSTM cells in its internal layer and a 16-step history buffer.
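    The record above scores regression quality as RMSE over varying prediction horizons and history-buffer lengths. As a rough illustration of that evaluation protocol only (not the paper's LSTM, whose architecture and data are not reproduced here), the sketch below scores a naive last-value baseline over a sliding history window on a synthetic series; all names and parameters are illustrative assumptions.

```python
import math

def rmse(pred, actual):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def last_value_forecast(series, history_len, steps_ahead):
    """Predict the value `steps_ahead` samples past each history window of
    `history_len` samples by repeating the window's last value (a naive
    baseline standing in for a trained LSTM)."""
    preds, actuals = [], []
    for t in range(history_len, len(series) - steps_ahead + 1):
        preds.append(series[t - 1])                    # last value in the buffer
        actuals.append(series[t + steps_ahead - 1])    # value to be predicted
    return preds, actuals

# toy stand-in for a voltage trace (illustrative, not LHC data)
series = [math.sin(0.1 * t) for t in range(200)]
preds, actuals = last_value_forecast(series, history_len=16, steps_ahead=1)
print(round(rmse(preds, actuals), 4))
```

Sweeping `steps_ahead` and `history_len` reproduces the shape of the paper's evaluation grid, with RMSE growing as the horizon lengthens.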

  2. Recurrence networks from multivariate signals for uncovering dynamic transitions of horizontal oil-water stratified flows

    NASA Astrophysics Data System (ADS)

    Gao, Zhong-Ke; Zhang, Xin-Wang; Jin, Ning-De; Donner, Reik V.; Marwan, Norbert; Kurths, Jürgen

    2013-09-01

    Characterizing the mechanism of drop formation at the interface of horizontal oil-water stratified flows is a fundamental problem that has attracted a great deal of attention across disciplines. We experimentally and theoretically investigate the formation and transition of horizontal oil-water stratified flows. We design a new multi-sector conductance sensor and measure multivariate signals from two different stratified flow patterns. Using the Adaptive Optimal Kernel Time-Frequency Representation (AOK TFR), we first characterize the flow behavior from an energy and frequency point of view. Then, we infer multivariate recurrence networks from the experimental data and investigate the cross-transitivity of each constructed network. We find that the cross-transitivity quantitatively uncovers the flow behavior as the stratified flow evolves from a stable state to an unstable one and yields deeper insight into the mechanism governing droplet formation at the interface of stratified flows, a task at which existing AOK TFR-based methods fail. These findings represent a first step towards an improved understanding of the dynamic mechanism underlying the transition of horizontal oil-water stratified flows from a complex-network perspective.

  3. Integrated built-in-test false and missed alarms reduction based on forward infinite impulse response & recurrent finite impulse response dynamic neural networks

    NASA Astrophysics Data System (ADS)

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2017-11-01

    Built-in tests (BITs) are widely used in mechanical systems to perform state identification, but BIT false and missed alarms make it difficult for operators to reach correct judgments. Artificial neural networks (ANNs), which have features such as self-organization and self-learning, have previously been used to identify false and missed alarms. However, these ANN models generally do not incorporate the temporal effect of the bottom-level threshold comparison outputs, and historical temporal features are not fully considered. To improve this situation, this paper proposes a new integrated BIT design methodology incorporating a novel type of dynamic neural network (DNN) model. The new DNN model is termed the Forward IIR & Recurrent FIR DNN (FIRF-DNN); its component neurons, network structures, and input/output relationships are discussed. The FIRF-DNN-based implementation scheme for reducing condition-monitoring false and missed alarms is also illustrated, comprising three stages: model training, false and missed alarm detection, and false and missed alarm suppression. Finally, the proposed methodology is demonstrated in an application study and the experimental results are analyzed.

  4. A recurrent neural network for classification of unevenly sampled variable stars

    NASA Astrophysics Data System (ADS)

    Naul, Brett; Bloom, Joshua S.; Pérez, Fernando; van der Walt, Stéfan

    2018-02-01

    Astronomical surveys of celestial sources produce streams of noisy time series measuring flux versus time ('light curves'). Unlike in many other physical domains, however, large (and source-specific) temporal gaps in data arise naturally due to intranight cadence choices as well as diurnal and seasonal constraints. With nightly observations of millions of variable stars and transients from upcoming surveys, efficient and accurate discovery and classification techniques on noisy, irregularly sampled data must be employed with minimal human-in-the-loop involvement. Machine learning for inference tasks on such data traditionally requires the laborious hand-coding of domain-specific numerical summaries of raw data ('features'). Here, we present a novel unsupervised autoencoding recurrent neural network that makes explicit use of sampling times and known heteroskedastic noise properties. When trained on optical variable star catalogues, this network produces supervised classification models that rival other best-in-class approaches. We find that autoencoded features learned in one time-domain survey perform nearly as well when applied to another survey. These networks can continue to learn from new unlabelled observations and may be used in other unsupervised tasks, such as forecasting and anomaly detection.

  5. Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations

    NASA Astrophysics Data System (ADS)

    ElSaid, AbdElRahman Ahmed

    This thesis examines building viable Recurrent Neural Networks (RNNs) using Long Short Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction than analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time series data, which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, as the latter suffer from vanishing/exploding gradients when trained with backpropagation. The study managed to predict vibration values for 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51%, and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems, so that suitable actions can be taken before the occurrence of excessive vibration to avoid unfavorable situations during flight.

  6. Sedentary behavior is associated with colorectal adenoma recurrence in men

    PubMed Central

    Molmenti, Christine L. Sardo; Hibler, Elizabeth A.; Ashbeck, Erin L.; Thomson, Cynthia A.; Garcia, David O.; Roe, Denise; Harris, Robin B.; Lance, Peter; Cisneroz, Martin; Martinez, Maria Elena; Thompson, Patricia A.; Jacobs, Elizabeth T.

    2014-01-01

    Purpose: The association between physical activity and colorectal adenoma is equivocal. This study was designed to assess the relationship between physical activity and colorectal adenoma recurrence. Methods: Pooled analyses from two randomized, controlled trials included 1,730 participants who completed the Arizona Activity Frequency Questionnaire at baseline, had a colorectal adenoma removed within 6 months of study registration, and had a follow-up colonoscopy during the trial. Logistic regression modeling was employed to estimate the effect of sedentary behavior, light-intensity physical activity, and moderate-vigorous physical activity on colorectal adenoma recurrence. Results: No statistically significant trends were found for any activity type and odds of colorectal adenoma recurrence in the pooled population. However, males with the highest levels of sedentary time experienced 47% higher odds of adenoma recurrence. Compared to the lowest quartile of sedentary time, the ORs (95% CIs) for the second, third, and fourth quartiles among men were 1.23 (0.88, 1.74), 1.41 (0.99, 2.01), and 1.47 (1.03, 2.11), respectively (P trend = 0.03). No similar association was observed for women. Conclusions: This study suggests that sedentary behavior is associated with a higher risk of colorectal adenoma recurrence among men, providing evidence of detrimental effects of a sedentary lifestyle early in the carcinogenesis pathway. PMID:25060482

  7. Characterising infant inter-breath interval patterns during active and quiet sleep using recurrence plot analysis.

    PubMed

    Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M

    2009-01-01

    Breathing patterns are characteristically different between active and quiet sleep states in infants. It has previously been identified that breathing dynamics are governed by a nonlinear controller, which implies the need for a nonlinear analytical tool. Further, it has been shown that quantified nonlinear variables differ between adult sleep states. This study aims to determine whether a nonlinear analytical tool known as recurrence plot analysis can characterize the breath intervals of active and quiet sleep states in infants. Overnight polysomnograms were obtained from 32 healthy infants. The 6 longest periods each of active and quiet sleep were identified, and a software routine extracted inter-breath interval data for recurrence plot analysis. Determinism (DET), laminarity (LAM) and radius (RAD) values were calculated for embedding dimensions of 4, 6, 8 and 16, and fixed recurrence of 0.5, 1, 2, 3.5 and 5%. Recurrence plots exhibited characteristically different patterns for active and quiet sleep. Active sleep periods typically had higher values of RAD, DET and LAM than quiet sleep, and this trend was invariant to the specific choice of embedding dimension or fixed recurrence. These differences may provide a basis for automated sleep state classification and for the quantitative investigation of pathological breathing patterns.

  8. Recurrent patterns of atrial depolarization during atrial fibrillation assessed by recurrence plot quantification.

    PubMed

    Censi, F; Barbaro, V; Bartolini, P; Calcagnini, G; Michelucci, A; Gensini, G F; Cerutti, S

    2000-01-01

    The aim of this study was to determine the presence of organization of atrial activation processes during atrial fibrillation (AF) by assessing whether the activation sequences are wholly random or are governed by deterministic mechanisms. We performed both linear and nonlinear analyses based on the cross correlation function (CCF) and recurrence plot quantification (RPQ), respectively. Recurrence plots were quantified by three variables: percent recurrence (PR), percent determinism (PD), and entropy of recurrences (ER). We recorded bipolar intra-atrial electrograms in two atrial sites during chronic AF in 19 informed subjects, following two protocols. In one, both recording sites were in the right atrium; in the other protocol, one site was in the right atrium, the other one in the left atrium. We extracted 19 episodes of type I AF (Wells' classification). RPQ detected transient recurrent patterns in all the episodes, while CCF was significant only in ten episodes. Surrogate data analysis, based on a cross-phase randomization procedure, decreased PR, PD, and ER values. The detection of spatiotemporal recurrent patterns together with the surrogate data results indicate that during AF a certain degree of local organization exists, likely caused by deterministic mechanisms of activation.
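    Two of the RPQ variables used in the record above, percent recurrence (PR) and percent determinism (PD), can be computed directly from a thresholded recurrence matrix. A minimal pure-Python sketch follows; the radius, minimum line length, and toy series are illustrative assumptions, not the study's settings.

```python
def recurrence_matrix(x, radius):
    """R[i][j] = 1 when |x[i] - x[j]| <= radius, excluding the line of identity."""
    n = len(x)
    return [[1 if i != j and abs(x[i] - x[j]) <= radius else 0
             for j in range(n)] for i in range(n)]

def percent_recurrence(R):
    """Percentage of point pairs that are recurrent."""
    n = len(R)
    return 100.0 * sum(sum(row) for row in R) / (n * (n - 1))

def percent_determinism(R, lmin=2):
    """Share of recurrent points lying on diagonal lines of length >= lmin."""
    n = len(R)
    total = sum(sum(row) for row in R)
    if total == 0:
        return 0.0
    in_lines = 0
    for d in range(-(n - 1), n):          # scan every diagonal
        if d == 0:
            continue                       # skip the line of identity
        run = 0
        for i in range(n):
            j = i + d
            if 0 <= j < n and R[i][j]:
                run += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
        if run >= lmin:
            in_lines += run
    return 100.0 * in_lines / total

x = [0, 1, 0, 1, 0, 1, 0, 1]              # perfectly periodic toy signal
R = recurrence_matrix(x, radius=0.5)
print(percent_recurrence(R), percent_determinism(R))
```

For the periodic toy signal every recurrent point falls on a diagonal line, so PD reaches 100%, which is the signature of deterministic (non-random) dynamics that RPQ looks for.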

  9. Hotspots of aberrant enhancer activity punctuate the colorectal cancer epigenome

    PubMed Central

    Cohen, Andrea J.; Saiakhova, Alina; Corradin, Olivia; Luppino, Jennifer M.; Lovrenert, Katreya; Bartels, Cynthia F.; Morrow, James J.; Mack, Stephen C.; Dhillon, Gursimran; Beard, Lydia; Myeroff, Lois; Kalady, Matthew F.; Willis, Joseph; Bradner, James E.; Keri, Ruth A.; Berger, Nathan A.; Pruett-Miller, Shondra M.; Markowitz, Sanford D.; Scacheri, Peter C.

    2017-01-01

    In addition to mutations in genes, aberrant enhancer element activity at non-coding regions of the genome is a key driver of tumorigenesis. Here, we perform epigenomic enhancer profiling of a cohort of more than forty genetically diverse human colorectal cancer (CRC) specimens. Using normal colonic crypt epithelium as a comparator, we identify enhancers with recurrently gained or lost activity across CRC specimens. Of the enhancers highly recurrently activated in CRC, most are constituents of super enhancers, are occupied by AP-1 and cohesin complex members, and originate from primed chromatin. Many activate known oncogenes, and CRC growth can be mitigated through pharmacologic inhibition or genome editing of these loci. Nearly half of all GWAS CRC risk loci co-localize to recurrently activated enhancers. These findings indicate that the CRC epigenome is defined by highly recurrent epigenetic alterations at enhancers which activate a common, aberrant transcriptional programme critical for CRC growth and survival. PMID:28169291

  10. Effects of surgery and anesthetic choice on immunosuppression and cancer recurrence.

    PubMed

    Kim, Ryungsa

    2018-01-18

    The relationship between surgery and anesthetic-induced immunosuppression and cancer recurrence remains unresolved. Surgery and anesthesia stimulate the hypothalamic-pituitary-adrenal (HPA) axis and sympathetic nervous system (SNS) to cause immunosuppression through several tumor-derived soluble factors. The potential impact of surgery and anesthesia on cancer recurrence was reviewed to provide guidance for cancer surgical treatment. PubMed was searched up to December 31, 2016 using search terms such as, "anesthetic technique and cancer recurrence," "regional anesthesia and cancer recurrence," "local anesthesia and cancer recurrence," "anesthetic technique and immunosuppression," and "anesthetic technique and oncologic surgery." Surgery-induced stress responses and surgical manipulation enhance tumor metastasis via release of angiogenic factors and suppression of natural killer (NK) cells and cell-mediated immunity. Intravenous agents such as ketamine and thiopental suppress NK cell activity, whereas propofol does not. Ketamine induces T-lymphocyte apoptosis but midazolam does not affect cytotoxic T-lymphocytes. Volatile anesthetics suppress NK cell activity, induce T-lymphocyte apoptosis, and enhance angiogenesis through hypoxia inducible factor-1α (HIF-1α) activity. Opioids suppress NK cell activity and increase regulatory T cells. Local anesthetics such as lidocaine increase NK cell activity. Anesthetics such as propofol and locoregional anesthesia, which decrease surgery-induced neuroendocrine responses through HPA-axis and SNS suppression, may cause less immunosuppression and recurrence of certain types of cancer compared to volatile anesthetics and opioids.

  11. Recurrence of random walks with long-range steps generated by fractional Laplacian matrices on regular networks and simple cubic lattices

    NASA Astrophysics Data System (ADS)

    Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.

    2017-12-01

    We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^(α/2), where L denotes a 'simple' Laplacian matrix. We refer to such walks as 'fractional random walks', with admissible interval 0 < α ≤ 2. We deduce probability-generating functions (network Green's functions) for the fractional random walk. From these analytical results we establish a generalization of Polya's recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for lattice dimensions d > α (recurrent for d ≤ α). As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, ..., and in the range 1 ≤ α < 2 it is transient for dimensions d ≥ 2. Finally, for α = 2, Polya's classical recurrence theorem is recovered, namely the walk is transient only for lattice dimensions d ≥ 3. The generalization of Polya's recurrence theorem remains valid for the class of random walks with Lévy flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain, for the transient regime 0 < α < 1, closed-form expressions for the fractional lattice Green's function matrix containing the escape and ever-passage probabilities. The ever-passage probabilities (fractional lattice Green's functions) in the transient regime follow Riesz potential power-law decay asymptotics for nodes far from the departure node. The non-locality of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix, with Lévy-type heavy-tailed inverse power-law decay for the probability of long-range moves. This non-local and asymptotic behavior of the fractional random walk introduces small-world properties, with the emergence of Lévy flights on large (infinite) lattices.
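    For the classical α = 2 case recovered above, Polya's dichotomy (recurrent for d ≤ 2, transient for d ≥ 3) can be illustrated with a short Monte Carlo experiment. The sketch below is illustrative only: the step budget, trial count, and seed are arbitrary choices, and a finite-step simulation can only approximate the asymptotic recurrence/transience statement.

```python
import random

def return_within(dim, steps, trials, seed=1):
    """Fraction of simple random walks on the Z^dim lattice that revisit the
    origin within `steps` steps (the classical alpha = 2 walk)."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)          # pick a lattice direction
            pos[axis] += rng.choice((-1, 1))   # take a unit step
            if all(c == 0 for c in pos):
                returned += 1
                break
    return returned / trials

p1 = return_within(dim=1, steps=2000, trials=500)
p3 = return_within(dim=3, steps=2000, trials=500)
print(p1, p3)  # the d=1 walk returns far more often than the transient d=3 walk
```

The d = 1 estimate approaches 1 as the step budget grows, while the d = 3 estimate stays bounded well below 1, consistent with transience in three dimensions.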

  12. AAAIC '88 - Aerospace Applications of Artificial Intelligence; Proceedings of the Fourth Annual Conference, Dayton, OH, Oct. 25-27, 1988. Volumes 1 and 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, J.R. (Netrologic, Inc., San Diego, CA)

    1988-01-01

    Topics presented include integrating neural networks and expert systems, neural networks and signal processing, machine learning, cognition and avionics applications, artificial intelligence and man-machine interface issues, real-time expert systems, artificial intelligence, and engineering applications. Also considered are advanced problem-solving techniques, combinatorial optimization for scheduling and resource control, data fusion/sensor fusion, backpropagation with momentum, shared weights and recurrency, automatic target recognition, cybernetics, and optical neural networks.

  13. A Novel Connectionist Network for Solving Long Time-Lag Prediction Tasks

    NASA Astrophysics Data System (ADS)

    Johnson, Keith; MacNish, Cara

    Traditional Recurrent Neural Networks (RNNs) perform poorly on learning tasks involving long time-lag dependencies. More recent approaches such as LSTM and its variants significantly improve on RNNs' ability to learn this type of problem. We present an alternative approach to encoding temporal dependencies that associates temporal features with nodes rather than state values, where the nodes explicitly encode dependencies over variable time delays. We show promising results comparing the network's performance to LSTM variants on an extended Reber grammar task.

  14. Dynamic afferent synapses to decision-making networks improve performance in tasks requiring stimulus associations and discriminations

    PubMed Central

    Bourjaily, Mark A.

    2012-01-01

    Animals must often make opposing responses to similar complex stimuli. Multiple sensory inputs from such stimuli combine to produce stimulus-specific patterns of neural activity. It is the differences between these activity patterns, even when small, that provide the basis for any differences in behavioral response. In the present study, we investigate three tasks with differing degrees of overlap in the inputs, each with just two response possibilities. We simulate behavioral output via winner-takes-all activity in one of two pools of neurons forming a biologically based decision-making layer. The decision-making layer receives inputs either in a direct stimulus-dependent manner or via an intervening recurrent network of neurons that form the associative layer, whose activity helps distinguish the stimuli of each task. We show that synaptic facilitation of synapses to the decision-making layer improves performance in these tasks, robustly increasing accuracy and speed of responses across multiple configurations of network inputs. Conversely, we find that synaptic depression worsens performance. In a linearly nonseparable task with exclusive-or logic, the benefit of synaptic facilitation lies in its superlinear transmission: effective synaptic strength increases with presynaptic firing rate, which enhances the already present superlinearity of presynaptic firing rate as a function of stimulus-dependent input. In linearly separable single-stimulus discrimination tasks, we find that facilitating synapses are always beneficial because synaptic facilitation always enhances any differences between inputs. Thus we predict that for optimal decision-making accuracy and speed, synapses from sensory or associative areas to decision-making or premotor areas should be facilitating. PMID:22457467
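    The superlinear transmission invoked above can be sketched with a Tsodyks-Markram-style facilitation variable: the steady-state utilization u grows with presynaptic firing rate, so the effective drive (rate × u) grows faster than linearly. The update rule and parameters (U, tau_f) below are generic textbook choices, not values taken from this study.

```python
import math

def steady_state_u(rate_hz, U=0.2, tau_f=1.0, n_spikes=200):
    """Steady-state facilitation variable u for a regular spike train.
    At each spike u jumps by U*(1 - u); between spikes it decays
    exponentially back to the baseline U with time constant tau_f.
    (Illustrative parameters; not fitted to the study's model.)"""
    u = U
    isi = 1.0 / rate_hz                    # inter-spike interval in seconds
    decay = math.exp(-isi / tau_f)
    for _ in range(n_spikes):              # iterate the map to convergence
        u_after_spike = u + U * (1.0 - u)
        u = U + (u_after_spike - U) * decay
    return u

# effective synaptic drive (rate * u) grows superlinearly with rate
for rate in (5, 10, 20, 40):
    print(rate, round(rate * steady_state_u(rate), 3))
```

Because u itself rises with rate, an 8-fold increase in presynaptic rate yields more than an 8-fold increase in effective drive, which is the enhancement of input differences described above.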

  15. Identification of cyclin B1 and Sec62 as biomarkers for recurrence in patients with HBV-related hepatocellular carcinoma after surgical resection.

    PubMed

    Weng, Li; Du, Juan; Zhou, Qinghui; Cheng, Binbin; Li, Jun; Zhang, Denghai; Ling, Changquan

    2012-06-08

    Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide. Frequent tumor recurrence after surgery is related to its poor prognosis. Although gene expression signatures have been associated with outcome, the molecular basis of HCC recurrence is not fully understood, and there is no method to predict recurrence using peripheral blood mononuclear cells (PBMCs), which can be easily obtained for recurrence prediction in the clinical setting. Based on the microarray analysis results, we constructed a co-expression network using the k-core algorithm to determine which genes play pivotal roles in the recurrence of HCC associated with hepatitis B virus (HBV) infection. Furthermore, we evaluated the mRNA and protein expression in PBMCs from 80 patients with or without recurrence and 30 healthy subjects. The stability of the signatures was determined in HCC tissues from the same 80 patients. Data analysis included ROC analysis, correlation analysis, log-rank tests, and Cox modeling to identify independent predictors of tumor recurrence. The tumor-associated proteins cyclin B1, Sec62, and Birc3 were highly expressed in a subset of samples of recurrent HCC; cyclin B1, Sec62, and Birc3 positivity was observed in 80%, 65.7%, and 54.2% of the samples, respectively. The Kaplan-Meier analysis revealed that high expression levels of these proteins were associated with significantly reduced recurrence-free survival. Cox proportional hazards model analysis revealed that cyclin B1 (hazard ratio [HR], 4.762; p = 0.002) and Sec62 (HR, 2.674; p = 0.018) were independent predictors of HCC recurrence. These results revealed that cyclin B1 and Sec62 may be candidate biomarkers and potential therapeutic targets for HBV-related HCC recurrence after surgery.

  16. A Tradeoff Between Accuracy and Flexibility in a Working Memory Circuit Endowed with Slow Feedback Mechanisms.

    PubMed

    Pereira, Jacinto; Wang, Xiao-Jing

    2015-10-01

    Recent studies have shown that the reverberation underlying mnemonic persistent activity must be slow, to ensure the stability of a working memory system and to give rise to long neural transients capable of accumulating information over time. But is a slower underlying process always better? To address this question, we investigated 3 slow biophysical mechanisms that are activity-dependent and prominently present in the prefrontal cortex: depolarization-induced suppression of inhibition (DSI), calcium-dependent nonspecific cationic current (ICAN), and short-term facilitation. Using a spiking network model for spatial working memory, we found that these processes enhance memory accuracy by counteracting noise-induced drifts, heterogeneity-induced biases, and distractors. Furthermore, the incorporation of DSI and ICAN enlarges the range of the network's parameter values required for working memory function. However, when a progressively slower process dominates the network, it becomes increasingly more difficult to erase a memory trace. We demonstrate this accuracy-flexibility tradeoff quantitatively and interpret it using a state-space analysis. Our results support the scenario where N-methyl-d-aspartate receptor-dependent recurrent excitation is the workhorse for the maintenance of persistent activity, whereas slow synaptic or cellular processes contribute to the robustness of mnemonic function in a tradeoff that can potentially be adjusted according to behavioral demands.

  17. Different Gene Expression and Activity Pattern of Antioxidant Enzymes in Bladder Cancer.

    PubMed

    Wieczorek, Edyta; Jablonowski, Zbigniew; Tomasik, Bartlomiej; Gromadzinska, Jolanta; Jablonska, Ewa; Konecki, Tomasz; Fendler, Wojciech; Sosnowski, Marek; Wasowicz, Wojciech; Reszka, Edyta

    2017-02-01

    The aim of this study was to evaluate the possible role and contribution of antioxidant enzymes in bladder cancer (BC) etiology and recurrence after transurethral resection (TUR). We enrolled 40 patients with BC who underwent TUR and 100 sex- and age-matched healthy controls. The analysis was performed at diagnosis and recurrence, taking into account the time of recurrence. Gene expression of catalase (CAT), glutathione peroxidase 1 (GPX1) and manganese superoxide dismutase (SOD2) was determined in peripheral blood leukocytes. The activity of glutathione peroxidase 3 (GPX3) was examined in plasma, and that of GPX1 and copper-zinc-containing superoxide dismutase 1 (SOD1) in erythrocytes. SOD2 and GPX1 expression and GPX1 and SOD1 activity were significantly higher in patients at diagnosis of BC in comparison to controls. In patients who had recurrence earlier than 1 year from TUR, CAT and SOD2 expression was lower (at diagnosis p=0.024 and p=0.434, at recurrence p=0.022 and p=0.010), while GPX1 and GPX3 activity was higher (at diagnosis p=0.242 and p=0.394, at recurrence p=0.019 and p=0.025) compared to patients with recurrence after 1 year from TUR. This study revealed that the gene expression and activity of antioxidant enzymes are elevated in the blood of patients with BC, although low CAT expression might contribute to the recurrence of BC in early prognosis.

  18. Ablation as targeted perturbation to rewire communication network of persistent atrial fibrillation

    PubMed Central

    Tao, Susumu; Way, Samuel F.; Garland, Joshua; Chrispin, Jonathan; Ciuffo, Luisa A.; Balouch, Muhammad A.; Nazarian, Saman; Spragg, David D.; Marine, Joseph E.; Berger, Ronald D.; Calkins, Hugh

    2017-01-01

    Persistent atrial fibrillation (AF) can be viewed as disintegrated patterns of information transmission by action potential across the communication network consisting of nodes linked by functional connectivity. To test the hypothesis that ablation of persistent AF is associated with improvement in both local and global connectivity within the communication networks, we analyzed multi-electrode basket catheter electrograms of 22 consecutive patients (63.5 ± 9.7 years, 78% male) during persistent AF before and after the focal impulse and rotor modulation-guided ablation. Eight patients (36%) developed recurrence within 6 months after ablation. We defined communication networks of AF by nodes (cardiac tissue adjacent to each electrode) and edges (mutual information between pairs of nodes). To evaluate patient-specific parameters of communication, thresholds of mutual information were applied to preserve 10% to 30% of the strongest edges. There was no significant difference in network parameters between both atria at baseline. Ablation effectively rewired the communication network of persistent AF to improve the overall connectivity. In addition, successful ablation improved local connectivity by increasing the average clustering coefficient, and also improved global connectivity by decreasing the characteristic path length. As a result, successful ablation improved the efficiency and robustness of the communication network by increasing the small-world index. These changes were not observed in patients with AF recurrence. Furthermore, a significant increase in the small-world index after ablation was associated with synchronization of the rhythm by acute AF termination. In conclusion, successful ablation rewires communication networks during persistent AF, making them more robust, efficient, and easier to synchronize. Quantitative analysis of communication networks provides not only a mechanistic insight that AF may be sustained by spatially localized sources and global connectivity, but also patient-specific metrics that could serve as a valid endpoint for therapeutic interventions. PMID:28678805
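    Two of the network metrics used in this record, the average clustering coefficient (local connectivity) and the characteristic path length (global connectivity), can be computed in a few lines of pure Python. The small ring-lattice graph below is a toy stand-in for an electrode network, not patient data.

```python
from collections import deque

def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as {node: set_of_neighbours}."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # count edges among v's neighbours (each unordered pair once)
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def char_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs, via BFS."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# toy stand-in for an electrode network: 6 nodes on a ring, each linked to
# its neighbours at ring distance 1 and 2
adj = {i: set() for i in range(6)}
for i in range(6):
    for d in (1, 2):
        adj[i].add((i + d) % 6)
        adj[i].add((i - d) % 6)

print(round(avg_clustering(adj), 3), round(char_path_length(adj), 3))
```

A small-world index can then be formed by normalizing these two numbers against their values for a degree-matched random graph (high clustering with short paths yields a large index), which is the quantity the study tracks before and after ablation.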

  19. De Novo Design of Bioactive Small Molecules by Artificial Intelligence

    PubMed Central

    Merk, Daniel; Friedrich, Lukas; Grisoni, Francesca

    2018-01-01

    Generative artificial intelligence offers a fresh view on molecular design. We present the first‐time prospective application of a deep learning model for designing new druglike compounds with desired activities. For this purpose, we trained a recurrent neural network to capture the constitution of a large set of known bioactive compounds represented as SMILES strings. By transfer learning, this general model was fine‐tuned on recognizing retinoid X and peroxisome proliferator‐activated receptor agonists. We synthesized five top‐ranking compounds designed by the generative model. Four of the compounds revealed nanomolar to low‐micromolar receptor modulatory activity in cell‐based assays. Apparently, the computational model intrinsically captured relevant chemical and biological knowledge without the need for explicit rules. The results of this study advocate generative artificial intelligence for prospective de novo molecular design, and demonstrate the potential of these methods for future medicinal chemistry. PMID:29319225

  20. A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation.

    PubMed

    Fiebig, Florian; Lansner, Anders

    2017-01-04

    A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). 
We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and neurophysiology of the underlying cortical tissue. These findings are directly relevant to the ongoing paradigm shift in the WM field. Copyright © 2017 Fiebig and Lansner.
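The fast Hebbian short-term potentiation mechanism invoked above can be caricatured in a few lines (this is an illustrative toy, not the authors' spiking cortical model): a weight grows while pre- and postsynaptic units are co-active and decays back toward baseline on a timescale of seconds. All constants are made-up placeholders.

```python
# Illustrative toy (not the authors' spiking model): fast Hebbian short-term
# potentiation as a weight that grows under pre/post co-activity and decays
# back toward baseline on a timescale of seconds. All constants are made up.

def stp_update(w, pre, post, w_base=0.0, gain=0.5, tau=5.0, dt=0.1):
    """One Euler step of dw/dt = gain*pre*post - (w - w_base)/tau."""
    return w + dt * (gain * pre * post - (w - w_base) / tau)

def simulate(steps, coactive_until, dt=0.1):
    """Potentiate while pre and post fire together, then let the weight decay."""
    w, trace = 0.0, []
    for t in range(steps):
        pre = post = 1.0 if t < coactive_until else 0.0
        w = stp_update(w, pre, post, dt=dt)
        trace.append(w)
    return trace
```

The decay back to baseline is what distinguishes such short-term traces from persistent-activity storage: the memory outlives the activity, but not indefinitely.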

  2. RECURRENT NEONATAL SEIZURES RESULT IN LONG-TERM INCREASE OF NEURONAL NETWORK EXCITABILITY IN THE RAT NEOCORTEX

    PubMed Central

    Isaeva, Elena; Isaev, Dmytro; Savrasova, Alina; Khazipov, Rustem; Holmes, Gregory L.

    2011-01-01

    Neonatal seizures are associated with a high likelihood of adverse neurological outcomes, including mental retardation, behavioral disorders, and epilepsy. Early seizures typically involve the neocortex, and post-neonatal epilepsy is often of neocortical origin. However, our understanding of the consequences of neonatal seizures for neocortical function is limited. In the present study, we show that neonatal seizures induced by flurothyl result in markedly enhanced susceptibility of the neocortex to seizure-like activity. This change occurs in young rats studied weeks after the last induced seizure and in adult rats studied months after the initial seizures. Neonatal seizures resulted in reductions in the amplitude of spontaneous inhibitory postsynaptic currents and the frequency of miniature inhibitory postsynaptic currents, and significant increases in the amplitude and frequency of spontaneous excitatory postsynaptic currents (sEPSCs) and in the frequency of miniature excitatory postsynaptic currents (mEPSCs) in pyramidal cells of layer 2/3 of the somatosensory cortex. The selective N-methyl-d-aspartate (NMDA) receptor antagonist D-2-amino-5-phosphonovalerate eliminated the differences in amplitude and frequency of sEPSCs and mEPSCs in the control and flurothyl groups, suggesting that NMDA receptors contribute significantly to the enhanced excitability seen in slices from rats that experienced recurrent neonatal seizures. Taken together, our results suggest that recurrent seizures in infancy result in a persistent enhancement of neocortical excitability. PMID:20384780

  3. Neural network decoder for quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
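The decoding task such a network must learn can be illustrated, independent of any neural architecture, on the smallest stabilizer code. A hedged sketch for the 3-qubit bit-flip repetition code, where the two stabilizers Z1Z2 and Z2Z3 yield a 2-bit syndrome identifying any single bit-flip error (this lookup table is the exact map an RNN decoder would have to approximate for this toy code):

```python
# Toy illustration of the decoding task (not the paper's neural network):
# for the 3-qubit bit-flip repetition code, the stabilizers Z1Z2 and Z2Z3
# give a 2-bit syndrome that uniquely identifies any single bit-flip error.

def syndrome(state):
    """Parities of neighbouring bits, i.e. the Z1Z2 and Z2Z3 measurements."""
    return (state[0] ^ state[1], state[1] ^ state[2])

# Syndrome -> index of the qubit to flip (None: no correction needed).
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def decode(state):
    """Apply the correction indicated by the measured syndrome."""
    corrected = list(state)
    fix = CORRECTION[syndrome(state)]
    if fix is not None:
        corrected[fix] ^= 1
    return tuple(corrected)
```

A neural decoder generalizes this lookup to codes where no efficient syndrome-to-correction map is known, and to sequences of repeated, noisy syndrome measurements, which is where a recurrent architecture becomes relevant.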

  4. A long non-coding RNA expression profile can predict early recurrence in hepatocellular carcinoma after curative resection.

    PubMed

    Lv, Yufeng; Wei, Wenhao; Huang, Zhong; Chen, Zhichao; Fang, Yuan; Pan, Lili; Han, Xueqiong; Xu, Zihai

    2018-06-20

    The aim of this study was to develop a novel long non-coding RNA (lncRNA) expression signature to accurately predict early recurrence for patients with hepatocellular carcinoma (HCC) after curative resection. Using expression profiles downloaded from The Cancer Genome Atlas database, we identified multiple lncRNAs with differential expression between the early recurrence (ER) and non-early recurrence (non-ER) groups of HCC. A least absolute shrinkage and selection operator (LASSO) logistic regression model was used to develop a lncRNA-based classifier for predicting ER in the training set. An independent test set was used to validate the predictive value of this classifier. Furthermore, a co-expression network based on these lncRNAs and their highly related genes was constructed, and Gene Ontology and Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses of the genes in the network were performed. We identified 10 differentially expressed lncRNAs, including 3 that were upregulated and 7 that were downregulated in the ER group. The lncRNA-based classifier was constructed from 7 lncRNAs (AL035661.1, PART1, AC011632.1, AC109588.1, AL365361.1, LINC00861 and LINC02084); its accuracy was 0.83 in the training set, 0.87 in the test set and 0.84 in the total set. ROC curve analysis showed an AUROC of 0.741 in the training set, 0.824 in the test set and 0.765 in the total set. Functional enrichment analysis suggested that genes highly related to 4 of the lncRNAs are involved in the immune system. This 7-lncRNA expression profile can effectively predict early recurrence after surgical resection for HCC. This article is protected by copyright. All rights reserved.

  5. Personality and social support as predictors of first and recurrent episodes of depression.

    PubMed

    Noteboom, Annemieke; Beekman, Aartjan T F; Vogelzangs, Nicole; Penninx, Brenda W J H

    2016-01-15

    Depression is a prevalent psychiatric disorder with high personal and public health consequences, partly due to a high risk of recurrence. This longitudinal study examines personality traits and structural and subjective social support dimensions as predictors of first and recurrent episodes of depression in initially non-depressed subjects. Data were obtained from the Netherlands Study of Depression and Anxiety (NESDA). 1085 respondents without a current depression or anxiety diagnosis were included; 437 respondents had a prior history of depression, 648 did not. Personality dimensions were measured with the NEO-FFI; network size, partner status, and negative and positive emotional support were measured with the Close Person Questionnaire. Logistic regression analyses (unadjusted and adjusted for clinical and sociodemographic variables) examined whether these psychosocial variables predict a new episode of depression at two-year follow-up and whether this differed among persons with or without a history of depression. In the unadjusted analyses, high extraversion (OR: .93, 95% CI (.91-.96), P<.001), agreeableness (OR: .94, 95% CI (.90-.97), P<.001), conscientiousness (OR: .93, 95% CI (.90-.96), P<.001) and a larger network size (OR: .76, 95% CI (.64-.90), P=.001) significantly reduced the risk of a new episode of depression. Only neuroticism predicted a new episode of depression in both the unadjusted (OR: 1.13, 95% CI (1.10-1.15), P<.001) and adjusted analyses (OR: 1.06, 95% CI (1.03-1.10), P<.001). None of the predictors predicted first or recurrent episodes of depression differently. We used a relatively short follow-up period and broad personality dimensions. Neuroticism seems to predict both first and recurrent episodes of depression and may be suitable for screening for preventive interventions. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    NASA Astrophysics Data System (ADS)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design a robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.

  7. Auto-Associative Recurrent Neural Networks and Long Term Dependencies in Novelty Detection for Audio Surveillance Applications

    NASA Astrophysics Data System (ADS)

    Rossi, A.; Montefoschi, F.; Rizzo, A.; Diligenti, M.; Festucci, C.

    2017-10-01

    Machine learning applied to automatic audio surveillance has been attracting increasing attention in recent years. In spite of several investigations based on a large number of different approaches, little attention has been paid to the temporal evolution of the environmental input signal. In this work, we propose an exploration in this direction, comparing the temporal correlations extracted at the feature level with those learned by a representational structure. To this aim, we analysed the prediction performance of a Recurrent Neural Network architecture while varying the length of the processed input sequence and the size of the time window used in the feature extraction. Results corroborated the hypothesis that sequential models work better when dealing with data characterized by temporal order. However, the optimization of the temporal dimension so far remains an open issue.

  8. On the correlation between reservoir metrics and performance for time series classification under the influence of synaptic plasticity.

    PubMed

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-01-01

    Reservoir computing provides a simpler paradigm of training recurrent networks by initialising and adapting the recurrent connections separately from a supervised linear readout. This creates a problem, though. Because the recurrent weights and topology no longer adapt to the task, the burden is on the reservoir designer to construct an effective network that happens to produce state vectors that can be mapped linearly onto the desired outputs. Guidance in forming a reservoir can come from established metrics that link a number of theoretical properties of the reservoir computing paradigm to quantitative measures for evaluating the effectiveness of a given design. We provide a comprehensive empirical study of four metrics: class separation, kernel quality, the Lyapunov exponent and the spectral radius. These metrics are each compared over a number of repeated runs for different reservoir computing set-ups that include three types of network topology and three mechanisms of weight adaptation through synaptic plasticity. Each combination of these methods is tested on two time-series classification problems. We find that the two metrics that correlate most strongly with classification performance are the Lyapunov exponent and kernel quality. It is also evident from the comparisons that these two metrics measure a similar property of the reservoir dynamics. We also find that class separation and spectral radius are both less reliable and less effective in predicting performance.
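Of the four metrics, the spectral radius is the simplest to compute when constructing a reservoir. A minimal dependency-free sketch (not the paper's code) that estimates the spectral radius from the average geometric growth rate under repeated matrix-vector multiplication, and rescales a random reservoir to a target value:

```python
import math
import random

# Minimal sketch (not the paper's code): estimate a reservoir's spectral
# radius and rescale a random weight matrix to a target radius.

def spectral_radius(W, iters=300, seed=1):
    """Estimate |largest eigenvalue| of the square matrix W.

    Averaging log growth rates over many multiplications is robust even
    when the dominant eigenvalue is a complex pair, a case in which plain
    power iteration oscillates rather than converges.
    """
    n = len(W)
    rng = random.Random(seed)
    v = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    log_growth = 0.0
    for _ in range(iters):
        w = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        log_growth += math.log(norm)
        v = [x / norm for x in w]
    return math.exp(log_growth / iters)

def scaled_reservoir(n, rho, seed=0):
    """Draw a uniform random reservoir and rescale it to spectral radius rho."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
    r = spectral_radius(W)
    return [[rho * x / r for x in row] for row in W]
```

Rescaling to a radius just below 1 is the common "echo state" heuristic; the study's point is precisely that such a static metric predicts performance less reliably than dynamics-based measures.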

  9. CYP2D6 activity and the risk of recurrence of Plasmodium vivax malaria in the Brazilian Amazon: a prospective cohort study.

    PubMed

    Brasil, Larissa W; Rodrigues-Soares, Fernanda; Santoro, Ana B; Almeida, Anne C G; Kühn, Andrea; Ramasawmy, Rajendranath; Lacerda, Marcus V G; Monteiro, Wuelton M; Suarez-Kurtz, Guilherme

    2018-02-01

    The CYP2D6 pathway mediates the activation of primaquine into its active metabolite(s) in hepatocytes. CYP2D6 is highly polymorphic, encoding isoforms with normal, reduced, null or increased activity. It is hypothesized that Plasmodium vivax malaria patients with defective CYP2D6 function are at increased risk of primaquine failing to prevent recurrence. The aim of this study was to investigate the association of CYP2D6 polymorphisms and inferred CYP2D6 phenotypes with malaria recurrence in patients from the Western Brazilian Amazon following chloroquine/primaquine combined therapy. The prospective cohort consisted of P. vivax malaria patients who were followed for 6 months after completion of the chloroquine/primaquine therapy. Recurrence was defined as one or more malaria episodes 28-180 days after the initial episode. Genotyping for nine CYP2D6 SNPs and copy number variation was performed using TaqMan assays in a Fast 7500 Real-Time System. CYP2D6 star alleles (haplotypes), diplotypes and phenotypes were inferred, and the activity score system was used to define the functionality of the CYP2D6 diplotypes. CYP2D6 activity scores (AS) were dichotomized at ≤ 1 (gPM, gIM and gNM-S phenotypes) and ≥ 1.5 (gNM-F and gUM phenotypes). Genotyping was successfully performed in 190 patients (44 with recurrence and 146 without). Recurrence incidence was higher in individuals presenting reduced-activity CYP2D6 phenotypes (adjusted relative risk = 1.89, 95% CI 1.01-3.70; p = 0.049). The attributable risk and population attributable fraction were 11.5 and 9.9%, respectively. The time elapsed from the first P. vivax malaria episode until recurrence did not differ between patients with AS ≤ 1 versus ≥ 1.5 (p = 0.917). The results suggest that CYP2D6 polymorphisms are associated with an increased risk of recurrence of vivax malaria following chloroquine/primaquine combined therapy. 
This association is interpreted as the result of reduced conversion of primaquine into its active metabolites in patients with reduced CYP2D6 enzymatic activity.
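The activity-score dichotomization used in the study can be sketched as a small helper. The per-allele activity values below are illustrative placeholders, not the study's genotyping tables:

```python
# Hypothetical helper mirroring the study's dichotomization of CYP2D6
# activity scores (AS <= 1 versus AS >= 1.5). The per-allele activity
# values are illustrative placeholders, not the study's genotyping data.

ALLELE_SCORE = {"*4": 0.0, "*5": 0.0,    # null-function alleles
                "*10": 0.5, "*41": 0.5,  # reduced-function alleles
                "*1": 1.0, "*2": 1.0}    # normal-function alleles

def activity_score(diplotype):
    """Sum the activity values of the two star alleles of a diplotype."""
    a, b = diplotype
    return ALLELE_SCORE[a] + ALLELE_SCORE[b]

def activity_group(diplotype):
    """Dichotomize as in the study: AS <= 1 versus AS >= 1.5."""
    return "reduced" if activity_score(diplotype) <= 1 else "normal/increased"
```

With half-unit allele values, every diplotype score falls on one side of the study's 1 / 1.5 cut without ambiguity.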

  10. Representing Where along with What Information in a Model of a Cortical Patch

    PubMed Central

    Roudi, Yasser; Treves, Alessandro

    2008-01-01

    Behaving in the real world requires flexibly combining and maintaining information about both continuous and discrete variables. In the visual domain, several lines of evidence show that neurons in some cortical networks can simultaneously represent information about the position and identity of objects, and maintain this combined representation when the object is no longer present. The underlying network mechanism for this combined representation is, however, unknown. In this paper, we approach this issue through a theoretical analysis of recurrent networks. We present a model of a cortical network that can retrieve information about the identity of objects from incomplete transient cues, while simultaneously representing their spatial position. Our results show that two factors are important in making this possible: A) a metric organisation of the recurrent connections, and B) a spatially localised change in the linear gain of neurons. Metric connectivity enables a localised retrieval of information about object identity, while gain modulation ensures localisation in the correct position. Importantly, we find that the amount of information that the network can retrieve and retain about identity is strongly affected by the amount of information it maintains about position. This balance can be controlled by global signals that change the neuronal gain. These results show that anatomical and physiological properties, which have long been known to characterise cortical networks, naturally endow them with the ability to maintain a conjunctive representation of the identity and location of objects. PMID:18369416

  11. Deep Recurrent Neural Networks for Supernovae Classification

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10⁴ supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.

  12. Altered trunk muscle recruitment patterns during lifting in individuals in remission from recurrent low back pain.

    PubMed

    Suehiro, Tadanobu; Ishida, Hiroshi; Kobara, Kenichi; Osaka, Hiroshi; Watanabe, Susumu

    2018-04-01

    Changes in the recruitment pattern of trunk muscles may contribute to the development of recurrent or chronic symptoms in people with low back pain (LBP). However, the recruitment pattern of trunk muscles during lifting tasks associated with a high risk of LBP has not been clearly determined in recurrent LBP. The present study aimed to investigate potential differences in trunk muscles recruitment patterns between individuals with recurrent LBP and asymptomatic individuals during lifting. The subjects were 25 individuals with recurrent LBP and 20 asymptomatic individuals. Electromyography (EMG) was used to measure onset time, EMG amplitude, overall activity of abdominal muscles, and overall activity of back muscles during a lifting task. The onsets of the transversus abdominis/internal abdominal oblique and multifidus were delayed in the recurrent LBP group despite remission from symptoms. Additionally, the EMG amplitudes of the erector spinae, as well as the overall activity of abdominal muscles or back muscles, were greater in the recurrent LBP group. No differences in EMG amplitude of the external oblique, transversus abdominis/internal abdominal oblique, and multifidus were found between the groups. Our findings indicate the presence of an altered trunk muscle recruitment pattern in individuals with recurrent LBP during lifting. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talathi, S. S.

    Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data to evaluate our method and demonstrate very promising evaluation results, with overall accuracy close to 100%. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration.

  14. Recurrent Neural Networks for Multivariate Time Series with Missing Values.

    PubMed

    Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan

    2018-04-17

    Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
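The input-decay idea behind GRU-D can be sketched without any deep learning framework: a missing value is imputed between the last observation and the empirical mean, pulled toward the mean as the time since the last observation grows. The decay weight here is a fixed scalar for illustration; in GRU-D it is learned per feature.

```python
import math

# Sketch of GRU-D's input decay (assumption: the decay weight w is a fixed
# scalar here; in GRU-D it is learned per feature from the data).

def decayed_impute(values, mask, times, mean, w=0.5):
    """values: raw readings (ignored where mask == 0); mask: 1 = observed,
    0 = missing; times: timestamps; mean: empirical mean of the feature."""
    out, last_val, last_t = [], mean, times[0]
    for x, m, t in zip(values, mask, times):
        delta = t - last_t                       # time since last observation
        gamma = math.exp(-max(0.0, w * delta))   # decays from 1 toward 0
        out.append(x if m else gamma * last_val + (1.0 - gamma) * mean)
        if m:
            last_val, last_t = x, t
    return out
```

GRU-D additionally feeds the mask and time-interval representations into the GRU itself; this sketch covers only the imputation path, which is what makes the "informative missingness" visible to the model.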

  15. The super-Turing computational power of plastic recurrent neural networks.

    PubMed

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model in which the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the concept of plasticity itself, so the nature of the updates is left unconstrained. In this context, we show that so-called plastic recurrent neural networks (RNNs) are capable of the same precise super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed in any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  16. UArizona at the CLEF eRisk 2017 Pilot Task: Linear and Recurrent Models for Early Depression Detection

    PubMed Central

    Sadeque, Farig; Xu, Dongfang; Bethard, Steven

    2017-01-01

    The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System, as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines when using the same feature sets. PMID:29075167

  17. Analyzing long-term correlated stochastic processes by means of recurrence networks: Potentials and pitfalls

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Donner, Reik V.; Kurths, Jürgen

    2015-02-01

    Long-range correlated processes are ubiquitous, ranging from climate variables to financial time series. One paradigmatic example for such processes is fractional Brownian motion (fBm). In this work, we highlight the potentials and conceptual as well as practical limitations when applying the recently proposed recurrence network (RN) approach to fBm and related stochastic processes. In particular, we demonstrate that the results of a previous application of RN analysis to fBm [Liu et al. Phys. Rev. E 89, 032814 (2014), 10.1103/PhysRevE.89.032814] are mainly due to an inappropriate treatment disregarding the intrinsic nonstationarity of such processes. Complementarily, we analyze some RN properties of the closely related stationary fractional Gaussian noise (fGn) processes and find that the resulting network properties are well-defined and behave as one would expect from basic conceptual considerations. Our results demonstrate that RN analysis can indeed provide meaningful results for stationary stochastic processes, given a proper selection of its intrinsic methodological parameters, whereas it is prone to fail to uniquely retrieve RN properties for nonstationary stochastic processes like fBm.
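The basic ε-recurrence-network construction analyzed here can be sketched in a few lines. Time-delay embedding, which a proper analysis of fBm/fGn requires, is omitted for brevity, and the series and threshold below are illustrative:

```python
# Sketch of the epsilon-recurrence-network construction: two states of the
# series are linked when they lie closer than epsilon. Time-delay embedding,
# required for a proper analysis, is omitted; the data are illustrative.

def recurrence_network(series, eps):
    """Adjacency matrix: A[i][j] = 1 iff |x_i - x_j| < eps and i != j."""
    n = len(series)
    return [[1 if i != j and abs(series[i] - series[j]) < eps else 0
             for j in range(n)] for i in range(n)]

def edge_density(A):
    """Fraction of realised links among all ordered pairs of nodes."""
    n = len(A)
    return sum(map(sum, A)) / (n * (n - 1))
```

The paper's point about nonstationarity translates directly to this construction: for fBm the distances |x_i - x_j| drift with time, so a single ε does not probe a well-defined invariant density, whereas for stationary fGn it does.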

  18. Innovative second-generation wavelets construction with recurrent neural networks for solar radiation forecasting.

    PubMed

    Capizzi, Giacomo; Napoli, Christian; Bonanno, Francesco

    2012-11-01

    Solar radiation prediction is an important challenge for the electrical engineer because it is used to estimate the power developed by commercial photovoltaic modules. This paper deals with the problem of solar radiation prediction based on observed meteorological data. A 2-day forecast is obtained by using novel wavelet recurrent neural networks (WRNNs). These WRNNs are used to exploit the correlation between solar radiation and timescale-related variations of wind speed, humidity, and temperature. The input to the selected WRNN is provided by timescale-related bands of wavelet coefficients obtained from meteorological time series. The experimental setup available at the University of Catania, Italy, provided this information. The novelty of this approach is that the proposed WRNN performs the prediction in the wavelet domain and, in addition, also performs the inverse wavelet transform, giving the predicted signal as output. The obtained simulation results show a very low root-mean-square error compared to the hybrid neural network approaches for solar radiation prediction reported in the recent literature.
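As a stand-in for the timescale-related wavelet bands fed to the WRNN (the paper's second-generation wavelets are constructed differently), a one-level Haar analysis/synthesis pair illustrates the decompose-predict-reconstruct round trip the abstract describes:

```python
import math

# Minimal one-level Haar analysis/synthesis pair, as a stand-in for the
# timescale-related wavelet bands the WRNN consumes. The paper's actual
# second-generation wavelets are constructed differently.

def haar_decompose(x):
    """Split an even-length signal into approximation and detail bands."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert haar_decompose exactly (perfect reconstruction)."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x
```

In the WRNN scheme the network would predict future coefficients band by band, and the synthesis step (here `haar_reconstruct`) would map those predictions back to the signal domain.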

  19. Prediction of Sea Surface Temperature Using Long Short-Term Memory

    NASA Astrophysics Data System (ADS)

    Zhang, Qin; Wang, Hui; Dong, Junyu; Zhong, Guoqiang; Sun, Xin

    2017-10-01

    This letter adopts long short-term memory (LSTM) to predict sea surface temperature (SST); to our knowledge, this is the first attempt to use a recurrent neural network to solve the problem of SST prediction, making one-week and one-month daily predictions. We formulate SST prediction as a time series regression problem. LSTM is a special kind of recurrent neural network that introduces a gate mechanism into the vanilla RNN to prevent the vanishing or exploding gradient problem. It has a strong ability to model the temporal relationships in time series data and can handle the long-term dependency problem well. The proposed network architecture is composed of two kinds of layers: an LSTM layer and a fully connected dense layer. The LSTM layer is utilized to model the time series relationship, and the fully connected layer maps the output of the LSTM layer to a final prediction. We explore the optimal setting of this architecture through experiments and report the accuracy for the coastal seas of China to confirm the effectiveness of the proposed method. In addition, we also show its online updating characteristics.
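The described architecture, an LSTM layer followed by a fully connected readout, can be sketched dependency-free for scalar inputs. The weights below are placeholders; in practice they are learned by backpropagation through time:

```python
import math

# Dependency-free sketch of the architecture the abstract describes: an
# LSTM layer followed by a fully connected readout. The weights here are
# placeholders, not trained values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, p):
    """One LSTM step for scalar input/state (p holds the gate parameters)."""
    i = sigmoid(p["wi"] * x + p["ui"] * h + p["bi"])    # input gate
    f = sigmoid(p["wf"] * x + p["uf"] * h + p["bf"])    # forget gate
    o = sigmoid(p["wo"] * x + p["uo"] * h + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h + p["bg"])  # candidate cell
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

def predict(series, p, w_out=1.0, b_out=0.0):
    """Run the LSTM over the series, then map the last hidden state through
    a fully connected layer to a single prediction."""
    h = c = 0.0
    for x in series:
        h, c = lstm_step(x, h, c, p)
    return w_out * h + b_out

params = {k: 0.5 for k in
          ("wi", "ui", "bi", "wf", "uf", "bf", "wo", "uo", "bo", "wg", "ug", "bg")}
```

The multiplicative gates are what let gradients flow over long lags, which is the property the abstract credits for handling long-term dependencies in SST series.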

  20. Does money matter in inflation forecasting?

    NASA Astrophysics Data System (ADS)

    Binner, J. M.; Tino, P.; Tepper, J.; Anderson, R.; Jones, B.; Kendall, G.

    2010-11-01

    This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely, recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naïve random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
